Dissertations / Theses on the topic 'Domain'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Domain.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Hamrin, Göran. "Effective Domains and Admissible Domain Representations." Doctoral thesis, Uppsala University, Department of Mathematics, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-5883.

Abstract:

This thesis consists of four papers in domain theory and a summary. The first two papers deal with the problem of defining effectivity for continuous cpos. The third and fourth papers present the new notion of an admissible domain representation, where a domain representation D of a space X is λ-admissible if, in principle, all other λ-based domain representations E of X can be reduced to D via a continuous function from E to D.
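
The admissibility notion quoted above can be stated compactly. The following LaTeX fragment is only a sketch of the definition as paraphrased in this abstract; the thesis's precise formulation, which involves the representation maps themselves, may differ:

\[
D \text{ is } \lambda\text{-admissible} \iff \text{for every } \lambda\text{-based domain representation } E \text{ of } X \text{ there exists a continuous } f\colon E \to D \text{ reducing } E \text{ to } D.
\]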

In Paper I we define a cartesian closed category of effective bifinite domains. We also investigate the method of inducing effectivity onto continuous cpos via projection pairs, resulting in a cartesian closed category of projections of effective bifinite domains.

In Paper II we introduce the notion of an almost algebraic basis for a continuous cpo, showing that there is a natural cartesian closed category of effective consistently complete continuous cpos with almost algebraic bases. We also generalise the notion of a complete set, used in Paper I to define the bifinite domains, and investigate which closure results can be obtained.

In Paper III we consider admissible domain representations of topological spaces. We present a characterisation theorem of exactly when a topological space has a λ-admissible and κ-based domain representation. We also show that there is a natural cartesian closed category of countably based and countably admissible domain representations.

In Paper IV we consider admissible domain representations of convergence spaces, where a convergence space is a set X together with a convergence relation between nets on X and elements of X. We study in particular the new notion of weak κ-convergence spaces, which roughly means that the convergence relation satisfies a generalisation of the Kuratowski limit space axioms to cardinality κ. We show that the category of weak κ-convergence spaces is cartesian closed. We also show that the category of weak κ-convergence spaces that have a dense, λ-admissible, κ-continuous and α-based consistently complete domain representation is cartesian closed when α ≤ λ ≥ κ. As natural corollaries we obtain corresponding results for the associated category of weak convergence spaces.

2

Hamrin, Göran. "Effective domains and admissible domain representations /." Uppsala : Department of Mathematics, Uppsala University [distributör], 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-5883.

3

Kucheruk, Liliya. "Modern English Legal Terminology : linguistic and cognitive aspects." Thesis, Bordeaux 3, 2013. http://www.theses.fr/2013BOR30016/document.

Abstract:
The present doctoral dissertation entitled “Modern English Legal Terminology: linguistic and cognitive aspects” investigates the contemporary legal idiom, from a cognitive linguistics perspective. The aim of this study is to map out the peculiarities of English legal terminology and develop principles of systematization, within the framework of conceptual metaphor theory. This means 1) determining the basic concepts used metaphorically in English legal language, and 2) establishing the main cross-domain mappings and correlations between separate items within concrete domains. The Corpus of Legal English (COLE) was set up and a quantitative analysis performed, in which metaphorical expressions related to legal terminology were searched for and classified on the basis of meanings, conceptual domains and mappings. Thus, the conceptual metaphors of WAR, MEDICINE, SPORT and CONSTRUCTION were found to be the most numerous and valuable in Legal English. The main cross-domain mappings between these source domains and the target domain of LAW were established. In order to carry out this data-driven study, 156 legal texts were selected and compiled into the Corpus of Legal English (COLE). The source-texts represent various thematic categories. The COLE was systematically used to interpret frequency counts from the point of view of conceptual metaphor theory.
4

Comitz, Paul H. "A Domain-Specific Language for Aviation Domain Interoperability." NSUWorks, 2013. http://nsuworks.nova.edu/gscis_etd/122.

Abstract:
Modern information systems require a flexible, scalable, and upgradeable infrastructure that allows communication and collaboration between heterogeneous information processing and computing environments. Aviation systems from different organizations often use differing representations and distribution policies for the same data and messages, limiting interoperability and collaboration. Although this problem is conceptually straightforward, information exchange is error-prone, often dramatically underestimated, and unexpectedly complex. In the air traffic domain, complexity is often the result of the many different uncoordinated information processing environments that are used. The complexity and variation in information processing environments result in a barrier between domain practitioners and the engineers who build the information systems. These divisions have contributed to development challenges on high-profile systems such as the FAA's Advanced Automation System and the FBI's Virtual Case File. Operationally, difficulties in sharing information have contributed to significant coordination challenges between organizations. These coordination problems are evident in events such as the response to Hurricane Katrina, the October 2009 Northwest Airlines flight that overflew its scheduled destination by more than 100 miles, and other incidents requiring coordination between multiple organizations. To address interoperability in the aviation domain, a prototype Domain-Specific Language (DSL) for aviation data, an aviation metadata repository, and a data generation capability were designed and implemented. These elements provide the capability to specify and generate data for use in the aviation domain. The DSL was designed to allow the domain practitioner to participate in dynamic information exchange without being burdened by the complexities of information technology and organizational policy. The DSL provides the capability to specify and generate information-system-usable representations of aviation data. Data is generated according to the representational details stored in the aviation metadata repository. The combination of DSL, aviation metadata repository, and data generation provides the capability for aviation systems to interoperate, enabling collaboration, information sharing, and coordination.
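
As a purely hypothetical illustration of the metadata-driven generation idea described above (the message name, fields, and output format below are invented for this sketch and are not taken from the dissertation), a generator can read representational details from a repository entry and emit a concrete, machine-usable layout:

# Hypothetical sketch of metadata-driven data generation in the spirit of the
# DSL + metadata repository + generation pipeline described in the abstract.
import json

metadata = {
    "message": "FlightPosition",  # assumed repository entry
    "fields": [
        {"name": "callsign", "type": "string"},
        {"name": "latitude", "type": "float"},
        {"name": "longitude", "type": "float"},
        {"name": "altitude", "type": "int"},
    ],
}

def generate_schema(entry):
    """Turn a metadata entry into a JSON-Schema-like object description."""
    type_map = {"string": "string", "float": "number", "int": "integer"}
    return {
        "title": entry["message"],
        "type": "object",
        "properties": {f["name"]: {"type": type_map[f["type"]]} for f in entry["fields"]},
    }

print(json.dumps(generate_schema(metadata), indent=2))
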
5

Sankaran, Krishnaswamy. "Accurate domain truncation techniques for time-domain conformal methods /." Zürich : ETH, 2007. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17447.

6

Ding, Ziwei. "Domain functions and domain interactions of CTP:phosphocholine cytidylyltransferase." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0023/MQ51332.pdf.

7

El, Boukkouri Hicham. "Domain adaptation of word embeddings through the exploitation of in-domain corpora and knowledge bases." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG086.

Abstract:
There are, at the basis of most NLP systems, numerical representations that enable the machine to process, interact with and—to some extent—understand human language. These “word embeddings” come in different flavours but can be generally categorised into two distinct groups: on one hand, static embeddings that learn and assign a single definitive representation to each word; and on the other, contextual embeddings that instead learn to generate word representations on the fly, according to a current context. In both cases, training these models requires a large amount of texts. This often leads NLP practitioners to compile and merge texts from multiple sources, often mixing different styles and domains (e.g. encyclopaedias, news articles, scientific articles, etc.) in order to produce corpora that are sufficiently large for training good representations. These so-called “general domain” corpora are today the basis on which most word embeddings are trained, greatly limiting their use in more specific areas. In fact, “specialized domains” like the medical domain usually manifest enough lexical, semantic and stylistic idiosyncrasies (e.g. use of acronyms and technical terms) that general-purpose word embeddings are unable to effectively encode out-of-the-box. In this thesis, we explore how different kinds of resources may be leveraged to train domain-specific representations or further specialise preexisting ones. Specifically, we first investigate how in-domain corpora can be used for this purpose. In particular, we show that both corpus size and domain similarity play an important role in this process and propose a way to leverage a small corpus from the target domain to achieve improved results in low-resource settings. Then, we address the case of BERT-like models and observe that the general-domain vocabularies of these models may not be suited for specialized domains. However, we show evidence that models trained using such vocabularies can be on par with fully specialized systems using in-domain vocabularies—which leads us to accept re-training general domain models as an effective approach for constructing domain-specific systems. We also propose CharacterBERT, a variant of BERT that is able to produce word-level open-vocabulary representations by consulting a word's characters. We show evidence that this architecture leads to improved performance in the medical domain while being more robust to misspellings. Finally, we investigate how external resources in the form of knowledge bases may be leveraged to specialise existing representations. In this context, we propose a simple approach that consists in constructing dense representations of these knowledge bases then combining these knowledge vectors with the target word embeddings. We generalise this approach and propose Knowledge Injection Modules, small neural layers that incorporate external representations into the hidden states of a Transformer-based model. Overall, we show that these approaches can lead to improved results, however, we intuit that this final performance ultimately depends on whether the knowledge that is relevant to the target task is available in the input resource. All in all, our work shows evidence that both in-domain corpora and knowledge may be used to construct better word embeddings for specialized domains. In order to facilitate future research on similar topics, we open-source our code and share pre-trained models whenever appropriate
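
A minimal sketch of the "combine knowledge vectors with the target word embeddings" idea mentioned above, assuming two aligned lookup tables are already available; the vocabulary, dimensions and the plain concatenation strategy are assumptions for illustration, not the thesis's exact method:

import numpy as np

rng = np.random.default_rng(0)

# Assumed pre-trained tables: general-domain word embeddings (dim 300) and
# knowledge-base vectors (dim 100) for the same (medical) vocabulary.
vocab = ["aspirin", "ibuprofen", "fever"]
word_emb = {w: rng.normal(size=300) for w in vocab}
kb_vec = {w: rng.normal(size=100) for w in vocab}

def specialise(word):
    """Concatenate the general-domain embedding with its knowledge vector."""
    return np.concatenate([word_emb[word], kb_vec[word]])

enriched = {w: specialise(w) for w in vocab}
print(enriched["aspirin"].shape)  # (400,)
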
8

Hitchins, Matthew G. "Domain Disparity: Informing the Debate between Domain-General and Domain-Specific Information Processing in Working Memory." Thesis, The George Washington University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10607221.

Abstract:

Working memory is a collection of cognitive resources that allow for the temporary maintenance and manipulation of information. This information can then be used to accomplish task goals in a variety of different contexts. To do this, the working memory system is able to process many different kinds of information using resources dedicated to the processing of those specific types of information. This processing is modulated by a control component which is responsible for guiding actions in the face of interference. Recently, the way in which working memory handles the processing of this information has been the subject of debate. Specifically, current models of working memory differ in their conceptualization of its functional architecture and the interaction between domain-specific storage structures and domain-general control processes. Here, domain-specific processing is when certain components of a model are dedicated to processing certain kinds of information, be it spatial or verbal. Domain-general processing is when a component of a model can process multiple kinds of information. One approach conceptualizes working memory as consisting of various discrete components that are dedicated to processing specific kinds of information. These multiple component models attempt to explain how domain-specific storage structures are coordinated by a domain-general control mechanism. They also predict that capacity variations in those domain-specific storage structures can directly affect the performance of the domain-general control mechanism. Another approach focuses primarily on the contributions of a domain-general control mechanism to behavior. These controlled attention approaches collapse working memory and attention and propose that a domain-general control mechanism is the primary source of individual differences. This means that variations in domain-specific storage structures are not predicted to affect the functioning of the domain-general control mechanism. This dissertation will make the argument that conceptualizing working memory as either domain-specific or domain-general creates a false dichotomy. To do this, different ways of measuring working memory capacity will first be discussed. That discussion will serve as a basis for understanding the differences and similarities between both models. A more detailed exposition of both the multiple component model and controlled attention account will follow. Behavioral and physiological evidence will accompany the descriptions of both models. The emphasis of the evidence presented here will be on load effects: observed changes in task performance when information is maintained in working memory. Load effects can be specific to the type of information being maintained (domain-specific), or occur regardless of information type (domain-general). This dissertation will demonstrate how the two models fail to address evidence for both domain-specific and domain-general load effects. Given these inadequacies, a new set of experiments will be proposed that will seek to demonstrate both domain-specific and domain-general effects within the same paradigm. Being able to demonstrate both these effects will go some way towards accounting for the differing evidence presented in the literature. A brief conceptualization of a possible account to explain these effects will then be discussed. Finally, future directions for research will be described.

9

Scheuffgen, Kristina. "Domain-general and domain-specific deficits in autism and dyslexia." Thesis, University College London (University of London), 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298126.

10

Gale, Andrew J. (Andrew John). "Protein-RNA domain-domain interactions in a tRNA synthetase system." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/39369.

11

Stärk, Martin [Verfasser]. "Control of magnetic domains and domain walls by thermal gradients / Martin Stärk." Konstanz : Bibliothek der Universität Konstanz, 2016. http://d-nb.info/1111565201/34.

12

Jaber, Carine. "Algorithmic approaches to Siegel's fundamental domain." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCK006/document.

Abstract:
Siegel determined a fundamental domain using the Minkowski reduction of quadratic forms. He gave all the details concerning this domain for genus 1. It is the determination of the Minkowski fundamental domain, presented as the second condition, and the maximal height condition, presented as the third condition, that prevent the exact determination of this domain in the general case. The latest results were obtained by Gottschling for genus 2 in 1959. The problem has since remained unexplored and poorly understood, in particular the different regions of Minkowski reduction. In order to identify Siegel's fundamental domain for genus 3, we present some results concerning the third condition of this domain. Every abelian function can be written in terms of rational functions of theta functions and their derivatives. This allows the expression of solutions of integrable systems in terms of theta functions. Such solutions are relevant in the description of surface water waves and nonlinear optics. Because of these applications, Deconinck and Van Hoeij have developed and implemented algorithms for computing the Riemann matrix, and Deconinck et al. have developed the computation of the corresponding theta functions. Deconinck et al. have used Siegel's algorithm to approximately reach the Siegel fundamental domain and have adopted the LLL reduction algorithm to find the shortest lattice vector. However, we opt here to use a Minkowski reduction algorithm up to dimension 5 and an exact determination of the shortest lattice vector for greater dimensions.
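
For context, the three conditions referred to above are usually stated as follows for Z = X + iY in the Siegel upper half-space H_g; this is the textbook formulation, quoted here only as background, while the thesis works out the precise Minkowski regions needed for genus 3:

\[
\begin{aligned}
&(1)\quad |X_{jk}| \le \tfrac{1}{2} \text{ for all } j,k, \\
&(2)\quad Y \text{ is Minkowski-reduced}, \\
&(3)\quad |\det(CZ+D)| \ge 1 \text{ for all } \begin{pmatrix} A & B \\ C & D \end{pmatrix} \in \mathrm{Sp}(2g,\mathbb{Z}) \quad (\text{maximal height}).
\end{aligned}
\]
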
13

Liakata, Maria. "Inducing domain theories." Thesis, University of Oxford, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.413107.

14

Battenfeld, Ingo. "Topological domain theory." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/2214.

Abstract:
This thesis presents Topological Domain Theory as a powerful and flexible framework for denotational semantics. Topological Domain Theory models a wide range of type constructions and can interpret many computational features. Furthermore, it has close connections to established frameworks for denotational semantics, as well as to well-studied mathematical theories, such as topology and computable analysis. We begin by describing the categories of Topological Domain Theory, and their categorical structure. In particular, we recover the basic constructions of domain theory, such as products, function spaces, fixed points and recursive types, in the context of Topological Domain Theory. As a central contribution, we give a detailed account of how computational effects can be modelled in Topological Domain Theory. Following recent work of Plotkin and Power, who proposed to construct effect monads via free algebra functors, this is done by showing that free algebras for a large class of parametrised equational theories exist in Topological Domain Theory. These parametrised equational theories are expressive enough to generate most of the standard examples of effect monads. Moreover, the free algebras in Topological Domain Theory are obtained by an explicit inductive construction, using only basic topological and set-theoretical principles. We also give a comparison of Topological and Classical Domain Theory. The category of omega-continuous dcpos embeds into Topological Domain Theory, and we prove that this embedding preserves the basic domain-theoretic constructions in most cases. We show that the classical powerdomain constructions on omega-continuous dcpos, including the probabilistic powerdomain, can be recovered in Topological Domain Theory. Finally, we give a synthetic account of Topological Domain Theory. We show that Topological Domain Theory is a specific model of Synthetic Domain Theory in the realizability topos over Scott's graph model. We give internal characterisations of the categories of Topological Domain Theory in this realizability topos, and prove the corresponding categories to be internally complete and weakly small. This enables us to show that Topological Domain Theory can model the polymorphic lambda-calculus, and to obtain a richer collection of free algebras than those constructed earlier. In summary, this thesis shows that Topological Domain Theory supports a wide range of semantic constructions, including the standard domain-theoretic constructions, computational effects and polymorphism, all within a single setting.
15

Gharib, Hamid. "Domain data typing." Thesis, University of Newcastle Upon Tyne, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267005.

16

Hackel, Benjamin Joseph. "Fibronectin domain engineering." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/57701.

Abstract:
Molecular recognition reagents are a critical component of targeted therapeutics, in vivo and in vitro diagnostics, and biotechnology applications such as purification, detection, and crystallization. Antibodies have served as the gold standard binding molecule because of their high affinity and specificity and, historically, because of their ability to be generated by immunization. However, antibodies suffer from several shortcomings that hinder their production and reduce their efficacy in a breadth of applications. The tenth type III domain of human fibronectin provides a small, stable, single-domain, cysteine-free protein scaffold upon which molecular recognition capability can be engineered. In the current work, we provide substantial improvements in each phase of protein engineering through directed evolution and develop a complete platform for engineering high affinity binders based on the fibronectin domain. Synthetic combinatorial library design is substantially enhanced through extension of diversity to include three peptide loops with inclusion of loop length diversity. The efficiency of sequence space search is improved by library focusing with tailored diversity for structural bias and binding capacity. Evolution of lead clones was substantially improved through development of recursive dual mutagenesis in which each fibronectin gene is subtly mutated or the binding loops are aggressively mutated and shuffled. This engineering platform enables robust generation of high affinity binders to a multitude of targets. Moreover, the development of this technology is directly applicable to other protein engineering campaigns and advances the scientific understanding of molecular recognition. Binders were engineered to tumor targets carcinoembryonic antigen, CD276, and epidermal growth factor receptor as well as biotechnology targets human serum albumin and goat, mouse, and rabbit immunoglobulin G. Binders have demonstrated utility in affinity purification, laboratory detection, and cellular labeling and delivery. Of particular interest, a panel of domains was engineered that bind multiple epitopes of epidermal growth factor receptor. Select non-competitive heterobivalent combinations of binders effectively downregulate receptor in a non-agonistic manner in multiple cell types. These agents inhibit proliferation and migration and provide a novel potential cancer therapy.
17

Spangenberg, Dirk-Mathys. "Time domain ptychography." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/96735.

Abstract:
ENGLISH ABSTRACT: In this work we investigate a new method to measure the electric field of ultrafast laser pulses by extending a known measurement technique, ptychography, from the spatial domain to the time domain, which we call time domain ptychography. The technique requires the measurement of intensity spectra at different time delays between an unknown temporal object and a known probe pulse. We show for the first time, by measurement and calculation, that this technique can be applied with excellent results to recover both the amplitude and phase of a temporal object. The technique has several advantages, such as fast convergence, a resolution limited only by the usable measured spectral bandwidth, and a recovered phase with no sign ambiguity. We then extend the technique to pulse characterization, where the probe is derived from the temporal object by filtering, meaning the probe pulse is also unknown, but the spectrum of the probe pulse must be the same as the spectrum of the temporal object before filtering. We modify the reconstruction algorithm, now called the ptychographic iterative reconstruction algorithm for time domain pulses (PIRANA), in order to also reconstruct the probe, and we show for the first time that temporal objects, a.k.a. laser pulses, can be reconstructed with this new modality.
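
A much-simplified numpy sketch of the kind of iterative update used in ptychographic reconstruction, transplanted to the time/frequency setting described above. This is the generic PIE-style object update under assumed variable names, not the PIRANA algorithm itself, which also updates the probe:

import numpy as np

def pie_time_update(obj, probe_delayed, measured_amp, alpha=0.5):
    """One object update for a single delay position.

    obj           : current estimate of the complex temporal object (1-D array)
    probe_delayed : known probe pulse, already shifted to this delay
    measured_amp  : square root of the measured intensity spectrum at this delay
    """
    psi = obj * probe_delayed                                # exit field in time
    spec = np.fft.fft(psi)                                   # to the frequency domain
    spec_corr = measured_amp * np.exp(1j * np.angle(spec))   # enforce measured modulus
    psi_corr = np.fft.ifft(spec_corr)                        # back to the time domain
    weight = np.conj(probe_delayed) / (np.max(np.abs(probe_delayed)) ** 2 + 1e-12)
    return obj + alpha * weight * (psi_corr - psi)           # PIE-style correction step
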
18

Torp, Kristoffer. "Recursive domain equations." Thesis, Uppsala universitet, Algebra och geometri, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-451850.

19

Jasný, Vojtěch. "Domain-specific languages." Master's thesis, Vysoká škola ekonomická v Praze, 2009. http://www.nusl.cz/ntk/nusl-15428.

Abstract:
The topic of this thesis is domain-specific languages (DSLs) and their use in software development. The target audience is developers interested in learning more about this progressive area of software development. It starts with a necessary theoretical introduction to programming languages. Then, a classification of DSLs is given and software development methodologies based on DSLs are described, notably Language Oriented Programming and Intentional Programming. Another important piece in the construction of domain-specific languages -- the language workbench -- is also described. In the next chapter, several important tools for DSL creation are presented, described and compared. Each of the tools represents a different possible approach to designing DSLs -- textual, projectional or graphical. The last chapter of the thesis contains a practical example of a DSL implementation in the Meta Programming System by JetBrains and Xtext from Eclipse. A domain-specific language for the description of questionnaires is designed from scratch and a code generator for that language is created. A comparison of the DSL-based technique to traditional software development techniques is given, and the tools used are compared.
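
As a toy illustration of the questionnaire-DSL idea mentioned above (the model structure, field names and generated output are invented for this sketch and do not reflect the thesis's Xtext/MPS implementation):

# Hypothetical questionnaire model plus a tiny generator producing a plain-text
# form, illustrating the DSL + code generator pattern discussed in the abstract.
questionnaire = {
    "title": "Customer survey",
    "questions": [
        {"id": "q1", "text": "How satisfied are you?", "type": "scale", "max": 5},
        {"id": "q2", "text": "Any comments?", "type": "free_text"},
    ],
}

def generate_text_form(model):
    lines = [model["title"], "=" * len(model["title"])]
    for q in model["questions"]:
        if q["type"] == "scale":
            lines.append(f"{q['id']}. {q['text']} [1-{q['max']}]")
        else:
            lines.append(f"{q['id']}. {q['text']} [free text]")
    return "\n".join(lines)

print(generate_text_form(questionnaire))
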
20

Gomes, Reinaldo Cézar de Morais. "Inter domain negotiation." Universidade Federal de Pernambuco, 2010. https://repositorio.ufpe.br/handle/123456789/1775.

Abstract:
In recent years, several technologies have been developed with the goal of facilitating the interaction between users and their devices and improving communication among them. This requires interoperability between these technologies and, consequently, a new network infrastructure that allows better adaptation to the new requirements created by this diversity of technologies. The model of communication between networks is also changing, since networks are expected to be created dynamically in order to make their use easier for users and to allow several operations to be performed automatically (addressing, service discovery, etc.). These networks must be present in several communication scenarios, and one of their main challenges is to allow different kinds of technologies to cooperate in highly dynamic and heterogeneous environments. Their goal is to interconnect different technologies and domains while offering communication that appears homogeneous to their users. For the creation of these future dynamic networks, the key points are the interconnection and cooperation between the technologies involved, which demands the development of solutions that guarantee that new requirements are supported. To allow new requirements to be properly supported, a set of mechanisms to control automatic resource discovery and perform resource configuration is proposed, allowing networks to be created and adapted in a completely automatic way. An inter-domain policy negotiation mechanism is also proposed, responsible for discovering and negotiating new resources to be used by the networks. This brings a new communication model based on the opportunistic creation of networks and, at the same time, allows new communication agreements between administrative domains to be established dynamically, without intervention from users or network administrators.
21

Massaoudi, Imane. "Domain Decomposition Approach for Deterministic/Stochastic EMC Time-Domain Numerical and Experimental Applications. Alleviating the Curse of Dimensionality." Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2023. http://www.theses.fr/2023UCFA0150.

Abstract:
This thesis introduces a novel domain decomposition (DD) method to solve linear stochastic electromagnetic problems in the time domain. Temporal decomposition approaches are already widely used to manage model complexity by performing computations at a local level; however, they often require the exchange of information and simulation results at each time iteration. The proposed technique consists of splitting a global linear system into non-overlapping sub-systems via one or several one-point exchange interfaces. It is based on the evaluation of the impulse responses of each sub-system independently (partial solutions) and their linear combination through convolution products. As no sensitive or proprietary information about each sub-system is required for exchange, the confidentiality of the models is preserved. The method was extensively applied to several configurations of transmission line networks, based on computational simulations and experimental set-ups, to assess its performance and limitations. This comprehensive validation demonstrated the method's efficiency and its potential for more complex linear EMC problems. However, another level of complexity, reflected in the uncertainty dimension, adds to real-world problems. Although the efficiency of the DD technique is demonstrated for stochastic analysis by propagating the uncertainty in the sub-models, the computational cost grows exponentially with the increasing number of random variables in the system. To tackle this challenge, known as the curse of dimensionality, the stochastic collocation method was associated with the domain decomposition approach, based on an offline-online strategy motivated by the asynchronous nature of the DD technique, which allows random variable separation. Numerical validations obtained for transmission line network applications highlight the interest of this original approach, with a dramatic reduction in the evaluation cost of the model.
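
The "impulse responses combined through convolution products" idea lends itself to a small numpy sanity check for two cascaded linear time-invariant sub-systems. This is a toy discrete-time illustration with assumed FIR responses, not the thesis's electromagnetic solver:

import numpy as np

rng = np.random.default_rng(1)

h1 = rng.normal(size=16)   # assumed impulse response of sub-system 1
h2 = rng.normal(size=16)   # assumed impulse response of sub-system 2
x = rng.normal(size=64)    # arbitrary excitation

# Combine the partial solutions once into a global impulse response...
h_global = np.convolve(h1, h2)
y_from_partials = np.convolve(x, h_global)

# ...and check that it matches the direct cascade simulation.
y_direct = np.convolve(np.convolve(x, h1), h2)
assert np.allclose(y_from_partials, y_direct)
print("cascade via convolved impulse responses matches direct simulation")
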
22

Rüßmann, Florian. "The eukaryotic chaperonin TRiC: domain-wise folding of multi-domain proteins." Diss., Ludwig-Maximilians-Universität München, 2013. http://nbn-resolving.de/urn:nbn:de:bvb:19-157246.

23

Parimi, Rohit. "Collaborative filtering approaches for single-domain and cross-domain recommender systems." Diss., Kansas State University, 2015. http://hdl.handle.net/2097/20108.

Abstract:
Increasing amounts of content on the Web mean that users can select from a wide variety of items (i.e., items that concur with their tastes and requirements). The generation of personalized item suggestions to users has become a crucial functionality for many web applications, as users benefit from being shown only items of potential interest to them. One popular solution to creating personalized item suggestions to users is recommender systems. Recommender systems can address the item recommendation task by utilizing past user preferences for items captured as either explicit or implicit user feedback. Numerous collaborative filtering (CF) approaches have been proposed in the literature to address the recommendation problem in the single-domain setting (user preferences from only one domain are used to recommend items). However, increasingly large datasets often prevent experimentation with every approach in order to choose the one that best fits an application domain. The work in this dissertation on the single-domain setting studies two CF algorithms, Adsorption and Matrix Factorization (MF), considered to be state-of-the-art approaches for implicit feedback, and suggests that characteristics of a domain (e.g., close connections versus loose connections among users) or characteristics of data available (e.g., density of the feedback matrix) can be useful in selecting the most suitable CF approach to use for a particular recommendation problem. Furthermore, for Adsorption, a neighborhood-based approach, this work studies several ways to construct user neighborhoods based on similarity functions and on community detection approaches, and suggests that domain and data characteristics can also be useful in selecting the neighborhood approach to use for Adsorption. Finally, motivated by the need to decrease computational costs of recommendation algorithms, this work studies the effectiveness of using short user histories and suggests that short user histories can successfully replace long user histories for recommendation tasks. Although most approaches for recommender systems use user preferences from only one domain, in many applications, user interests span items of various types (e.g., artists and tags). Each recommendation problem (e.g., recommending artists to users or recommending tags to users) can be considered a unique domain, and user preferences from several domains can be used to improve accuracy in one domain, an area of research known as cross-domain recommender systems. The work in this dissertation on cross-domain recommender systems investigates several limitations of existing approaches and proposes three novel approaches (two Adsorption-based and one MF-based) to improve recommendation accuracy in one domain by leveraging knowledge from multiple domains with implicit feedback. The first approach performs aggregation of neighborhoods (WAN) from the source and target domains, and the neighborhoods are used with Adsorption to recommend target items. The second approach performs aggregation of target recommendations (WAR) from Adsorption computed using neighborhoods from the source and target domains. The third approach integrates latent user factors from source domains into the target through a regularized latent factor model (CIMF).
Experimental results on six target recommendation tasks from two real-world applications suggest that the proposed approaches effectively improve target recommendation accuracy as compared to single-domain CF approaches and successfully utilize varying amounts of user overlap between source and target domains. Furthermore, under the assumption that tuning may not be possible for large recommendation problems, this work proposes an approach to calculate knowledge aggregation weights based on network alignment for WAN and WAR approaches, and results show the usefulness of the proposed solution. The results also suggest that the WAN and WAR approaches effectively address the cold-start user problem in the target domain.
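
A minimal sketch of the weighted aggregation idea behind the WAR approach described above, assuming each domain has already produced per-item scores for one user. The weights and score values are invented for illustration; this is not the dissertation's algorithm:

# Hypothetical per-item recommendation scores for one user, produced
# independently from the target domain and from one source domain.
target_scores = {"item_a": 0.40, "item_b": 0.10, "item_c": 0.25}
source_scores = {"item_a": 0.05, "item_b": 0.30, "item_d": 0.20}

def aggregate(target, source, w_target=0.7, w_source=0.3):
    """Weighted aggregation of recommendation scores from two domains."""
    items = set(target) | set(source)
    return {i: w_target * target.get(i, 0.0) + w_source * source.get(i, 0.0)
            for i in items}

combined = aggregate(target_scores, source_scores)
print(sorted(combined, key=combined.get, reverse=True)[:3])
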
24

Mbacke, Sokhna Diarra. "Completeness for domain semirings and star-continuous Kleene algebras with domain." Master's thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/33008.

Abstract:
Due to their increasing complexity, today's computer systems are studied using multiple models and formalisms. Thus, it is necessary to develop theories that unify different approaches in order to limit the risks of errors when moving from one formalism to another. It is in this context that monoids, semirings and Kleene algebras with domain were born about a decade ago. The idea is to define a domain operator on classical algebraic structures, in order to unify algebra and the classical logics of programs. The question of completeness for these algebras is still open; it constitutes the object of this thesis. We define tree structures, called trees with a top, which are represented in matrix form. After having given fundamental properties of these trees, we define relations that make it possible to compare them. Then, we show that, modulo a certain equivalence relation, the set of trees with a top carries the structure of a monoid with domain. This result makes it possible to define a model for semirings with domain and prove its completeness. We also define a model for *-continuous Kleene algebras with domain and prove its completeness modulo a new axiom.
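
For background, the domain operator d on an idempotent semiring (S, +, ., 0, 1) is commonly axiomatised as below. This is the standard axiomatisation found in the literature on semirings with domain and may differ in detail from the one adopted in the thesis:

\[
\begin{aligned}
&d(x)\,x = x, \qquad d(x\,y) = d(x\,d(y)), \qquad d(x) + 1 = 1, \\
&d(0) = 0, \qquad d(x + y) = d(x) + d(y).
\end{aligned}
\]
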
25

Robinson, Christian L. "Domain by domain analysis of the RNA binding properties of LysRS." University of Cincinnati / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1282930874.

26

Tan, Philip. "Engineered antibodies : folding stability, domain-domain assembly, refolding efficiency and solubility /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/8001.

27

Dong, Pei. "Pixel domain and compressed domain video analysis for smart information extraction." Thesis, The University of Sydney, 2014. http://hdl.handle.net/2123/11952.

Abstract:
Assisted consumption and manipulation of rapidly expanding digital video archives have become crucial topics in video analysis. This is mainly due to the inefficiency of traditional manual browsing, especially in the face of the explosive amount of information. Therefore, extracting the salient information from videos in an automatic and smart way is a promising endeavour. In different scenarios, information extraction for videos can be defined in diversified ways. For professionally edited genres, such as documentaries, movies and TV news, a condensed version with extracted parts from the original sequence provides viewers with a useful surrogate video that is much shorter in duration and more intensive in content. As for surveillance videos, foreground object extraction provides a focus of attention on potential objects of interest and can relieve human observers of going through entire lengthy surveillance footage. Different from approaches based on raw pixels, another stream of methods, namely compressed domain methods, directly analyses the information obtained during video decompression, so that a considerable amount of the computational cost of reconstructing the pixels can be waived. This makes compressed domain video processing not only advantageous in terms of efficiency but also attractive due to the availability of meaningful information generated by the video encoder. In this thesis, both pixel domain and compressed domain methods are proposed for smart video information extraction, including an iterative reweighting algorithm for salient video segment extraction, a real-time algorithm for keyframe extraction from H.264/AVC (advanced video coding) videos, a foreground extraction algorithm via undercomplete dictionary learning-based background representation, and an efficient algorithm for extracting moving objects and trajectories from H.264/AVC compressed sequences.
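
A deliberately simple pixel-domain sketch of keyframe selection by frame differencing, included only to make the extraction task concrete. The thesis's actual algorithms work on H.264/AVC compressed-domain features and are far more elaborate; the threshold and the OpenCV-based reading loop here are assumptions:

import cv2
import numpy as np

def naive_keyframes(path, threshold=30.0):
    """Indices of frames whose mean absolute difference from the previous
    frame exceeds a threshold (a crude notion of salient change)."""
    cap = cv2.VideoCapture(path)
    keyframes, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is None or np.mean(cv2.absdiff(gray, prev)) > threshold:
            keyframes.append(idx)
        prev, idx = gray, idx + 1
    cap.release()
    return keyframes

# Example (hypothetical file name):
# print(naive_keyframes("surveillance.mp4"))
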
28

Alsanabani, Mohamed Moslih. "Soil water determination by time domain reflectometry: Sampling domain and geometry." Diss., The University of Arizona, 1991. http://hdl.handle.net/10150/185550.

Abstract:
This work investigates several aspects of time domain reflectometry (TDR) theory and application. One of these aspects is the study of the influence of TDR probe geometries on the travel time. No change in the travel time resulted from increasing either the wire diameter or the wire spacing. However, we found a linear relationship between the travel time and the length of the probe for measurements in water. Also, we found that the reflected voltage was inversely proportional to the incident voltage in water. Another aspect is the volume of sensitivity of the TDR, which depends on the electrical properties of the medium and the geometry of the probe. The sensitivity of TDR in soil is different than in water. The observations in soils indicate that soil with a high water content (θᵥ) has a smaller sample volume than one with low θᵥ. A probe with a large wire diameter has a larger sample volume than a probe with a small wire diameter. Also, a simple model and a mixing model were investigated and compared to Topp's model for relating θᵥ to the effective dielectric constant. The distance to the wetting front over time was observed and calculated using an expression which relates the travel time in soil before and after water application. This was tested with probes of different geometries. The wetting front from a point source was monitored in two and three dimensions in a plexiglas tank using TDR. Contour maps of the calculated radius of the wetting front versus depth over time were produced.
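
To make the chain from travel time to water content concrete, the sketch below converts a two-way travel time into an apparent dielectric constant and then applies the widely used Topp et al. (1980) calibration. The probe length and travel time are made-up example values, and the dissertation compares alternative models rather than endorsing this single calibration:

C = 2.998e8  # speed of light in vacuum, m/s

def apparent_ka(travel_time_s, probe_length_m):
    """Apparent dielectric constant Ka = (c * t / (2 L))**2 for a probe of length L."""
    return (C * travel_time_s / (2.0 * probe_length_m)) ** 2

def topp_theta(ka):
    """Topp et al. (1980) polynomial relating Ka to volumetric water content."""
    return -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka ** 2 + 4.3e-6 * ka ** 3

t = 2.4e-9   # s, assumed measured two-way travel time
L = 0.15     # m, assumed probe length
ka = apparent_ka(t, L)
print(round(ka, 2), round(topp_theta(ka), 3))
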
29

Zhou, Tingdong. "Electromagnetic system frequency-domain reduced-order modeling and time-domain simulation." Diss., The University of Arizona, 2002. http://hdl.handle.net/10150/279965.

Abstract:
Model order reduction methodologies are presented for semi-discrete electromagnetic systems obtained from the spatial discretization of the hyperbolic system of Maxwell's equations. Different reduced-order modeling algorithms, i.e., Padé via Lanczos (PVL), multiple-point PVL, Krylov, rational Krylov, and PVL with expansion at infinity, are presented and applied for model order reduction, and the properties of these algorithms are discussed. The implementation of the model order reduction methodologies in a full-wave frequency domain electromagnetic system simulator (ROMES) is discussed in detail. Scattering parameters are calculated for several electromagnetic systems with discontinuities. A time domain simulation framework is also introduced for transmission line embedded systems described by the Telegrapher's equations. The time domain convolution approach is selected to perform the transmission line embedded circuit simulations. Derivations of closed-form triangle impulse responses (TIR) are discussed and numerical examples are presented. The developed triangle impulse responses are used to perform time-domain circuit simulations. The effects of frequency-dependent lossy transmission lines on signal integrity, and causality issues associated with the transmission line parameters (R, L, C, and G) in the Telegrapher's equations, are discussed. The presented research provides an accurate and efficient way to characterize electromagnetic systems for high-speed circuit applications in the frequency domain, and methods to simulate these circuits in the time domain.
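
A compact numpy sketch of Krylov-subspace moment matching, the family of techniques surveyed above. This generic one-sided Arnoldi projection about s = 0 is illustrative only; PVL itself uses a Lanczos process, and the thesis's formulations are more general:

import numpy as np

rng = np.random.default_rng(2)
n, q = 60, 8                                     # full and reduced orders (assumed)
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n))   # random stable-ish test system
b = rng.normal(size=n)
c = rng.normal(size=n)

# Orthonormal basis of the Krylov space span{A^-1 b, A^-2 b, ...} (moments at s = 0).
V = np.zeros((n, q))
v = np.linalg.solve(A, b)
V[:, 0] = v / np.linalg.norm(v)
for k in range(1, q):
    w = np.linalg.solve(A, V[:, k - 1])
    w -= V[:, :k] @ (V[:, :k].T @ w)             # Gram-Schmidt step
    V[:, k] = w / np.linalg.norm(w)

Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c       # projected reduced-order model

def tf(s, A_, b_, c_):
    return c_ @ np.linalg.solve(s * np.eye(len(b_)) - A_, b_)

s = 0.05j
print(abs(tf(s, A, b, c) - tf(s, Ar, br, cr)))   # small near the expansion point
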
30

Wei, Heng. "Split PH domain identification & redundancy analyses in the classification of PDZ domains /." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?BICH%202006%20WEI.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Wuth, Clemens [Verfasser]. "Stochastic and coherent dynamics of individual magnetic domains and domain walls / Clemens Wuth." München : Verlag Dr. Hut, 2015. http://d-nb.info/1079768815/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Masson, Romain. "La valorisation des biens publics." Thesis, Paris 10, 2018. http://www.theses.fr/2018PA100094.

Full text
Abstract:
This research seeks to delineate and define the concept of valorisation as applied to public properties, drawing on its twofold foundation: the right of ownership and the proper use of public funds. The concept rests on two components, exploitation and disposal, which bring to light the multiple forms of valorisation: economic, social and environmental. These manifestations of valorisation renew the analysis, allowing a better understanding of what is at stake in the reform of public property law, of how valorisation has influenced that law, and of the developments to come. Thus, the convergence of the domanial regimes has made it possible to relax and modernise the valorisation tools and the legal principles governing the public domain. This convergence should lead to a unification of jurisdiction in favour of the administrative courts. Moreover, under the impetus of valorisation, new obligations are imposed on public owners: competitive tendering for occupations of the public domain, inventories of property, and forward-looking valorisation.
APA, Harvard, Vancouver, ISO, and other styles
33

Wang, Ying-Hui. "Molecular interaction of zinc finger domain : study of androgen receptor DNA binding domain and SCA7 domain of Ataxin7 by NMR." Strasbourg, 2010. http://www.theses.fr/2010STRA6018.

Full text
Abstract:
The androgen receptor (AR) signalling pathway is involved in the progression of prostate cancer, and mutations in this domain have been shown to be responsible for the constitutive activation of genes placed under the control of androgen hormones. One of these mutations converts a threonine residue of the DBD into alanine (T575A). Transcriptional activity assays allowed the team of Dr. Ceraline at IRCAD to show that the T575A mutation induces a change in receptor specificity: while the activity of promoters controlled by AR-specific response elements decreases, that of promoters controlled by non-specific elements increases. This change in specificity is correlated with a modification of the receptor's affinity for specific and non-specific response elements. To understand the mechanism of this "reprogramming" at the molecular scale, a structural study of the DBD domains of the wild-type and mutant receptors was undertaken by NMR. Comparison of the two solution structures showed that the mutation does not alter the fold of the domain, and therefore that the difference in response-element recognition is not directly linked to the three-dimensional structure of the domain. We then sought to determine whether the alteration of function was due to a difference in the dynamics of the peptide chain. To study molecular motions along the chain, heteronuclear relaxation measurements were performed; they also showed great similarity in the dynamic behaviour of the two domains, except for a region located in the first zinc finger near a histidine (H570) that is conserved throughout the family of nuclear receptor DBDs. This difference led us to measure, by NMR, the pKa of this histidine in both proteins. We thus showed that the T575A mutation induces a decrease of 0.5 pH units compared with the same histidine in the wild-type domain. Structural analysis showed that this pKa difference is linked to the loss of an interaction between the hydroxyl group of threonine 575 and the imidazole ring of the histidine. The effect of the mutation on the recognition mechanism is therefore explained by an indirect effect, in which an amino acid located away from the interaction region modifies the electrostatic surface of the DBD. The effect of the positive charge at position 570 on the specificity of response-element recognition was then studied by constructing several mutants with or without a charge at this position (H570R and H570A mutants). These studies confirmed the importance of this charge, and together our work sheds new light on the mechanisms of DNA recognition by nuclear receptors.
The androgen receptor (AR) is a ligand-activated transcription factor and a member of the nuclear receptor superfamily. AR shares a common structural and functional architecture with other nuclear receptors. The DNA binding domain of AR (ARDBD) binds to specific response elements as a homodimer. In the clinic, certain mutations in AR are associated with the progression of prostate cancer and have consequences for the treatment of patients with advanced prostate cancer. Previous studies showed that the T575A mutation, located in the DNA binding domain, enhances the transcriptional activity of full-length AR on promoters containing the non-specific response element, whereas the wild-type domain does not. These differences prompted us to study the molecular mechanism of the wild-type and T575A ARDBD. The structures of the wild-type and T575A ARDBD are highly similar; however, their dynamic behaviour shows distinct differences. The protonation state of H570 in the ARDBD was found to differ between the two, and this loss of charge at H570 results in changes in the transcriptional activity of AR.
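The histidine pKa values referred to above are typically obtained by following a pH-sensitive chemical shift through a titration series and fitting a Henderson–Hasselbalch curve. The sketch below shows that standard fitting step in outline, using synthetic data and scipy's curve_fit as assumptions; it is not the actual analysis or data of this work.

    # Sketch: estimating a histidine pKa from an NMR pH titration by fitting
    # a Henderson-Hasselbalch model to the observed chemical shift.
    import numpy as np
    from scipy.optimize import curve_fit

    def hh_shift(pH, delta_acid, delta_base, pKa):
        """Observed shift as a population-weighted average of the two protonation states."""
        frac_base = 1.0 / (1.0 + 10.0 ** (pKa - pH))
        return delta_acid + (delta_base - delta_acid) * frac_base

    # Synthetic example data (ppm vs pH); real data would come from titration spectra.
    pH = np.array([4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0])
    shift = hh_shift(pH, 8.6, 7.7, 6.3) + np.random.normal(0, 0.01, pH.size)

    popt, _ = curve_fit(hh_shift, pH, shift, p0=(8.6, 7.7, 6.5))
    print(f"fitted pKa = {popt[2]:.2f}")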
APA, Harvard, Vancouver, ISO, and other styles
34

Curtis, Ryan. "Theory of current-driven domain wall motion in artificial magnetic domain structures." Thesis, University of Bath, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.665451.

Full text
Abstract:
This thesis concerns the combination of two overlapping fields in physics: condensed matter and electromagnetism. Specifically, it addresses the problem of simulating the motion of magnetic domains under applied magnetic and electric fields. In this investigation, electronic structure methods are used in an attempt to parametrise longer length-scale micromagnetic simulations. Previous works in the field have relied upon suitable experiments having been conducted, whereas this work can stand alone, albeit with its own propagation of systematic errors. Modelling is undertaken to predict the applicability of cobalt-platinum multilayers as a new type of computer memory. Although the results are promising, features outside the remit of this thesis, such as practicality, are noted to be major obstacles that would need to be overcome. Ab initio methods are used with varying success to predict the saturation magnetisation, Gilbert damping parameter, and anisotropy parameter of cobalt-platinum systems.
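A common starting point for micromagnetic simulations of current-driven domain wall motion, once the parameters above (saturation magnetisation M_s, Gilbert damping α, anisotropy) are known, is the Landau–Lifshitz–Gilbert equation extended with Zhang–Li spin-transfer torque terms. The form given below is the standard one from the literature and is included only for orientation; it is not necessarily the exact formulation used in this thesis.

    % Extended LLG equation with adiabatic and non-adiabatic spin-transfer torques
    \frac{\partial \mathbf{m}}{\partial t}
      = -\gamma\, \mathbf{m}\times\mathbf{H}_{\mathrm{eff}}
      + \alpha\, \mathbf{m}\times\frac{\partial \mathbf{m}}{\partial t}
      - (\mathbf{u}\cdot\nabla)\,\mathbf{m}
      + \beta\, \mathbf{m}\times(\mathbf{u}\cdot\nabla)\,\mathbf{m},
    \qquad
    \mathbf{u} = \frac{g \mu_{B} P}{2 e M_{s}}\,\mathbf{j}

Here m is the unit magnetisation, H_eff the effective field derived from exchange, anisotropy and demagnetising energies, β the non-adiabaticity parameter, P the spin polarisation of the current density j, and M_s the saturation magnetisation.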
APA, Harvard, Vancouver, ISO, and other styles
35

James, Phillip. "Designing domain specific languages for verification and applications to the railway domain." Thesis, Swansea University, 2014. https://cronfa.swan.ac.uk/Record/cronfa42823.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Varga, Andrea. "Exploiting domain knowledge for cross-domain text classification in heterogeneous data sources." Thesis, University of Sheffield, 2014. http://etheses.whiterose.ac.uk/7538/.

Full text
Abstract:
With the growing amount of data generated in large heterogeneous repositories (such as the World Wide Web, corporate repositories, citation databases), there is an increased need for the end users to locate relevant information efficiently. Text Classification (TC) techniques provide automated means for classifying fragments of text (phrases, paragraphs or documents) into predefined semantic types, allowing an efficient way for organising and analysing such large document collections. Current approaches to TC rely on supervised learning, which performs well on the domains on which the TC system is built, but tends to adapt poorly to different domains. This thesis presents a body of work for exploring adaptive TC techniques across heterogeneous corpora in large repositories with the goal of finding novel ways of bridging the gap across domains. The proposed approaches rely on the exploitation of domain knowledge for the derivation of stable cross-domain features. This thesis also investigates novel ways of estimating the performance of a TC classifier by means of domain similarity measures. For this purpose, two novel knowledge-based similarity measures are proposed that capture the usefulness of the selected cross-domain features for cross-domain TC. The evaluation of these approaches and measures is presented on real-world datasets against various strong baseline methods and content-based measures used in transfer learning. This thesis explores how domain knowledge can be used to enhance the representation of documents to address the lexical gap across the domains. Given that the effectiveness of a text classifier largely depends on the availability of annotated data, this thesis explores techniques which can leverage data from social knowledge sources (such as DBpedia and Freebase). Techniques are further presented which explore the feasibility of exploiting different semantic graph structures from knowledge sources in order to create novel cross-domain features and domain similarity metrics. The methodologies presented provide a novel representation of documents and exploit four wide-coverage knowledge sources: DBpedia, Freebase, SNOMED-CT and MeSH. The contribution of this thesis demonstrates the feasibility of exploiting domain knowledge for adaptive TC and domain similarity, providing an enhanced representation of documents with semantic information about entities that can indeed reduce the lexical differences between domains.
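For concreteness, a simple content-based domain similarity of the kind such theses typically use as a baseline (the knowledge-based measures proposed here are instead defined over features drawn from DBpedia, Freebase, SNOMED-CT and MeSH) can be computed as the Jensen–Shannon divergence between the term distributions of a source and a target corpus. The sketch below is a generic illustration with toy corpora, not the measures defined in the thesis.

    # Sketch: a content-based domain similarity baseline via Jensen-Shannon
    # divergence between unigram distributions of two corpora (lower = more similar).
    from collections import Counter
    import math

    def term_distribution(docs, vocab):
        counts = Counter(w for d in docs for w in d.lower().split())
        total = sum(counts[w] for w in vocab) or 1
        return [counts[w] / total for w in vocab]

    def js_divergence(p, q, eps=1e-12):
        m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
        def kl(a, b):
            return sum(ai * math.log((ai + eps) / (bi + eps)) for ai, bi in zip(a, b))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    source = ["the striker scored a late goal", "the match ended in a draw"]
    target = ["the senate passed the budget bill", "the election campaign continued"]
    vocab = sorted({w for d in source + target for w in d.lower().split()})
    print(js_divergence(term_distribution(source, vocab),
                        term_distribution(target, vocab)))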
APA, Harvard, Vancouver, ISO, and other styles
37

Yang, Baoyao. "Distribution alignment for unsupervised domain adaptation: cross-domain feature learning and synthesis." HKBU Institutional Repository, 2018. https://repository.hkbu.edu.hk/etd_oa/556.

Full text
Abstract:
In recent years, many machine learning algorithms have been developed and widely applied in various applications. However, most of them assume that the data distributions of the training and test datasets are similar. This thesis concerns the decrease of generalization ability in a test dataset when its data distribution differs from that of the training dataset. As labels may be unavailable in the test dataset in practical applications, we follow the effective approach of unsupervised domain adaptation and propose distribution alignment methods to improve the generalization ability of models learned from the training dataset in the test dataset. To solve the problem of joint distribution alignment without target labels, we propose a new criterion of domain-shared group sparsity that is an equivalent condition for equal conditional distribution. A domain-shared group-sparse dictionary learning model is built with the proposed criterion, and a cross-domain label propagation method is developed to learn a target-domain classifier using the domain-shared group-sparse representations and the target-specific information from the target data. Experimental results show that the proposed method achieves good performance on cross-domain face and object recognition. Moreover, most distribution alignment methods have not considered the difference in distribution structures, which results in insufficient alignment across domains. Therefore, a novel graph alignment method is proposed, which aligns both data representations and distribution structural information across the source and target domains. An adversarial network is developed for graph alignment by mapping both source and target data to a feature space where the data are distributed with unified structure criteria. Promising results have been obtained in the experiments on cross-dataset digit and object recognition. The problem of dataset bias also exists in human pose estimation across datasets with different image qualities. Thus, this thesis proposes to synthesize target body parts for cross-domain distribution alignment, to address the problem of cross-quality pose estimation. A translative dictionary is learned to associate the source and target domains, and a cross-quality adaptation model is developed to refine the source pose estimator using the synthesized target body parts. We perform cross-quality experiments on three datasets with different image quality using two state-of-the-art pose estimators, and compare the proposed method with five unsupervised domain adaptation methods. Our experimental results show that the proposed method outperforms not only the source pose estimators, but also other unsupervised domain adaptation methods.
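A widely used way to quantify (and penalise) the source–target distribution gap in unsupervised domain adaptation is the maximum mean discrepancy. The numpy sketch below computes a biased RBF-kernel MMD² estimate between two feature matrices; it is a generic illustration of distribution alignment, not the dictionary-learning or adversarial graph-alignment models proposed in the thesis, and the toy data and kernel bandwidth are assumptions.

    # Sketch: biased RBF-kernel Maximum Mean Discrepancy (MMD^2) between
    # source and target feature matrices Xs (ns x d) and Xt (nt x d).
    import numpy as np

    def rbf_kernel(X, Y, gamma):
        d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
        return np.exp(-gamma * d2)

    def mmd2(Xs, Xt, gamma=1.0):
        return (rbf_kernel(Xs, Xs, gamma).mean()
                + rbf_kernel(Xt, Xt, gamma).mean()
                - 2 * rbf_kernel(Xs, Xt, gamma).mean())

    rng = np.random.default_rng(0)
    Xs = rng.normal(0.0, 1.0, (200, 16))   # toy source features
    Xt = rng.normal(0.5, 1.0, (200, 16))   # toy shifted target features
    print(f"MMD^2 = {mmd2(Xs, Xt):.4f}")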
APA, Harvard, Vancouver, ISO, and other styles
38

Domeniconi, Giacomo <1986&gt. "Data and Text Mining Techniques for In-Domain and Cross-Domain Applications." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amsdottorato.unibo.it/7494/1/domeniconi_giacomo_tesi.pdf.

Full text
Abstract:
In the big data era, a vast amount of data has been generated in different domains, from social media to news feeds, from health care to genomic functionalities. When addressing a problem, we usually need to harness multiple disparate datasets. Data from different domains may follow different modalities, each of which has a different representation, distribution, scale and density. For example, text is usually represented as discrete sparse word count vectors, whereas an image is represented by pixel intensities, and so on. Nowadays, plenty of Data Mining and Machine Learning techniques are proposed in the literature, and they have already achieved significant success in many knowledge engineering areas, including classification, regression and clustering. However, some challenging issues remain when tackling a new problem: how should the problem be represented? Which approach is best among the huge number of possibilities? What information should be used in the Machine Learning task, and how should it be represented? Are there different domains from which to borrow knowledge? This dissertation proposes some possible representation approaches for problems in different domains, from text mining to genomic analysis. In particular, one of the major contributions is a different way to represent a classical classification problem: instead of using an instance related to each object (a document, a gene, a social post, etc.) to be classified, it is proposed to use a pair of objects or a pair object-class, using the relationship between them as the label. The application of this approach is tested on both flat and hierarchical text categorization datasets, where it potentially allows the efficient addition of new categories during classification. Furthermore, the same idea is used to extract conversational threads from an unregulated pool of messages and also to classify the biomedical literature based on the genomic features treated.
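The pair-based reformulation described above can be pictured as follows: every (object, candidate category) pair becomes one training instance, labelled by whether the pair is a correct association, so that new categories can be scored without retraining on them. The sketch below is a schematic rendering of that idea with toy features and names, not the actual feature set used in the dissertation.

    # Sketch: recasting multi-class text categorisation as binary classification
    # over (document, category) pairs, so new categories only need pair features.
    def pair_instances(docs, labels, categories, pair_features):
        """Yield (feature vector for a (doc, category) pair, is_correct_pair) examples."""
        for doc, gold in zip(docs, labels):
            for cat in categories:
                yield pair_features(doc, cat), int(cat == gold)

    # Toy pair features: overlap between document tokens and a category descriptor.
    descriptors = {"sport": {"goal", "match"}, "politics": {"senate", "election"}}
    def pair_features(doc, cat):
        toks = set(doc.lower().split())
        return [len(toks & descriptors[cat]), len(toks)]

    docs = ["the striker scored a goal", "the senate passed a bill"]
    labels = ["sport", "politics"]
    data = list(pair_instances(docs, labels, list(descriptors), pair_features))
    print(data)  # any binary classifier can now be trained on these pair instances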
APA, Harvard, Vancouver, ISO, and other styles
39

Domeniconi, Giacomo <1986&gt. "Data and Text Mining Techniques for In-Domain and Cross-Domain Applications." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amsdottorato.unibo.it/7494/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Johnson, Richard. "Frequency domain structural identification." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1996. http://handle.dtic.mil/100.2/ADA312408.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Watts, Robert B. "Implementing maritime domain awareness." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Mar%5FWatts.pdf.

Full text
Abstract:
Thesis (M.A. in Security Studies (Homeland Security and Defense))--Naval Postgraduate School, March 2006.
Thesis Advisor(s): Jeffrey Kline. "March 2006." Includes bibliographical references (p. 61-66). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
42

Canalias, Carlota. "Domain engineering in KTiOPO4." Doctoral thesis, KTH, Laserfysik, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-464.

Full text
Abstract:
Ferroelectric crystals are commonly used in nonlinear optics for frequency conversion of laser radiation. The quasi-phase matching (QPM) approach uses a periodically modulated nonlinearity, which can be achieved by periodically inverting domains in ferroelectric crystals, and allows versatile and efficient frequency conversion in the whole transparency region of the material. KTiOPO4 (KTP) is one of the most attractive ferroelectric nonlinear optical materials for periodic domain-inversion engineering due to its excellent nonlinearity, high resistance to photorefractive damage, and relatively low coercive field. A periodic structure of reversed domains can be created in the crystal by lithographic patterning with subsequent electric field poling. The performance of periodically poled KTP crystals (PPKTP) as frequency converters relies directly upon the poling quality. Therefore, characterization methods that lead to a deeper understanding of the polarization switching process are of utmost importance. In this work, several techniques have been used and developed to study domain structure in KTP, both in situ and ex situ. The results obtained have been utilized to characterize different aspects of the polarization switching process in KTP, for both patterned and unpatterned samples. It has also been demonstrated that it is possible to fabricate sub-micrometer (sub-μm) PPKTP for novel optical devices. Lithographic processes based on e-beam lithography and deep-UV laser lithography have been developed and proven useful for patterning sub-μm pitches, of which the latter has been the most convenient method. A poling method based on a periodic modulation of the K-stoichiometry has been developed, and it has resulted in a sub-μm domain grating with a period of 720 nm in a 1 mm thick KTP crystal. To the best of our knowledge, this is the largest domain aspect ratio achieved for a bulk ferroelectric crystal. The sub-micrometer PPKTP samples have been used for demonstration of 6th- and 7th-order QPM backward second-harmonic generation with continuous-wave laser excitation, as well as for a demonstration of narrow-wavelength, electrically adjustable Bragg reflectivity.
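For orientation, the QPM periods involved can be estimated from the grating condition: for co-propagating second-harmonic generation the m-th order period is Λ = mλ/(2(n₂ω − nω)), while for backward (counter-propagating) SHG the wave vectors add, giving Λ = mλ/(2(n₂ω + nω)), which is what pushes the required period into the sub-micrometre range mentioned above. The refractive indices and wavelength in the sketch below are rough placeholders, not measured KTP values.

    # Sketch: m-th order QPM periods for forward and backward second-harmonic
    # generation. Wavelength and refractive indices are illustrative placeholders.
    def qpm_period_forward(wavelength_um, n_fund, n_sh, m=1):
        return m * wavelength_um / (2.0 * (n_sh - n_fund))

    def qpm_period_backward(wavelength_um, n_fund, n_sh, m=1):
        return m * wavelength_um / (2.0 * (n_sh + n_fund))

    lam, n1, n2 = 1.064, 1.83, 1.89   # pump wavelength (um) and assumed indices
    print(f"forward, m=1 : {qpm_period_forward(lam, n1, n2):.2f} um")
    print(f"backward, m=7: {qpm_period_backward(lam, n1, n2, m=7):.2f} um")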
APA, Harvard, Vancouver, ISO, and other styles
43

Skjeie, Hans Christian Bakken. "Terahertz Time-Domain Spectroscopy." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-19214.

Full text
Abstract:
The field of terahertz time-domain spectroscopy (THz-TDS) is still far from reaching its full potential, but it is a very promising tool for a wide range of applications. Proof-of-principle experiments have been performed in the fields of drug screening, pharmaceuticals, medical diagnostics, security imaging and detection of explosives. Optimized and adapted THz-TDS systems hold great promise for driving this technology further. The purpose of this thesis was to build a THz-TDS system, explore possibilities for improving this system and perform THz-TDS measurements on semiconductors and wood. The aim of the experimental work was to build a stable and reliable system with an electric field strength of the THz radiation on the order of kV/cm. The THz-TDS system used in this thesis was based upon the principles of optical rectification and free-space electro-optic sampling in zinc telluride (ZnTe) crystals, using a femtosecond Ti:Sapphire amplified laser. Theoretical studies were performed on the principles of generation and detection of THz radiation. The experimental work was based on publications of similar experiments. The theoretical and experimental studies led to several modifications and improvements of the setup first built in this thesis. Experiments were performed on disparate materials to find suitable materials for THz transmission. Results from measurements performed on semiconductors and wood, obtained by THz-TDS, were analysed to find the absorption coefficient and the refractive index of the materials. The spectroscopic information obtained by THz-TDS can also be used to find the conductivity and the mobility of these materials. THz-TDS measures the electric field and therefore provides information on both the amplitude and the phase of the THz wave. A Fourier transformation was used to obtain the frequency spectrum of the detected signal. The improvements were made by analysing the detected signal to see which adjustments and modifications of the setup had positive effects on the results. The pump power used for generation of THz radiation and the optimum azimuthal angle of the ZnTe crystals were crucial for obtaining a THz-TDS system with a strong electric field. The maximum electric field strength of the THz radiation in this thesis was 13.2 kV/cm, with a signal-to-noise ratio of 43 and a dynamic range of 1500.
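The extraction of the refractive index and absorption coefficient mentioned above follows the standard thick-sample transmission analysis: the sample and reference time traces are Fourier transformed, and the amplitude and unwrapped phase of their ratio give the optical constants. The sketch below shows that textbook procedure (Fabry–Pérot echoes ignored); the input arrays stand in for measured traces and are assumptions, not data from this thesis.

    # Sketch: textbook extraction of refractive index n and absorption coefficient
    # alpha from THz-TDS sample/reference traces (thick sample, echoes ignored).
    import numpy as np

    C = 2.998e8  # speed of light in vacuum (m/s)

    def optical_constants(e_sample, e_ref, dt, thickness):
        freq = np.fft.rfftfreq(len(e_ref), dt)
        T = np.fft.rfft(e_sample) / np.fft.rfft(e_ref)   # complex transmission
        omega = 2 * np.pi * freq
        phase = np.unwrap(np.angle(T))                   # sign follows numpy's FFT convention
        with np.errstate(divide="ignore", invalid="ignore"):
            n = 1.0 - C * phase / (omega * thickness)
            alpha = -(2.0 / thickness) * np.log(
                np.abs(T) * (n + 1.0) ** 2 / (4.0 * n))
        return freq, n, alpha                            # the DC bin (omega = 0) is undefined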
APA, Harvard, Vancouver, ISO, and other styles
44

Davies, Brian E., Graham M. L. Gladwell, Josef Leydold, and Peter F. Stadler. "Discrete Nodal Domain Theorems." Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 2000. http://epub.wu.ac.at/976/1/document.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Canalias, Carlota. "Domain engineering in KTiOPO₄ /." Stockholm, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-464.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Yang, Xiaofeng. "Auditory domain speech enhancement." Thesis, Kingston, Ont. : [s.n.], 2008. http://hdl.handle.net/1974/1229.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Pagin, Peter. "Vagueness and Domain Restriction." Stockholms universitet, Filosofiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-68416.

Full text
Abstract:
This paper develops an idea of saving ordinary uses of vague predicates from the Sorites by means of domain restriction. A tolerance level for a predicate, along a dimension, is a difference with respect to which the predicate is semantically insensitive. A central gap for the predicate+dimension in a domain is a segment of an associated scale, larger than this difference, where no object in the domain has a measure, and such that the extension of the predicate has measures on one side of the gap and the anti-extension on the other. The domain restriction imposes a central gap.
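Read formally, the central-gap condition of the abstract can be stated roughly as below; this is a reconstruction from the abstract's wording (with the measure function μ, tolerance ε and one orientation of extension/anti-extension chosen for definiteness), not the paper's official definition.

    % Central gap for predicate P along a dimension with measure \mu,
    % tolerance level \varepsilon, over the restricted domain D
    % (one orientation shown; the abstract only requires extension and
    % anti-extension to lie on opposite sides of the gap):
    \exists a, b \;\bigl( b - a > \varepsilon
      \;\wedge\; \neg\exists d \in D \; (a < \mu(d) < b)
      \;\wedge\; \forall d \in D \; (P(d) \rightarrow \mu(d) \le a)
      \;\wedge\; \forall d \in D \; (\neg P(d) \rightarrow \mu(d) \ge b) \bigr)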

APA, Harvard, Vancouver, ISO, and other styles
48

Eskiyerli, Mirat Hayri. "Square root domain filters." Thesis, Imperial College London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299973.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Si, Si, and 斯思. "Cross-domain subspace learning." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B44912912.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Davies, Brian E., Josef Leydold, and Peter F. Stadler. "Discrete Nodal Domain Theorems." Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 2000. http://epub.wu.ac.at/1674/1/document.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles