Theses on the topic "Heterogenous domain"

Below are the top 50 theses for research on the topic "Heterogenous domain".


1

Thieu, Thi Kim Thoa. « Models for coupled active--passive population dynamics : mathematical analysis and simulation ». Doctoral thesis, Gran Sasso Science Institute, 2020. http://hdl.handle.net/20.500.12571/15016.

Full text
Abstract:
In this dissertation, we study models for coupled active--passive pedestrian dynamics from mathematical analysis and simulation perspectives. The general aim is to contribute to a better understanding of complex pedestrian flows. This work comes in three main parts, in which we adopt distinct perspectives and conceptually different tools from lattice gas models, partial differential equations, and stochastic differential equations, respectively. In part one, we introduce two lattice models for active--passive pedestrian dynamics. In a first model, using descriptions based on the simple exclusion process, we study the dynamics of pedestrian escape from an obscure room in a lattice domain with two species of particles (pedestrians). The main observable is the evacuation time as a function of the parameters characterizing the motion of the active pedestrians. Our Monte Carlo simulation results show that the presence of the active pedestrians can favor the evacuation of the passive ones. We interpret this phenomenon as a discrete space counterpart of the so-called drafting effect. In a second model, we consider again a microscopic approach based on a modification of the simple exclusion process formulated for active--passive populations of interacting pedestrians. The model describes a scenario where pedestrians are walking in a built environment and enter a room from two opposite sides. For such a counterflow situation, we find that the motion of active particles improves the outgoing current of the passive particles. In part two, we study a fluid-like driven system modeling active--passive pedestrian dynamics in a heterogeneous domain. We prove the well-posedness of a nonlinear coupled parabolic system that models the evolution of the complex pedestrian flow by using special energy estimates, a Schauder fixed-point argument, and the properties of the nonlinearity's structure.
In the third part, we describe, via a coupled nonlinear system of Skorohod-like stochastic differential equations, the dynamics of active--passive pedestrians through a heterogeneous domain in the presence of fire and smoke. We prove the existence and uniqueness of strong solutions to our model when reflecting boundary conditions are imposed on the boundaries. To achieve this, we use compactness methods and the Skorohod representation of solutions to SDEs posed in bounded domains. Furthermore, we study a homogenization setting for a toy model (a semi-linear elliptic equation) in which our pedestrian models can later be studied.
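As a loose illustration of the lattice-model idea in this abstract, the sketch below simulates a toy two-species simple exclusion process on a 1D corridor with a single exit. The corridor length, bias parameter and update rule are invented here for illustration; this is not the thesis's actual model.

```python
import random

def evacuate(length=20, n_active=5, n_passive=5, bias=0.8,
             seed=0, max_steps=1_000_000):
    """Toy 1D two-species simple exclusion process.  Active walkers ('A')
    prefer hopping right, toward the exit at the right end, with probability
    `bias`; passive walkers ('P') hop symmetrically.  A hop succeeds only
    if the target site is empty (exclusion).  Returns the evacuation time
    (number of attempted updates until the lattice is empty)."""
    rng = random.Random(seed)
    lattice = ['A'] * n_active + ['P'] * n_passive
    lattice += [None] * (length - len(lattice))
    rng.shuffle(lattice)
    remaining = n_active + n_passive
    for step in range(1, max_steps + 1):
        i = rng.randrange(length)
        walker = lattice[i]
        if walker is None:
            continue
        if walker == 'A' and rng.random() < bias:
            move = 1                      # biased hop toward the exit
        else:
            move = rng.choice((-1, 1))    # symmetric hop
        j = i + move
        if j == length:                   # walker leaves through the exit
            lattice[i] = None
            remaining -= 1
            if remaining == 0:
                return step
        elif 0 <= j < length and lattice[j] is None:
            lattice[i], lattice[j] = None, walker
    return max_steps

t_mixed = evacuate(seed=1)                         # active + passive crowd
t_passive = evacuate(n_active=0, n_passive=10, seed=1)  # passive only
```

Averaging evacuation times over many seeds would be the natural way to probe the drafting effect the abstract describes.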
2

Varga, Andrea. « Exploiting domain knowledge for cross-domain text classification in heterogeneous data sources ». Thesis, University of Sheffield, 2014. http://etheses.whiterose.ac.uk/7538/.

Full text
Abstract:
With the growing amount of data generated in large heterogeneous repositories (such as the World Wide Web, corporate repositories, citation databases), there is an increased need for end users to locate relevant information efficiently. Text Classification (TC) techniques provide automated means for classifying fragments of text (phrases, paragraphs or documents) into predefined semantic types, allowing an efficient way of organising and analysing such large document collections. Current approaches to TC rely on supervised learning, which performs well on the domains on which the TC system is built, but tends to adapt poorly to different domains. This thesis presents a body of work for exploring adaptive TC techniques across heterogeneous corpora in large repositories with the goal of finding novel ways of bridging the gap across domains. The proposed approaches rely on the exploitation of domain knowledge for the derivation of stable cross-domain features. This thesis also investigates novel ways of estimating the performance of a TC classifier by means of domain similarity measures. For this purpose, two novel knowledge-based similarity measures are proposed that capture the usefulness of the selected cross-domain features for cross-domain TC. The evaluation of these approaches and measures is presented on real-world datasets against various strong baseline methods and content-based measures used in transfer learning. This thesis explores how domain knowledge can be used to enhance the representation of documents to address the lexical gap across domains. Given that the effectiveness of a text classifier largely depends on the availability of annotated data, this thesis explores techniques which can leverage data from social knowledge sources (such as DBpedia and Freebase).
Techniques are further presented which explore the feasibility of exploiting different semantic graph structures from knowledge sources in order to create novel cross-domain features and domain similarity metrics. The methodologies presented provide a novel representation of documents and exploit four wide-coverage knowledge sources: DBpedia, Freebase, SNOMED-CT and MeSH. The contribution of this thesis demonstrates the feasibility of exploiting domain knowledge for adaptive TC and domain similarity, providing an enhanced representation of documents with semantic information about entities that can indeed reduce the lexical differences between domains.
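The knowledge-based domain-similarity idea can be illustrated with a toy sketch: each domain is represented by a bag of semantic concepts obtained from a knowledge source (here a tiny hand-made lexicon standing in for DBpedia/Freebase lookups), and domains are compared by cosine similarity over concept counts. The lexicon, concept names and documents are hypothetical; this is not the thesis's actual measure.

```python
import math
from collections import Counter

def concept_vector(docs, concept_of):
    """Bag of semantic concepts for a domain: map each token to a concept
    (a stand-in for a knowledge-source lookup) and count occurrences."""
    counts = Counter()
    for doc in docs:
        for tok in doc.lower().split():
            concept = concept_of.get(tok)
            if concept:
                counts[concept] += 1
    return counts

def cosine(u, v):
    """Cosine similarity between two sparse concept-count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical concept lexicon standing in for a knowledge source.
LEXICON = {"aspirin": "Drug", "ibuprofen": "Drug", "fever": "Symptom",
           "cough": "Symptom", "loan": "FinancialProduct", "rate": "Metric"}

medical  = concept_vector(["aspirin reduces fever", "cough and fever"], LEXICON)
clinical = concept_vector(["ibuprofen treats fever"], LEXICON)
finance  = concept_vector(["loan rate rises", "fixed rate loan"], LEXICON)

sim_close = cosine(medical, clinical)   # two medical domains overlap
sim_far   = cosine(medical, finance)    # medical vs. finance: no shared concepts
```

Concept-level comparison rewards the medical pair even though they share no surface vocabulary, which is the point of bridging the lexical gap.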
3

Porta, Paulo Fidel. « Heterogeneous domain decomposition methods for coupled flow problems ». [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=974365610.

Full text
4

Xu, Jian. « Supporting domain heterogeneous data sources for semantic integration ». Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/36583.

Full text
Abstract:
A SEMantic Integration System (SemIS) allows a query over one database to be answered using the knowledge managed in multiple databases in the system. It does so by translating a query across the collaborative databases in which data is autonomously managed in heterogeneous schemas. In this thesis, we investigate the challenges that arise in enabling domain heterogeneous (DH) databases to collaborate in a SemIS. In such a setting, distributed databases modeled as independent data sources are pairwise mapped to form the semantic overlay network (SON) of the SemIS. We study two problems we believe are foremost to allow a SemIS to integrate DH data sources. The first problem tackled in this thesis is to efficiently organize data sources so that query answering is efficient despite the increased level of source heterogeneity. This problem is modeled as an “Acquaintance Selection” problem and our solution helps data sources to choose appropriate acquaintances to create schema mappings with and therefore allows a SemIS to have a single-layered and flexible SON. The second problem tackled in this thesis is to allow aggregate queries to be translated across domain heterogeneous (DH) data sources where objects are usually represented and managed at different granularity. We focus our study on relational databases and propose novel techniques that allow a (non-aggregate) query to be answered by aggregations over objects at a finer granularity. The new query answering framework, named “decomposition aggregation query (DAQ)” processing, integrates data sources holding information in different domains and different granularity. New challenges are identified and tackled in a systematic way. We studied query optimizations for DAQ to provide efficient and scalable query processing. The solutions for both problems are evaluated empirically using real-life data and synthetic data sets. 
The empirical studies verified our theoretical claims and showed the feasibility, applicability (for real-life applications) and scalability of the techniques and solutions.
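The decomposition-aggregation idea can be sketched minimally: a coarse-granularity query against one source is answered by aggregating finer-granularity rows held by another, via a part-of mapping between the two. The sources, mapping and figures below are hypothetical illustrations, not the thesis's DAQ framework.

```python
def answer_by_decomposition(city, part_of, district_pop):
    """DAQ sketch: a city-level (non-aggregate) query is answered by an
    aggregation (SUM) over district-level rows from another source,
    using a part-of mapping to decompose the queried object."""
    districts = [d for d, c in part_of.items() if c == city]
    return sum(district_pop[d] for d in districts)

# Hypothetical mapped sources: one knows part-of links, one knows populations.
part_of = {"Kitsilano": "Vancouver", "Downtown": "Vancouver",
           "Whalley": "Surrey"}
district_pop = {"Kitsilano": 43_000, "Downtown": 62_000, "Whalley": 40_000}

pop = answer_by_decomposition("Vancouver", part_of, district_pop)
```

In a real SemIS the part-of mapping and the measure table would live in different autonomously managed databases; here they are plain dictionaries.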
5

Achichi, Manel. « Linking heterogeneous open data : application to the musical domain ». Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS002/document.

Full text
Abstract:
Thousands of musical works are described in the catalogues of cultural institutions, whose role is to store all musical creations through cataloguing and to disseminate them to the general public. This thesis is part of the ANR DOREMUS project (DOnnées en REutilisation pour la Musique en fonction des USages), which aims to explore the catalogue metadata of three major cultural institutions: the Bibliothèque Nationale de France (BNF), the Philharmonie de Paris and Radio France, so that they can communicate with one another and be better used by different audiences. In this thesis, we are interested in so-called identity links, which express an equivalence between two different resources describing the same real-world entity. Our main objective is to propose a generic linking approach that addresses certain challenges, with the DOREMUS data as a concrete use case. We focus on three main challenges: (1) reducing the manual configuration of the linking tool, (2) coping with different kinds of heterogeneity between descriptions, and (3) removing the ambiguity between resources that are very similar in their descriptions but not equivalent. Some linking approaches often require user intervention to configure certain parameters. This can prove costly for a user who may not be a domain expert. One of the research questions we therefore ask is how to reduce human intervention in the data-linking process as much as possible. Moreover, resource descriptions can exhibit various heterogeneities that a tool must be able to handle. Descriptions may be expressed in different natural languages, with different vocabularies or with different values.
Comparison can then prove very difficult because of variations along three dimensions: value-based, ontological and logical. In this thesis, we analyse the most recurrent aspects of heterogeneity and identify a set of techniques that can be applied to them. Another challenge is distinguishing between resource descriptions that are highly similar but not equivalent. In their presence, most existing tools lose effectiveness in terms of quality, generating many false positives. With this in mind, some approaches have been proposed to identify a set of discriminative properties called keys. Such approaches discover a very large number of keys. The question that arises is whether all keys discover the same pairs of equivalent instances, or whether some are more significant than others. No approach provides a strategy for ranking the generated keys according to their effectiveness at discovering the correct links. To ensure high-quality alignments, we propose in this work a new data-linking approach that addresses the challenges described above. We developed Legato, a tool for the automatic linking of heterogeneous data that meets these challenges. It is based on the notion of an instance profile, which represents each resource as a textual document of literals, handling a variety of data heterogeneities without user intervention. Legato also implements a filtering step for so-called problematic properties, cleaning the data of noise likely to make the comparison task difficult.
To address the problem of distinguishing between resources with similar descriptions, Legato implements an algorithm based on key selection and ranking that considerably improves the precision of the generated links.
This thesis is part of the ANR DOREMUS project. We are interested in the catalogs of three cultural institutions: BNF (Bibliothèque Nationale de France), Philharmonie de Paris and Radio France, containing detailed descriptions of music works. These institutions have adopted Semantic Web technologies with the aim of making these data accessible to all and linked. Link creation becomes particularly difficult considering the high heterogeneity between the descriptions of the same entity. In this thesis, our main objective is to propose a generic data linking approach, dealing with certain challenges, for a concrete application on DOREMUS data. We focus on three major challenges: (1) reducing the tool configuration effort, (2) coping with different kinds of data heterogeneities across datasets and (3) dealing with datasets containing blocks of highly similar instances. Some of the existing linking approaches often require user intervention during the linking process to configure some parameters. This may be a costly task for the user, who may not be an expert in the domain. Therefore, one of the research questions that arises is how to reduce human intervention as much as possible in the process of data linking. Moreover, the data can show various heterogeneities that a linking tool has to deal with. The descriptions can be expressed in different natural languages, with different vocabularies or with different values. The comparison can be complicated due to the variations according to three dimensions: value-based, ontological and logical. Another challenge is the distinction between highly similar but not equivalent resource descriptions. In their presence, most of the existing tools lose efficiency, generating false positive matches. In this perspective, some approaches have been proposed to identify a set of discriminative properties called keys. Very often, such approaches discover a very large number of keys.
The question that arises is whether all keys can discover the same pairs of equivalent instances, or if some are more meaningful than others. No approach provides a strategy to rank the generated keys according to their effectiveness at discovering the correct links. We developed Legato, a generic tool for automatic heterogeneous data linking. It is based on instance profiling, representing each resource as a textual document of literals, and deals with a variety of data heterogeneities. It implements a filtering step for so-called problematic properties, cleaning the data of the noise likely to make the comparison task difficult. To address the problem of similar but distinct resources, Legato implements a key ranking algorithm called RANKey.
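The key-discovery and key-ranking idea can be sketched as follows. The property names, the music-work records and the ranking criterion (number of cross-dataset links found by exact value equality) are invented for illustration; this is not Legato's actual RANKey algorithm.

```python
def single_property_keys(instances):
    """Toy key discovery: properties whose values are present and unique
    across all instances of a dataset act as single-property keys."""
    props = {p for inst in instances.values() for p in inst}
    keys = []
    for p in props:
        vals = [inst[p] for inst in instances.values() if p in inst]
        if len(vals) == len(instances) and len(set(vals)) == len(vals):
            keys.append(p)
    return keys

def rank_keys(keys, source, target):
    """Rank keys by how many cross-dataset links (exact value matches)
    each one discovers; stabler keys rank higher."""
    def links(p):
        target_vals = {inst[p] for inst in target.values() if p in inst}
        return sum(1 for inst in source.values() if inst.get(p) in target_vals)
    return sorted(keys, key=links, reverse=True)

# Hypothetical music-work descriptions (a DOREMUS-style use case).
src = {"w1": {"title": "Symphony No. 5", "composer": "Beethoven", "catalog": "Op. 67"},
       "w2": {"title": "Bolero",         "composer": "Ravel",     "catalog": "M. 81"}}
tgt = {"x1": {"title": "Symphonie n° 5", "composer": "L. van Beethoven", "catalog": "Op. 67"},
       "x2": {"title": "Boléro",         "composer": "M. Ravel",         "catalog": "M. 81"}}

keys = single_property_keys(src)     # all three properties are unique in src
best = rank_keys(keys, src, tgt)[0]  # 'catalog' survives label variations
```

All three properties are keys within the source alone, but only the catalogue number links instances across datasets despite language and spelling variations, which is why ranking matters.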
6

Szydlarski, Mikolaj. « Algebraic Domain Decomposition Methods for Darcy flow in heterogeneous media ». PhD thesis, Université Pierre et Marie Curie - Paris VI, 2010. http://tel.archives-ouvertes.fr/tel-00550728.

Full text
Abstract:
To meet the oil industry's need for a finer description of the geometry and petrophysical properties of basins and reservoirs, the numerical simulation of flows in porous media must evolve towards algorithms that are more efficient and more robust with respect to the size of the simulations, the complexity of the meshes and the heterogeneities of the porous medium. Domain decomposition methods are an alternative to multigrid methods and could overcome the above difficulties in terms of robustness and efficiency on parallel architectures. They are by nature better suited to parallel computing and are more robust, in particular when the subdomains are solved by direct methods. They also make it possible to handle model couplings, such as wells or conductive faults, within a single framework, and they extend to coupled systems. This thesis deals more specifically with methods defined at the algebraic level. We do not assume prior knowledge of the continuous problem from which the matrix originates, nor do we have access to the matrices before assembly. This lack of a priori information makes it more difficult to build efficient methods. We propose two new ways of constructing domain decomposition methods at the algebraic level: the construction of optimized interface conditions and of a coarse grid. The latter point is particularly important for obtaining methods that are robust with respect to the number of subdomains. The methods are adaptive and based on an analysis of the Krylov space generated during the first iterations of the classical Schwarz method. From the Ritz vectors corresponding to the lowest eigenvalues, we construct interface conditions and coarse grids that annihilate the error on these components.
The methods have been tested on parallel computers on matrices arising from the simulation of porous media.
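The classical Schwarz iteration that the abstract takes as its starting point can be sketched on the 1D Poisson problem -u'' = 1 on (0, 1) with two overlapping subdomains, each solved directly by the Thomas algorithm. Grid size, subdomain split and sweep count are arbitrary choices for illustration, not from the thesis.

```python
def thomas(a, b, c, d):
    """Direct tridiagonal solve (Thomas algorithm).
    a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal (c[-1] unused)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def schwarz_sweep(u, subdomains, h, f):
    """One multiplicative Schwarz sweep: direct solve of -u'' = f on each
    overlapping subdomain, taking Dirichlet data from the current iterate."""
    n = len(u)
    for lo, hi in subdomains:
        k = hi - lo + 1
        a, b, c = [-1.0] * k, [2.0] * k, [-1.0] * k
        d = [f * h * h] * k
        if lo > 0:
            d[0] += u[lo - 1]        # interface value from current iterate
        if hi < n - 1:
            d[-1] += u[hi + 1]
        u[lo:hi + 1] = thomas(a, b, c, d)

n = 49                               # interior grid points on (0, 1)
h = 1.0 / (n + 1)
u = [0.0] * n
for _ in range(30):
    schwarz_sweep(u, [(0, 29), (20, 48)], h, 1.0)   # overlap: points 20..29
# exact solution of -u'' = 1 with u(0) = u(1) = 0 is u(x) = x(1 - x)/2
err = max(abs(ui - 0.5 * (h * (i + 1)) * (1 - h * (i + 1)))
          for i, ui in enumerate(u))
```

The iteration converges geometrically at a rate set by the overlap width; a coarse grid, as the thesis constructs algebraically, becomes necessary when the number of subdomains grows.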
7

Martini, Immanuel [Verfasser]. « Reduced Basis Approximation for Heterogeneous Domain Decomposition Problems / Immanuel Martini ». München : Verlag Dr. Hut, 2018. http://d-nb.info/1153254751/34.

Full text
8

Chadha, Sanchit. « Supporting Heterogeneous Device Development and Communication ». Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/64434.

Full text
Abstract:
To increase market penetration, mobile software makers support their popular applications on all major software platforms, which currently include Android, iOS, and Windows Phone. Although these platforms often offer a drastically different look and feel, cross-platform applications deliver the same core functionality to the end user. Maintaining and evolving such applications currently requires replicating all the changes across all supported variants, a laborious and intellectually taxing enterprise. The state-of-the-practice automated source translation tools fall short, as they are incapable of handling the structural and idiomatic differences of the software frameworks driving major mobile platforms. In addition, popular mobile applications increasingly make use of distributed resources. Certain domains, including social networking, productivity enhancement, and gaming, require different application instances to continuously exchange information with each other. The current state of the art in supporting communication across heterogeneous mobile devices requires the programmer to write platform-specific, low-level API calls that are hard not only to develop but also to evolve and maintain. This thesis reports on the findings of two complementary research activities, conducted with the goal of facilitating the development and communication across heterogeneous mobile devices: (1) a programming model and runtime support for heterogeneous device-to-device communication across mobile applications; (2) a source code recommendation system that synthesizes code snippets from web-based programming resources, based on the functionality written for Android or iOS and vice versa. The conceptual and practical advancements of this research have potential to benefit fellow researchers as well as mobile software developers and users.
Master of Science
9

Gong, Rulan. « Mixing-controlled reactive transport in connected heterogeneous domains ». Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50365.

Full text
Abstract:
Reactive transport models are essential tools for predicting contaminant fate and transport in the subsurface and for designing effective remediation strategies. A sound understanding of subsurface mixing in heterogeneous porous media is the key to realistic modeling of reactive transport. This dissertation aims to investigate the extent of mixing and to improve upscaled effective macroscopic models for mixing-controlled reactive transport in connected heterogeneous formations, which usually exhibit strongly anomalous transport behavior. In this research, a novel approach is developed for an accurate geostatistical characterization of connected heterogeneous formations transformed from Gaussian random fields. Numerical experiments are conducted in such heterogeneous fields with different connectivity to investigate the performance of macroscopic mean transport models for simulating mixing-controlled reactive transport. Results show that good characterization of the anomalous transport of a conservative tracer does not necessarily mean that a model characterizes mixing well; consequently, it is questionable whether models capable of characterizing the anomalous transport behavior of a conservative tracer are appropriate for simulating mixing-controlled reactive transport. In connected heterogeneous fields with large hydraulic conductivity variances, macroscopic mean models ignoring concentration variations yield good predictions, while in fields with intermediate conductivity variances, the models must consider both the mean concentration and concentration variations, which are very difficult to evaluate both theoretically and experimentally. An innovative and practical approach is developed that combines mean conservative and reactive breakthrough curves for estimating concentration variations, which can subsequently be used by variance transport models for prediction.
Furthermore, a new macroscopic framework based on the dual-permeability conceptualization is developed for describing both the mean and the concentration variation for mixing-controlled reactive transport. The developed approach and models are validated by numerical and laboratory visualization experiments. In particular, the new dual-permeability model demonstrates significant improvement for simulating mixing-controlled reactive transport in heterogeneous media with intermediate conductivity variances. Overall, the results, approaches and models from this dissertation advance the understanding of subsurface mixing in anomalous transport and significantly improve the predictive ability for modeling mixing-controlled reactive transport in connected heterogeneous media.
10

Razzaghi, Kouchaksaraei Hadi [Verfasser]. « Orchestrating network services using multi-domain, heterogeneous resources / Hadi Razzaghi Kouchaksaraei ». Paderborn : Universitätsbibliothek, 2020. http://d-nb.info/1222587939/34.

Full text
11

Smith, Daniel Alexander. « Exploratory and faceted browsing, over heterogeneous and cross-domain data sources ». Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/195005/.

Full text
Abstract:
Exploration of heterogeneous data sources increases the value of information by allowing users to answer questions through exploration across multiple sources; users can use information that has been posted across the Web to answer questions and learn about new domains. We have conducted research that lowers the interrogation time of faceted data by combining related information from different sources. The work contributes methodologies for combining heterogeneous sources and for delivering that data to a user interface scalably, with enough performance to support rapid interrogation of the knowledge by the user. The work also shows how to combine linked data sources so that users can create faceted browsers that target the information facets of their needs. The work is grounded and proven in a number of experiments and test cases that study the contributions in domain research work.
12

Inggs, Gordon. « Portable, predictable and partitionable : a domain specific approach to heterogeneous computing ». Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/31595.

Full text
Abstract:
Computing is increasingly heterogeneous. Beyond Central Processing Units (CPUs), different architectures such as massively parallel Graphics Processing Units (GPUs) and reconfigurable Field Programmable Gate Arrays (FPGAs) are seeing widespread adoption. However, the failure of conventional programming approaches to support portable execution, predict runtime characteristics and partition workloads optimally is hindering the realisation of heterogeneous computing. By narrowing the scope of expression in a natural manner, using a domain specific approach, these three challenges can be addressed. A domain specific heterogeneous computing methodology enables three features: portability, prediction and partitioning. Portable, efficient execution is enabled by a domain specific approach because only a subset of domain functions needs to be supported across the heterogeneous computing platforms. Predictive models of runtime characteristics are enabled because the structure of the domain functions may be analysed a priori. Finally, optimal partitioning is possible because the metric models can be used to form an optimisation program that can be solved by heuristic, machine learning or Mixed Integer Linear Programming (MILP) approaches. Using the example application domain of financial derivatives pricing, a domain specific application framework, the Forward Financial Framework (F^3), can execute a single pricing task upon a diverse range of CPU, GPU and FPGA platforms from many different vendors. Not only do these portable implementations exhibit strong parallel scaling, but they are also competitive with state-of-the-art, expert-created implementations of the same option pricing problems. Furthermore, F^3 can model the crucial runtime metrics of latency and accuracy for these heterogeneous platforms, using a small benchmarking procedure, to within 10% of the runtime value of these metrics.
Finally, the framework can optimally partition work across heterogeneous platforms, using a MILP framework that is up to 270 times more efficient than a heuristic approach.
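The partitioning problem described here can be illustrated with a brute-force stand-in for the MILP formulation: assign tasks to platforms of different speeds so as to minimise the makespan. Platform speeds and task sizes below are hypothetical; real instances would of course use a MILP solver rather than enumeration.

```python
from itertools import product

def best_partition(task_sizes, speeds):
    """Exhaustively assign tasks to platforms to minimise the makespan
    (time at which the slowest platform finishes).  A toy stand-in for
    a MILP partitioner; only feasible for tiny instances."""
    best_makespan, best_assign = float("inf"), None
    for assign in product(range(len(speeds)), repeat=len(task_sizes)):
        loads = [0.0] * len(speeds)
        for task, platform in zip(task_sizes, assign):
            loads[platform] += task / speeds[platform]   # time = work / speed
        makespan = max(loads)
        if makespan < best_makespan:
            best_makespan, best_assign = makespan, assign
    return best_makespan, best_assign

# Hypothetical platform speeds in work units/second: CPU, GPU, FPGA.
speeds = [1.0, 4.0, 2.0]
tasks  = [8.0, 4.0, 4.0, 2.0, 2.0]   # hypothetical pricing-task sizes

makespan, assignment = best_partition(tasks, speeds)
```

A MILP encoding replaces the enumeration with binary assignment variables and a minimised makespan variable, which is what makes large instances tractable.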
13

Pegoraro, Adrian. « Modelling heterogeneous nonlinear subwavelength systems with the finite difference time domain method ». Thesis, University of Ottawa (Canada), 2005. http://hdl.handle.net/10393/27007.

Full text
Abstract:
Predicting the response of heterogeneous nonlinear microscopic systems to laser excitation is very difficult using analytical techniques and is only feasible under simplifying assumptions. However, using numerical methods, it is possible to analyze arbitrary systems and make predictions about their behaviour. This information may be used to develop new techniques and a better understanding of measurements. One area which stands to benefit from such methods is nonlinear microscopy. We use finite difference time domain methods to explain experimental measurements and also develop a new nonlinear microscopy technique which shows a significant improvement in axial resolution over traditional techniques. We then explore the behaviour of Maxwell Garnett nanocomposites and illustrate the limitations of the current theoretical models for these systems.
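The core of any FDTD solver is a leapfrog update of electric and magnetic fields. A minimal linear 1D sketch in normalised units (Courant number 1, hard sinusoidal source, no nonlinearity or heterogeneity, so far simpler than the systems the thesis models) looks like this:

```python
import math

def fdtd_1d(steps=200, n=200, src=50, omega=0.1):
    """Minimal 1D FDTD (Yee) loop in normalised units at the 'magic'
    Courant number of 1, driven by a hard sinusoidal source.  A toy
    illustration, not the thesis's nonlinear subwavelength solver."""
    ez = [0.0] * n       # electric field at integer grid points
    hy = [0.0] * n       # magnetic field at half-integer grid points
    for t in range(steps):
        for i in range(n - 1):
            hy[i] += ez[i + 1] - ez[i]     # update H from the curl of E
        for i in range(1, n):
            ez[i] += hy[i] - hy[i - 1]     # update E from the curl of H
        ez[src] = math.sin(omega * t)      # hard source overwrites the field
    return ez

ez = fdtd_1d()
```

Heterogeneous or nonlinear media enter through position- and field-dependent update coefficients in the E-field step, which is where such a toy loop would need to grow to model the systems considered in the thesis.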
14

Rezwan, Shahid Muhammad. « Standardizing heterogeneous IT data for the purpose of automated modeling using domain ontologies ». Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234910.

Full text
Abstract:
Models in Enterprise Architecture (EA) are used to deal with the architectural complexity in an organization. Modeling Enterprise IT (Information Technology) Architecture, which is a part of EA, requires various kinds of information about the objects in the IT infrastructure of an enterprise. It is a big challenge to acquire the required information when the models are created: it is difficult, and the time taken for it cannot be controlled. This is where automation of EA IT modeling becomes particularly important. For this purpose, operational IT data from different sources in the network can be used. Since these data are heterogeneous in nature, comprehending them requires recognizing their semantic meaning. To recognize the semantic meaning of the data in relation to the real world, one has to use an ontology. When this semantic meaning is known, the data can then be standardized to a common representation. A common representation is necessary for ensuring the quality of the data in the merging process. This thesis work focuses on the important area of EA IT model automation by proposing an approach that standardizes the heterogeneous IT architecture data in the network with the help of a domain ontology.
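The standardization step can be sketched as a lookup of source-specific field names in a small domain ontology that maps them to a common vocabulary. The ontology, field names and records below are hypothetical, not the thesis's actual approach.

```python
# Hypothetical domain ontology: source-specific field names -> shared vocabulary.
ONTOLOGY = {"hostname": "Host.name", "host": "Host.name",
            "node_name": "Host.name", "os": "Host.operatingSystem",
            "operating_system": "Host.operatingSystem"}

def standardize(record, ontology):
    """Rewrite a source-specific record into the shared vocabulary;
    fields the ontology does not cover are kept under an 'unmapped' key
    so no operational data is silently dropped before merging."""
    out, unmapped = {}, {}
    for field, value in record.items():
        target = ontology.get(field.lower())
        if target:
            out[target] = value
        else:
            unmapped[field] = value
    out["unmapped"] = unmapped
    return out

# Two heterogeneous sources describing the same host.
scan = standardize({"hostname": "db01", "OS": "Linux"}, ONTOLOGY)
cmdb = standardize({"node_name": "db01", "operating_system": "Linux",
                    "rack": "R7"}, ONTOLOGY)
```

Once both records speak the shared vocabulary, merging them into a single modeled object becomes a straightforward equality check on the common keys.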
15

Corolli, Luca. « Deterministic and Stochastic Optimization For Heterogeneous Decision Phases In The Air Traffic Domain ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/50551.

Full text
Abstract:
Scheduling is a complex activity that is needed in a large number of fields, can involve heterogeneous factors, and may serve different goals. In this thesis, scheduling problems in the air traffic field are addressed at different decision phases. At each phase, the different characteristics of the problems are considered, devoting special attention to uncertainty. Given the heterogeneous characteristics and goals of the problems, different models and methods are proposed to solve each of them. The first analyzed phase is the strategic phase. It takes place around six months before the operation of flights, when they need to be assigned scheduled departure and arrival times at the airports where they operate. Future capacity realizations are very difficult to forecast at this phase, as capacity is influenced by weather conditions. A two-stage stochastic programming model with two alternative formulations is proposed to capture this uncertainty. Since the number of scenarios may be extremely large, Sample Average Approximation is used to solve the model. The proposed model makes it possible to identify advantageous tradeoffs between schedule/request discrepancies, i.e., the distance between the allocated schedule and airline requests, and operational delays. This tradeoff can result in substantial reductions of the cost of delays for airlines. In the computational experiments, delays were reduced by up to 45% on an instance representing a network of European airports. The second considered phase is the tactical phase, which takes place on the day of operation of flights. At this time, complete flight plans need to be defined, specifying the route and operation times of flights. This is done considering two different sources of uncertainty. First, uncertainty on capacity availability is taken into account, similarly to the strategic phase. However, the number of capacity realization scenarios is now small, as they can be defined using available weather forecasts.
The problem of minimizing delay costs under this source of uncertainty is the Stochastic Air Traffic Flow Management problem. A two-stage stochastic programming model with two alternative formulations is proposed to solve it. An ad-hoc heuristic that exploits the favorable structure of the model is used to solve problem instances within short computation times. The analysis of the Value of the Stochastic Solution shows that the proposed stochastic model can significantly reduce delay costs when bad weather affects the whole network. Second, the implicit uncertainty on the departure time of flights is taken into account. This kind of uncertainty involves operations that may delay the scheduled departure time of a flight. The flexibility of the scheduled departure time, as well as of the other flight operations, is determined by defining time windows within which flights are granted capacity resources to operate. The narrower a time window, the more critical a flight operation. The problem of minimizing delay cost while maximizing time windows is addressed by the Air Traffic Flow Management Problem with Time Windows. The problem is formulated with two alternative deterministic models, one of which is able to provide time windows in 40 seconds on average for instances involving over 6,000 flights. Less conservative criteria to reserve capacity within time windows can also be used. Although these criteria do not guarantee that a flight can execute its operations at every instant of a time window, their implementation is shown to be viable: less than 0.14% of flights were subject to capacity shortages in the analyzed cases. Finally, the operational phase takes place while operations are being executed.
The goal at this phase is to manage the final departure times announced by flights, with uncertain information becoming deterministic, allowing them to depart at the announced time even if it exceeds the assigned departure time window. This problem is named Real Time Flight Rescheduling with Time Windows. Resources are provided to flights that need them by reallocating previously reserved capacity with an algorithm that follows the Ration-By-Schedule mechanism. Both the practical usage of time windows and the impact of collaboration among airlines are studied. While airline collaboration limits time window flexibility up to some time before the scheduled departure of a flight, it can reduce additional flight delays by over 14%. This thesis is a first work that follows flight schedules from the moment of their definition to the time of execution of flights. Providing cost reductions by considering the different factors that influence each decision phase can lead to a global improvement in the management of flight operations, whose delays are very costly for airlines in practice.
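The two-stage logic described above, where a first-stage schedule is chosen against sampled capacity scenarios, can be illustrated with a deliberately tiny Sample Average Approximation sketch (the costs, scenarios, and function name are invented for illustration; this is not the thesis's model):

```python
def saa_best_schedule(requested, shifts, scenarios, disc_cost=1.0, delay_cost=3.0):
    """Pick the schedule shift minimizing first-stage discrepancy cost plus
    the sample-average second-stage delay cost over capacity scenarios."""
    best_shift, best_obj = None, float("inf")
    for s in shifts:
        scheduled = requested + s
        discrepancy = abs(s) * disc_cost  # distance from the airline's request
        # second-stage delay: the flight waits until capacity opens in each scenario
        avg_delay = sum(max(0, open_t - scheduled) for open_t in scenarios) / len(scenarios)
        obj = discrepancy + delay_cost * avg_delay
        if obj < best_obj:
            best_shift, best_obj = s, obj
    return best_shift, best_obj

# shifting the slot by 2 balances request discrepancy against expected delay
best_shift, best_cost = saa_best_schedule(0, range(0, 6), [0, 2, 4])  # → (2, 4.0)
```

In a realistic SAA setting the scenario list would be a large random sample of capacity realizations and the minimization would be a stochastic program rather than an enumeration, but the tradeoff being optimized is the same.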
Styles APA, Harvard, Vancouver, ISO, etc.
16

Zwicklbauer, Stefan [Verfasser], et Michael [Akademischer Betreuer] Granitzer. « Robust Entity Linking in Heterogeneous Domains / Stefan Zwicklbauer ; Betreuer : Michael Granitzer ». Passau : Universität Passau, 2017. http://d-nb.info/1144611385/34.

Texte intégral
17

Reiche, Oliver [Verfasser]. « A Domain-Specific Language Approach for Designing and Programming Heterogeneous Image Systems / Oliver Reiche ». München : Verlag Dr. Hut, 2018. http://d-nb.info/1168534674/34.

Texte intégral
18

Ye, Ming. « Parallel finite element algorithm for transient flow in bounded randomly heterogeneous domains ». Diss., The University of Arizona, 2002. http://hdl.handle.net/10150/280226.

Texte intégral
Résumé :
We consider the effect of randomness of hydraulic conductivities K(x) on numerical predictions of transient flow in bounded domains driven by random source, initial and boundary terms, without resorting to Monte Carlo simulation. Our aim is to allow optimum unbiased prediction of hydraulic heads h(x,t) and fluxes q(x,t) by means of their respective ensemble moments, ⟨h(x,t)⟩c and ⟨q(x,t)⟩c, conditioned on measurements of K(x). These predictors have been shown by Tartakovsky and Neuman (1998) to satisfy exactly a space-time nonlocal (integro-differential) conditional mean flow equation in which ⟨q(x,t)⟩c is generally non-Darcian. Exact nonlocal equations have been obtained for second conditional moments of head and flux that serve as measures of predictive uncertainty. The authors developed recursive closure approximations for the first and second conditional moment equations through expansion in powers of a small parameter σᵧ, which represents the standard estimation error of ln K(x). The authors explored the possibility of localizing the exact moment equations in real, Laplace- and/or infinite Fourier-transformed domains. In this work we show how to solve recursive closure approximations of nonlocal first and second conditional moment equations numerically, to first order in σ²ᵧ, in a bounded two-dimensional domain. Our solution is based on Laplace transformation of the moment equations, parallel finite element solution in the complex Laplace domain, and numerical inversion of the solution from the Laplace to the real time domain. We present a detailed comparison between numerical solutions of nonlocal and localized moment equations, and Monte Carlo simulations, under superimposed mean-uniform and convergent flow regimes in two dimensions. The results are shown to compare very well for variances σ²ᵧ as large as 4. The degree to which parallelization enhances computational efficiency is explored.
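The final numerical inversion from the Laplace to the real time domain can be illustrated with the classical Gaver-Stehfest algorithm; this is a generic, stdlib-only sketch of one standard inversion method, not the author's parallel finite element implementation:

```python
from math import exp, factorial, log

def stehfest_coefficients(N):
    """Gaver-Stehfest weights V_i (N must be even)."""
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * factorial(2 * k)
                  / (factorial(N // 2 - k) * factorial(k) * factorial(k - 1)
                     * factorial(i - k) * factorial(2 * k - i)))
        V.append((-1) ** (N // 2 + i) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from samples of its transform F(s) on the real axis."""
    V = stehfest_coefficients(N)
    a = log(2.0) / t
    return a * sum(V[i - 1] * F(i * a) for i in range(1, N + 1))

# F(s) = 1/(s+1) is the transform of f(t) = exp(-t)
approx = invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0)  # ≈ exp(-1)
```

Stehfest inversion only needs F(s) at real s and works well for the smooth, monotone responses typical of head and flux moments; inversion of complex Laplace-domain solutions, as in the thesis, uses more general contour-based methods.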
19

Holt, Jennifer Jane. « Finite difference time domain modeling of dispersion from heterogeneous ground properties in ground penetrating radar ». Columbus, Ohio : Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1080136001.

Texte intégral
Résumé :
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xxii, 212 p.; also includes graphics. Includes abstract and vita. Advisor: Jeffrey Daniels, Dept. of Geological Sciences. Includes bibliographical references (p. 152-154).
20

Hendili, Sofiane. « Structures élastiques comportant une fine couche hétérogénéités : étude asymptotique et numérique ». Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20051/document.

Texte intégral
Résumé :
Cette thèse est consacrée à l'étude de l'influence d'une fine couche hétérogène sur le comportement élastique linéaire d'une structure tridimensionnelle. Deux types d'hétérogénéités sont pris en compte : des cavités et des inclusions élastiques. Une étude complémentaire, dans le cas d'inclusions de grande rigidité, a été réalisée en considérant un problème de conduction thermique. Une analyse formelle par la méthode des développements asymptotiques raccordés conduit à un problème d'interface qui caractérise le comportement macroscopique de la structure. Le comportement microscopique de la couche est lui déterminé sur une cellule de base. Le modèle asymptotique obtenu est ensuite implémenté dans un code éléments finis. Une étude numérique permet de valider les résultats de l'analyse asymptotique.
This thesis is devoted to the study of the influence of a thin heterogeneous layer on the linear elastic behavior of a three-dimensional structure. Two types of heterogeneities are considered: cavities and elastic inclusions. For inclusions of high rigidity, a further study was performed in the case of a heat conduction problem. A formal analysis using the matched asymptotic expansions method leads to an interface problem which characterizes the macroscopic behavior of the structure. The microscopic behavior of the layer is determined on a basic cell. The asymptotic model obtained is then implemented in a finite element software. A numerical study is used to validate the results of the asymptotic analysis.
21

Lubenko, Ivans. « Towards robust steganalysis : binary classifiers and large, heterogeneous data ». Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:c1ae44b8-94da-438d-b318-f038ad6aac57.

Texte intégral
Résumé :
The security of a steganography system is defined by our ability to detect it. It is of no surprise then that steganography and steganalysis both depend heavily on the accuracy and robustness of our detectors. This is especially true when real-world data is considered, due to its heterogeneity. The difficulty of such data manifests itself in a penalty that has periodically been reported to affect the performance of detectors built on binary classifiers; this is known as cover source mismatch. It remains unclear how the performance drop that is associated with cover source mismatch is mitigated or even measured. In this thesis we aim to show a robust methodology to empirically measure its effects on the detection accuracy of steganalysis classifiers. Some basic machine-learning-based methods, which take their origin in domain adaptation, are proposed to counter it. Specifically, we test two hypotheses through an empirical investigation. First, that linear classifiers are more robust than non-linear classifiers to cover source mismatch in real-world data and, second, that linear classifiers are so robust that, given sufficiently large mismatched training data, they can equal the performance of any classifier trained on small matched data. With the help of theory we draw several nontrivial conclusions based on our results. The penalty from cover source mismatch may, in fact, be a combination of two types of error: estimation error and adaptation error. We show that relatedness between training and test data, as well as the choice of classifier, both have an impact on adaptation error, which, as we argue, ultimately defines a detector's robustness. This provides a novel framework for reasoning about what is required to improve the robustness of steganalysis detectors. Whilst our empirical results may be viewed as a first step towards this goal, we show that our approach provides clear advantages over earlier methods.
To our knowledge this is the first study of this scale and structure.
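The cover source mismatch penalty can be demonstrated on synthetic data: a classifier trained on one "source" loses accuracy when the test data comes from a shifted source. A minimal sketch with a nearest-centroid classifier (the distributions, shift, and seeds are invented; the thesis works with real steganalysis features and real classifiers):

```python
import random

def nearest_centroid_accuracy(train, test_set):
    """Fit a two-class nearest-centroid classifier and return test accuracy."""
    c0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
    c1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
    # predict class 1 when the feature is closer to centroid c1
    hits = sum((abs(x - c0) > abs(x - c1)) == (y == 1) for x, y in test_set)
    return hits / len(test_set)

def sample(n, shift=0.0, seed=0):
    """Cover (class 0) and stego (class 1) 1-D features from one 'source'."""
    rng = random.Random(seed)
    return ([(rng.gauss(0.0 + shift, 1.0), 0) for _ in range(n)]
            + [(rng.gauss(4.0 + shift, 1.0), 1) for _ in range(n)])

train = sample(500, seed=1)
matched = nearest_centroid_accuracy(train, sample(500, seed=2))
mismatched = nearest_centroid_accuracy(train, sample(500, shift=3.0, seed=2))
```

Here the mismatched source is simply a translated copy of the training distribution, yet accuracy collapses; this is the kind of penalty the thesis measures, and that domain adaptation methods try to counter.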
22

Xue, Weicheng. « CPU/GPU Code Acceleration on Heterogeneous Systems and Code Verification for CFD Applications ». Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/102073.

Texte intégral
Résumé :
Computational Fluid Dynamics (CFD) applications usually involve intensive computations, which can be accelerated by using accelerators, especially GPUs, given their common use in the scientific computing community. In addition to code acceleration, it is important to ensure that the code and algorithm are implemented numerically correctly, which is called code verification. This dissertation focuses on accelerating research CFD codes on multi-CPUs/GPUs using MPI and OpenACC, as well as on code verification for turbulence model implementations using the method of manufactured solutions and code-to-code comparisons. First, a variety of performance optimizations, both agnostic and specific to applications and platforms, are developed in order to 1) improve the heterogeneous CPU/GPU compute utilization; 2) improve the memory bandwidth to the main memory; 3) reduce communication overhead between the CPU host and the GPU accelerator; and 4) reduce the tedious manual tuning work for GPU scheduling. Both finite difference and finite volume CFD codes and multiple platforms with different architectures are utilized to evaluate the performance optimizations. A maximum speedup of over 70 is achieved on 16 V100 GPUs over 16 Xeon E5-2680v4 CPUs for multi-block test cases. In addition, systematic studies of code verification are performed for a second-order accurate finite volume research CFD code. Cross-term sinusoidal manufactured solutions are applied to verify the Spalart-Allmaras and k-omega SST model implementations, both in 2D and 3D. This dissertation shows that the spatial and temporal schemes are implemented numerically correctly.
Doctor of Philosophy
Computational Fluid Dynamics (CFD) is a numerical method to solve fluid problems, which usually requires a large amount of computations. A large CFD problem can be decomposed into smaller sub-problems which are stored in discrete memory locations and accelerated by a large number of compute units. In addition to code acceleration, it is important to ensure that the code and algorithm are implemented correctly, which is called code verification. This dissertation focuses on the CFD code acceleration as well as the code verification for turbulence model implementation. In this dissertation, multiple Graphic Processing Units (GPUs) are utilized to accelerate two CFD codes, considering that the GPU has high computational power and high memory bandwidth. A variety of optimizations are developed and applied to improve the performance of CFD codes on different parallel computing systems. The program execution time can be reduced significantly especially when multiple GPUs are used. In addition, code-to-code comparisons with some NASA CFD codes and the method of manufactured solutions are utilized to verify the correctness of a research CFD code.
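The verification idea above, checking the observed order of accuracy of a discretization against its formal order using a known manufactured solution, can be sketched generically. This toy checks a 1-D central difference against u(x) = sin(πx); it is not the dissertation's finite volume CFD code:

```python
from math import sin, pi, log

def max_error(n):
    """Max error of the central second difference of the manufactured
    solution u(x) = sin(pi x) against its exact second derivative."""
    h = 1.0 / n
    err = 0.0
    for i in range(1, n):
        x = i * h
        d2 = (sin(pi * (x - h)) - 2.0 * sin(pi * x) + sin(pi * (x + h))) / h**2
        err = max(err, abs(d2 - (-pi**2 * sin(pi * x))))
    return err

# halving h should divide the error by ~4 for a second-order scheme
order = log(max_error(32) / max_error(64)) / log(2.0)  # ≈ 2
```

If a coding bug degraded the scheme, the observed order would fall below the formal order of 2, which is exactly the signal the method of manufactured solutions exploits.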
23

Park, Sung Hee. « Discipline-Independent Text Information Extraction from Heterogeneous Styled References Using Knowledge from the Web ». Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/52860.

Texte intégral
Résumé :
In education and research, references play a key role. They give credit to prior works, and provide support for reviews, discussions, and arguments. The set of references attached to a publication can help describe that publication, can aid with its categorization and retrieval, can support bibliometric studies, and can guide interested readers and researchers. If suitably analyzed, that set can aid with the analysis of the publication itself, especially regarding all its citing passages. However, extracting and parsing references are difficult problems. One concern is that there are many styles of references, and identifying what style was employed is problematic, especially in heterogeneous collections of theses and dissertations, which cover many fields and disciplines, and where different styles may be used even in the same publication. We address these problems by drawing upon suitable knowledge found in the WWW. In particular, we use appropriate lists (e.g., of names, cities, and other types of entities). We use available information about the many reference styles found, in a type of reverse engineering. We use available references to guide machine learning. In particular, we research a two-stage classifier approach, with multi-class classification with respect to reference styles, and partially solve the problem of parsing surface representations of references. We describe empirical evidence for the effectiveness of our approach and plans for improvement of our method.
Ph. D.
24

Curran, Kevin. « Dynamic reconfiguration of IP domain middleware stacks to support multicast multimedia distribution in a heterogeneous environment ». Thesis, University of Ulster, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.413873.

Texte intégral
25

Bussoli, Ilaria. « Heterogeneous Graphical Models with Applications to Omics Data ». Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3423293.

Texte intégral
Résumé :
Thanks to the advances in bioinformatics and high-throughput methodologies of the last decades, a large, unprecedented amount of biological data coming from various experiments in metabolomics, genomics and proteomics is available. This has led researchers to conduct ever more comprehensive molecular profiling of biological samples across multiple aspects of genomic activity, thus introducing new challenges in the development of statistical tools to integrate and model multi-omics data. The main research objective of this thesis is to develop a statistical framework for modelling the interactions between genes when their activity is measured on different domains; to do so, our approach relies on the concept of a multilayer network, and on how structures of this type can be combined with graphical models for mixed data, i.e., data comprising variables of different nature (e.g., continuous, categorical, skewed, to name a few). We further develop an algorithm for learning the structure of the undirected multilayer networks underlying the proposed models, showing its promising results through empirical analyses on cancer data downloaded from the public TCGA consortium.
26

Mohapi, Lerato Jerfree. « A domain specific language for facilitating automatic parallelization and placement of SDR patterns into heterogeneous computing architectures ». Doctoral thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/26860.

Texte intégral
Résumé :
This thesis presents a domain-specific language (DSL) for software defined radio (SDR), referred to as OptiSDR. The main objective of OptiSDR is to facilitate the development and deployment of SDR applications into heterogeneous computing architectures (HCAs). As HCAs are becoming mainstream in SDR applications such as radar, radio astronomy, and telecommunications, parallel programming and optimization processes are also becoming cumbersome, complex, and time-consuming for SDR experts. Therefore, the OptiSDR DSL and its compiler framework were developed to alleviate these parallelization and optimization processes, together with developing execution models for DSP and dataflow models of computation suitable for SDR-specific computations. The OptiSDR target HCAs are composed of graphics processing units (GPUs), multi-core central processing units (MCPUs), and field programmable gate arrays (FPGAs). The methodology used to implement the OptiSDR DSL involved an extensive review process of existing SDR tools and the extent to which they address the complexities associated with parallel programming and optimizing SDR applications for execution in HCAs. From this review process, it was discovered that, while HCAs are used to accelerate many SDR computations, there is a shortage of intuitive parallel programming frameworks that efficiently utilize the HCAs' computing resources for achieving adequate performance for SDR applications. There were, however, some very good general-purpose parallel programming frameworks identified in the literature review, including Python-based tools such as NumbaPro and Copperhead, as well as the prevailing Delite embedded DSL compiler framework for heterogeneous targets.
The Delite embedded DSL compiler framework motivated and powered the OptiSDR compiler development in that it provides four main compiler development capabilities that are desired in OptiSDR: 1) Generic data parallel executable patterns; 2) Execution semantics for heterogeneous MCPU-GPU run-time; 3) Abstract syntax creation using intermediate representation (IR) nodes; and 4) Extensibility for defining new syntax for other domains. The OptiSDR DSL design process using this Delite framework involved designing new structured parallel patterns for DSP algorithms (e.g. FIR, FFT, convolution, correlation, etc.), dataflow models of computation (MoC), parallel loop optimizations (tiling and space splitting), and optimal memory access patterns. Advanced task and data parallel patterns were applied in the OptiSDR dataflow MoCs, which are especially suitable for SDR computations where FPGA-based realtime data acquisition systems feed data into multi-GPUs for implementation of parallel DSP algorithms. Furthermore, the research methodology involved an evaluation process that was used to determine the OptiSDR language's expressive power, efficiency, performance, accuracy, and ease of use in SDR applications, such as radar pulse compression and radio frequency sweeping algorithms. The results include measurements of performance and accuracy, productivity versus performance, and real-time processing speeds and accuracy. The performance of some of the regularly used modules, such as FFT-based Hilbert and cross-correlation, was found to be very high, with computation speeds ranging from 70.0 GFLOPS to 72.6 GFLOPS, and speedups of up to 80× compared to sequential C/C++ programs and 50× for Matlab's parallel loops. Accuracy was favourable in most cases. For instance, OptiSDR Octave-like DSP instantiations were found to be accurate, with L2 norm forward-errors ranging from 10⁻¹³ to 10⁻¹⁶ for smaller and bigger SDR programs, respectively.
It can therefore be concluded from the analysis in this thesis that the objectives, which include alleviating the complexities in parallel programming and optimizing SDR applications for execution in HCAs, were met. Moreover, the following hypothesis was validated, namely: "It is possible to design a DSL to facilitate the development of SDR applications and their deployment on HCAs without significant degradation of software performance, and with possible improvement in the automatically emitted low-level source code quality." It was validated by: 1) Defining the OptiSDR attributes such as parallel DSP patterns and dataflow MoCs; 2) Providing parameterizable SDR modules with automatic parallelization and optimization for performance and accuracy; and 3) Presenting a set of intuitive validation constructs for accuracy testing using root-mean-square error, and functional verification of DSP using two-dimensional graphics plotting for radar and real-time spectral analysis plots.
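The cross-correlation module mentioned above can be illustrated, in spirit, with a plain direct cross-correlation that recovers a known delay between two signals (a stdlib-only sketch; OptiSDR itself emits optimized FFT-based GPU code, and these function names are invented):

```python
def cross_correlate(a, b):
    """Direct cross-correlation of b against a for non-negative lags."""
    n = len(a)
    return [sum(a[i + lag] * b[i] for i in range(n - lag)) for lag in range(n)]

def estimate_delay(a, b):
    """Lag at which b best aligns with a (argmax of the correlation)."""
    corr = cross_correlate(a, b)
    return max(range(len(corr)), key=lambda lag: corr[lag])

# a decaying pulse delayed by 7 samples should be located at lag 7
pulse = [0.0] * 64
for i in range(8):
    pulse[i] = 1.0 - i / 8.0
delayed = [0.0] * 7 + pulse[:-7]
```

This direct form costs O(n²); the thesis's 70+ GFLOPS figures come from the equivalent FFT-based formulation, which reduces the cost to O(n log n) and parallelizes well on GPUs.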
27

Roslund, Anton. « USING DOMAIN KNOWLEDGE FUNCTIONS TO ACCOUNT FOR HETEROGENEOUS CONTEXT FOR TASKS IN DECISION SUPPORT SYSTEMS FOR PLANNING ». Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-41105.

Texte intégral
Résumé :
This thesis describes a way to represent domain knowledge as functions. These functions can be composed and used to better predict the time needed for a task. They can aggregate data from different systems to provide a more complete view of the contextual environment without the need to consolidate data into one system. They can be crafted to make a more precise time prediction for a specific task that needs to be carried out in a specific context. We describe a possible way to structure and model data that could be used with the functions. As a proof of concept, a prototype was developed to test an envisioned scenario with simulated data. The prototype is compared to predictions using min, max and average values from previous experience. The result shows that domain knowledge, represented as functions, can be used for improved prediction. This way of defining functions for domain knowledge can be used as a part of a CBR system to provide decision support in a problem domain where information about context is available. It is scalable in the sense that more context can be added to new tasks over time, and more functions can be added and composed. The functions can be validated on old cases to assure consistency.
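The idea of composing domain-knowledge functions that adjust a base time estimate can be sketched as follows (the factor functions, names, and numbers are invented for illustration; they are not taken from the thesis):

```python
def compose(*factors):
    """Combine context-aware adjustment functions into one estimator."""
    def adjusted(base_minutes, context):
        est = base_minutes
        for f in factors:
            est = f(est, context)
        return est
    return adjusted

# hypothetical domain-knowledge functions, one per piece of context
def weather(est, ctx):
    return est * (1.5 if ctx.get("rain") else 1.0)

def crew_experience(est, ctx):
    return est * (0.8 if ctx.get("experienced_crew") else 1.0)

estimate = compose(weather, crew_experience)
# estimate(60, {"rain": True}) → 90.0
# estimate(60, {"rain": True, "experienced_crew": True}) → 72.0
```

Because each factor reads only the context keys it cares about, new functions can be added and composed over time without consolidating the underlying data sources, mirroring the scalability argument above.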
28

Maier, Siegfried [Verfasser], Jürgen [Gutachter] Saal et Dieter [Gutachter] Bothe. « Fluid Flow, Nonsmooth Domains, and Heterogeneous Catalysis / Siegfried Maier ; Gutachter : Jürgen Saal, Dieter Bothe ». Düsseldorf : Universitäts- und Landesbibliothek der Heinrich-Heine-Universität Düsseldorf, 2016. http://d-nb.info/1113747919/34.

Texte intégral
29

Baranda, Hortigüela Jorge. « End-to-end network service orchestration in heterogeneous domains for next-generation mobile networks ». Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/672782.

Texte intégral
Résumé :
5G marks the beginning of a deep revolution in the mobile network ecosystem, transitioning to a network of services to satisfy the demands of new players, the vertical industries. This revolution implies a redesign of the overall mobile network architecture, where complexity, heterogeneity, dynamicity, and flexibility will be the rule. Under such context, automation and programmability are essential to support this vision and overcome current rigid network operation processes. Software Defined Networking (SDN), Network Function Virtualization (NFV) and network slicing are key enabling techniques to provide such capabilities. They are complementary, but they are still in their infancy, and the synergies between them must be exploited to realise the mentioned vision. The aim of this thesis is to further contribute to their development and integration in next-generation mobile networks by designing an end-to-end (E2E) network service orchestration (NSO) architecture which, aligned with guidelines and specifications provided by the main standardization bodies, goes beyond current management and orchestration (MANO) platforms to fulfil network service lifetime requirements in heterogeneous multi-technology/administrative network infrastructures shared by concurrent instances of diverse network services. Following a bottom-up approach, we start by studying some SDN aspects related to the management of wireless network elements and their integration into hierarchical control architectures orchestrating networking resources in a multi-technology (wireless, optical, packet) infrastructure. Then, this work is integrated in an infrastructure manager module executing the joint resource abstraction and allocation of network and compute resources in distributed points of presence (PoPs) connected by a transport network, an aspect which is not (or only lightly) handled by current MANO platforms. This is the module where the integration between NFV and SDN techniques is executed.
This integration is commanded by a Service Orchestrator module, in charge of automating the E2E lifecycle management of network services implementing network slices (NS) based on the vertical requirements and the available infrastructure resources, while fulfilling service level agreements (SLAs) also during run-time operation. This architecture, focused on single administrative domain (AD) scenarios, constitutes the first group of contributions of this thesis. The second group of contributions evolves this initial architecture to deal with the orchestration and sharing of NS and its network slice subnet instances (NSSIs) involving multiple ADs. The main differential aspect with respect to current state-of-the-art solutions is the consideration of resource orchestration aspects during the whole orchestration process. This is fundamental to achieve the interconnection of NSSIs, hence making E2E multi-domain orchestration and network slicing a reality in practice. Additionally, this work also considers SLA management aspects by means of scaling actions during run-time operation in such complex scenarios. The third group of contributions demonstrates the validity and applicability of the resulting architectures, workflows, and interfaces by implementing and evaluating them in real experimental infrastructures featuring multiple ADs and transport technologies interconnecting distributed computing PoPs. The performed experimentation considers network service definitions close to real vertical use cases, namely automotive and eHealth, which help bridge the gap between network providers and vertical industry stakeholders. Experimental results show that network service creation and scaling times in the order of minutes can be achieved for single and multi-AD scenarios, in line with 5G network targets. Moreover, these measurements serve as a reference for benchmarking the different operations involved during network service deployment.
Such analyses are scarce in the current literature.
5G marks the beginning of a major revolution in mobile networks, which are becoming service-oriented networks designed to meet the demands of new actors, the vertical industries. This revolution entails a complete redesign of the network architecture, where complexity, heterogeneity, dynamicity and flexibility will be the norm. In this context, automation and programmability will be essential to overcome today's rigid network operation processes. Software-defined networking (SDN), network function virtualization (NFV) and network slicing are key techniques to provide these capabilities. They are complementary but still recent, and their synergies must be exploited to realize the new vision. The goal of this thesis is to contribute to their development and integration in the new generations of mobile networks by designing an end-to-end (E2E) network service orchestration (NSO) architecture which, aligned with guidelines and specifications of the main standardization bodies, goes beyond current management and orchestration (MANO) systems in order to instantiate and guarantee the requirements of the diverse network services deployed concurrently over shared heterogeneous infrastructures combining multiple technologies and administrative domains (ADs). Following a bottom-up approach, we first study SDN aspects related to the management of wireless network elements and their integration into hierarchical network resource orchestration architectures over multi-technology (wireless, optical, packet) infrastructures. This work is then integrated into an infrastructure management module that jointly performs the abstraction and allocation of network and compute resources across multiple distributed points of presence (PoPs) connected by a transport network, an aspect that is not (or only slightly) considered by current MANO systems.
This module carries out the integration of the NFV and SDN techniques. The integration is driven by the Service Orchestrator module, which automates the E2E lifecycle management of network services, implementing the different network slices based on the verticals' requirements and the available infrastructure resources while fulfilling service-level agreements (SLAs) during service operation. This architecture, focused on single-AD scenarios, forms the first group of contributions of this thesis. The second group of contributions evolves this architecture by addressing the orchestration and sharing of network slices and their components (NSSIs) in multi-AD scenarios. The detailed consideration of resource orchestration aspects is the main point of difference with respect to the literature; it is fundamental for interconnecting NSSIs, making E2E orchestration and network slicing in multi-AD scenarios a reality. In addition, SLA management through scaling actions during service operation is considered in these scenarios. The third group of contributions validates the resulting architectures, procedures and interfaces, which have been implemented and evaluated on real experimental infrastructures featuring multiple ADs and transport technologies interconnecting distributed PoPs. This experimentation considers network service definitions close to real vertical use cases, such as automotive and eHealth, helping to bridge the gap between network providers and verticals. The experimental results show that the creation and scaling of network services can be performed in a few minutes in both single- and multi-AD scenarios, in line with the 5G target network indicators.
These measurements, scarce in the current literature, serve as a reference to characterize the different operations involved in service deployment.
Computer architecture
Styles APA, Harvard, Vancouver, ISO, etc.
30

Valiveti, Dakshina M. « Integrated Multiscale Characterization and Modeling of Ductile Fracture in Heterogeneous Aluminum Alloys ». The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1253035787.

Full text
31

Mathaikutty, Deepak Abraham. « Functional Programming and Metamodeling frameworks for System Design ». Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/32639.

Full text
Abstract:
System-on-Chip (SoC) and other complex distributed hardware/software systems contain heterogeneous components whose behavior is best captured by different models of computation (MoCs). As a result, any design framework for such systems requires the capability to express heterogeneous MoCs. Although a number of system-level design languages (SLDLs) and frameworks have proliferated over the last few years, most of them are lacking in multiple ways. Some of the SLDLs and system design frameworks we have worked with are SpecC, Ptolemy II, SystemC-H, etc. From our analysis of these, we identify the following shortcomings: first, their dependence on specific programming-language artifacts (Java or C/C++) makes them less amenable to formal analysis; second, the refinement strategies proposed in the design flows based on these languages lack formal semantic underpinnings, making it difficult to prove that refinements preserve correctness; and third, none of the available SLDLs is easily customizable by users. In our work, we address these problems as follows. To alleviate the first problem, we follow Axel Jantsch's paradigm of function-based semantic definitions of MoCs and formulate a functional programming framework called SML-Sys. We illustrate through a number of examples how to model heterogeneous computing systems using SML-Sys. Our framework supports formal reasoning thanks to the formal semantic underpinning inherited from SML's precise denotational semantics. To handle the second problem and apply refinement strategies at a higher level, we propose a refinement methodology and provide a semantics-preserving transformation library within our framework. To address the third shortcoming, we have developed EWD, which allows users to customize MoC-specific visual modeling syntax defined as a metamodel. EWD is developed using the metamodeling framework GME (Generic Modeling Environment). It allows automatic design-time syntactic and semantic checks on the models for conformance to their metamodel. Modeling in EWD facilitates saving the model in an XML-based interoperability language (IML) we defined for this purpose. The IML format is in turn automatically translated into Standard ML or Haskell models. These may then be executed and analyzed either by our existing model-analysis tools in SML-Sys or by the ForSyDe environment. We also generate SMV-based templates from the XML representation to obtain verification models.
Master of Science
32

Loo, Clement K. « Ecosystem Health Reconsidered ». University of Cincinnati / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1311605312.

Full text
33

Kosovac, Branka. « A framework for managing information from heterogeneous, distributed, and autonomous sources in the architecture, engineering, construction, and facilities management domain ». Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/30899.

Full text
Abstract:
This dissertation proposes a framework that allows different efforts aiming to enhance information management in the architecture, engineering, construction, and facilities management (AEC/FM) industries to coexist and support each other by sharing resources, services, and outputs. The main motivation for this research was the lack of support for non-routine tasks and business agility in the information systems serving the domain. An extensive analysis of information needs and available solutions identified the domain's heterogeneity and complexity as key challenges for successful information management, and efficient communication between a wide range of human and machine participants as the missing link. Suggesting that such communication needs to involve all the components of human-to-human communication (syntax, semantics, and pragmatics), the existing information-management resources and approaches were analyzed within a semiotic framework in order to identify shared simple elements that can be used to relate them. The proposed framework identifies three basic types of assertions (senses, relationships, and information) and two of their properties (category and scope) as a set of basic elements that can relate all kinds of semantic resources, as well as information-management approaches based on linguistics, information-retrieval theory and practice, document structure, and knowledge representation. The framework enables consistent management of different types of information at any level of granularity, and the correlation of assertions involving information and its subject and context domains. A pilot implementation demonstrated on a small scale how the proposed framework can be used in practice. The envisioned system consists of numerous and diverse components that share their content via Web services using the proposed framework, together with a set of shared resources that include registries and specialized services offering senses (i.e. terminology mapping and resolution) and relationships (i.e. conceptualizations). The research uses a combination of constructive and exploratory methods. The basic framework was validated by its ability to express all types of semantic resources, and the pilot implementation by comparison to a set of predefined requirements. However, the real benefits of the proposed framework can be proven only when it is used in combination with a variety of existing, emerging, and future techniques in complex real-world environments, as intended.
Applied Science, Faculty of
Civil Engineering, Department of
Graduate
34

Amin, Kareem. « DeepKAF : A Knowledge Intensive Framework for Heterogeneous Case-Based Reasoning in Textual Domains / Kareem Amin ; Betreuer : Andreas Dengel ». Kaiserslautern : Technische Universität Kaiserslautern, 2021. http://d-nb.info/1241537739/34.

Full text
35

Tayachi, Manel. « Couplage de modèles de dimensions hétérogènes et application en hydrodynamique ». Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM031/document.

Full text
Abstract:
The work presented here focuses on electrodes made of silicon, a promising material to replace graphite as the anode active material in Li-ion batteries (LIBs). The first part of the manuscript is dedicated to the study of silicon (de)lithiation mechanisms by Auger Electron Spectroscopy (AES). Using this surface characterization technique, which allows individual particles to be investigated in their electrode environment, our results show that the first silicon lithiation occurs through a two-phase mechanism, cr-Si / a-Li3.1Si, whereas the subsequent (de)lithiation steps are solid-solution-type processes. Upon (de)alloying with lithium, silicon particles undergo huge volume variations, leading to rapid capacity fading. By combining several characterization techniques, the failure mechanisms of a silicon electrode during aging are studied. In particular, electrochemical impedance spectroscopy and mercury porosimetry analyses reveal a pronounced evolution of the electrode porosity upon cycling. A model that mainly attributes the capacity fading to the instability of the Solid Electrolyte Interphase (SEI) at the silicon particle surface is proposed. To stabilize this passivation layer and thus improve the electrochemical performance of silicon electrodes, the influence of two parameters is studied: the electrolyte and the "lithiation domain" of silicon, the latter being associated with the evolution of the active material composition upon cycling. Finally, using these results, promising performances are obtained for LIBs containing a silicon electrode.
36

Vara, Larsen Matias. « B-COoL : un métalangage pour la spécification des opérateurs de coordination des langages ». Thesis, Nice, 2016. http://www.theses.fr/2016NICE4013/document.

Full text
Abstract:
Modern devices embed several subsystems with different characteristics that communicate and interact in many ways. This makes their development complex, since a designer has to deal with the heterogeneity of each subsystem as well as with the interactions between them. To tackle the development of complex systems, Model-Driven Engineering promotes the use of various, possibly heterogeneous, structural and behavioral models. In this context, the coordination of behavioral models into a single integrated model is necessary to support validation and verification: it allows system designers to understand and validate the global, emerging behavior of the system. However, the manual coordination of models is tedious and error-prone, and current approaches to automating it are bound to a fixed set of coordination patterns. Moreover, they encode the pattern into a tool, thus limiting reasoning on the global system behavior. In this thesis, we propose the Behavioral Coordination Operator Language (B-COoL) to reify coordination patterns between specific domains by defining coordination operators between the Domain-Specific Modeling Languages used in those domains. These operators are then used to automate the coordination of models conforming to these languages. B-COoL is implemented as a set of plugins for the Eclipse Modeling Framework, thus providing a complete environment to execute and verify coordinated models. We illustrate the use of B-COoL by defining coordination operators between timed finite state machines and activity diagrams, and then use these operators to coordinate and execute the heterogeneous models of a surveillance-camera system.
37

Gustad, Håvard. « Implications on System Integration and Standardisation within Complex and Heterogeneous Organisational Domains : Difficulties and Critical Success Factors in Open Industry Standards Development ». Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9300.

Full text
Abstract:

Numerous standardisation and integration initiatives in the use of information and communication technologies (ICT) seem to fail because they do not acknowledge the socio-technical negotiation that goes into standardisation work. This thesis addresses the implications of open standards development within organisational use of ICT. A standardisation initiative for data transmission, the PRODML project, within the Oil & Gas industry is investigated. This initiative strives to increase interoperability between organisations by focusing on removing the use of proprietary standards. Using Actor-Network Theory, this thesis tries to articulate how such standards emerge and the critical factors that can lead to their success. It emphasizes the need to consider the alignment of interests in standards development, and the importance of creating the right initial alliance, building an installed base, for increased credibility and public acceptance.

38

Roßbach, André Christian. « Evaluation of Software Architectures in the Automotive Domain for Multicore Targets in regard to Architectural Estimation Decisions at Design Time ». Master's thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-163372.

Full text
Abstract:
In this decade, the emerging multicore technology will reach the automotive industry. The increasing complexity of multicore systems will make manual verification of safety and real-time constraints impossible, so dedicated methods and tools are strictly necessary to deal with the upcoming multicore issues. Many research projects on new hardware platforms and software frameworks for the automotive industry are under way today, because the paradigms of high-performance computing and the server/desktop domain cannot easily be adapted to embedded systems. One of the difficulties is estimating, early on, whether a hardware platform is suitable for a software architecture design, yet hardly any research work tackles this. This thesis presents a procedure to evaluate the plausibility of software-architecture estimations and decisions at design time. It includes an analysis technique for multicore systems, an underlying graph model to represent the multicore system, and an evaluation of simulation tools. This can guide the software architect in designing a multicore system with full consideration of all relevant parameters and issues.
39

Robino, Francesco. « A model-based design approach for heterogeneous NoC-based MPSoCs on FPGA ». Licentiate thesis, KTH, Elektroniksystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-145521.

Full text
Abstract:
Network-on-chip (NoC) based multi-processor systems-on-chip (MPSoCs) are promising candidates for future multi-processor embedded platforms, which are expected to be composed of hundreds of heterogeneous processing elements (PEs) to potentially provide high performance. However, together with performance, system complexity will increase, and new high-level design techniques will be needed to efficiently model, simulate, debug and synthesize such systems. System-level design (SLD) is considered to be the next frontier in electronic design automation (EDA). It enables the description of embedded systems in terms of abstract functions and interconnected blocks. A promising complementary approach to SLD is the use of models of computation (MoCs) to formally describe the execution semantics of functions and blocks through a set of rules. However, even when this formalization is used, there is no clear way to synthesize system-level models into software (SW) and hardware (HW) towards a NoC-based MPSoC implementation; that is, there is a lack of system design automation (SDA) techniques to rapidly synthesize and prototype system-level models onto heterogeneous NoC-based MPSoCs. In addition, many of the proposed solutions require large overhead in terms of SW components and memory, resulting in complex and customized multi-processor platforms. To tackle this problem, a novel model-based SDA flow has been developed as part of this thesis. It starts from a system-level specification, where functions execute according to the synchronous MoC, and can then rapidly prototype the system onto an FPGA configured as a heterogeneous NoC-based MPSoC. In the first part of the thesis, the HeartBeat model is proposed as a model-based technique that fills the abstraction gap between the abstract system-level representation and its implementation on the multiprocessor prototype. Details are then provided on how this technique is automated to rapidly prototype the modeled system on a flexible platform, allowing the system specification to be adjusted until the designer is satisfied with the results. Finally, the proposed SDA technique is improved by defining a methodology to automatically explore possible design alternatives for the modeled system to be implemented on a heterogeneous NoC-based MPSoC. The goal of the exploration is to find an implementation satisfying the designer's requirements, and it can be integrated into the proposed SDA flow. Through the proposed SDA flow, the designer is relieved of implementation details, and the design time of systems targeting heterogeneous NoC-based MPSoCs on FPGA is significantly reduced. It also reduces possible design errors by providing a completely automated technique for fast prototyping. Compared to other SDA flows, the proposed technique targets a bare-metal solution, avoiding the use of an operating system (OS). This reduces the memory requirements on the FPGA platform compared to related work targeting MPSoCs on FPGA. At the same time, the performance (throughput) of the modeled applications can be increased when the number of processors of the target platform is increased. This is shown through a wide set of case studies implemented on FPGA.


40

Parret-Fréaud, Augustin. « Estimation d'erreur de discrétisation dans les calculs par décomposition de domaine ». Thesis, Cachan, Ecole normale supérieure, 2011. http://www.theses.fr/2011DENS0022/document.

Full text
Abstract:
The control of the quality of mechanical computations arouses growing interest in both design and certification processes. It relies on error estimators, whose use often entails a prohibitive additional numerical cost on large computations. The present work puts forward a new procedure for obtaining a guaranteed estimation of the discretization error for linear elastic problems solved by domain decomposition approaches. The method relies on extending the constitutive relation error concept to the framework of non-overlapping domain decomposition, through the recovery of admissible interface fields. Its development within the FETI and BDD approaches yields a relevant estimation of the discretization error well before convergence of the solver associated with the domain decomposition. An extension of the estimation procedure to heterogeneous problems is also proposed. The behaviour of the method is illustrated and assessed on several numerical examples in two dimensions.
41

El, gharbi Yannis. « Une approche à deux niveaux pour le calcul de structures haute performance : décomposition -- maillage -- résolution ». Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPAST001.

Full text
Abstract:
Numerical simulation still plays a minor part in the certification process for critical parts in industry, even though it would bring significant cost savings during design by avoiding expensive tests on real parts. Indeed, in the presence of strong localized heterogeneities throughout the structure, it becomes hard, if not impossible, to run these simulations in reasonable time because of the large number of unknowns needed to obtain a reliable structural response. Obtaining this response requires large-scale parallel solution methods; domain decomposition methods, which belong to this category, are the ones investigated in this thesis. The goal is to make such simulations possible by means of domain decomposition methods: both the solution of the problem and the meshing of the structure become expensive, and the use of parallel methods becomes essential. For this purpose, a two-level substructuring method is proposed. It aims at producing, during the pre-processing step, regular-shaped and homogeneous subdomains that can be meshed in parallel. In addition, it leads to a significant reduction of the condition number for strongly heterogeneous problems solved with a FETI solver. A mixed domain decomposition method with a two-level Robin interface condition adapted to this substructuring was then developed. The long-term objective is to address problems of quasi-industrial complexity, such as computations at the scale of the complete structure on multi-scale materials like the three-dimensional woven composites used increasingly intensively in the aeronautical industry, for instance.
42

Racaru, Stelian Florin. « Conception et validation d'une architecture de signalisation pour la garantie de qualité de service dans l'Internet multi-domaine, multi-technologie et multi-service ». Toulouse, INSA, 2008. http://eprint.insa-toulouse.fr/archive/00000236/.

Full text
Abstract:
During the last few years, the joint technological evolution of computer science and telecommunications has led to a substantial change in communications and networks. One consequence of this progress is the convergence towards a single infrastructure for data exchange. Owing to its continuous development, the Internet (IP) appears as the solution for interconnecting heterogeneous technologies, short- or long-distance, fixed or mobile: the global infrastructure for all types of communication. The Internet supports many new kinds of applications: dynamic, multimedia, real-time, distributed, potentially multi-user, and mobile, such as voice over IP (VoIP), video on demand (VoD), videoconferencing, interactive games, etc. The general concerns addressed by our work result from this context. Our objective is to define and implement new mechanisms, protocols and architectures that answer the needs of emerging applications. Our proposals contribute to mastering end-to-end Quality of Service (QoS) in a heterogeneous Internet environment at several levels: multi-domain, multi-technology and multi-service. We address the need for new inter-domain signalling architectures coupled with provisioning and admission control to meet current traffic and service requirements. In this context, we participated in the design, development, deployment and validation of the architecture defined within the European IST project EuQoS ("End-to-end Quality of Service support over heterogeneous networks").
43

Xu, Minrui. « Synthèse et caractérisation de catalyseurs acido-basiques par greffage covalent sur supports carbonés. Application dans le domaine de la valorisation de molécules bio-sourcées ». Thesis, Poitiers, 2019. http://www.theses.fr/2019POIT2292/document.

Full text
Abstract:
Homogeneous catalysts are of limited use in industrial chemical processes because they generate massive amounts of waste and because separating the catalyst from the raw materials and products during production is inconvenient. As a result, the heterogenization of homogeneous catalysts onto solid supports has often been investigated and has proven more promising for industrial applications. Nevertheless, not only the tedious catalyst synthesis but also the low loadings of active sites and the resulting low efficiencies contribute to the high cost of supported catalysts. To remedy these deficiencies of supported catalysis, scientists have worked on the direct functionalization of the solid support via covalent bond formation. Among the functionalization approaches under investigation, the diazonium chemistry developed by Griess has become increasingly popular and attractive, since this promising method enables the grafting of different organic moieties onto various solid supports. In recent years, the functionalization of carbonaceous materials has been widely developed because carbon is an inexpensive and widely available material. In this study, the spontaneous functionalization of carbonaceous materials (Darco KB-G, Norit SX2 and Vulcan XC72) led to basic or acid solids by grafting, via diazonium chemistry, different aminophenyl groups (phenylimidazole; N,N-dimethylaniline; aniline; phenylmorpholine) for the basic solids and the phenylsulfonic group for the acid solids. The functionalized solids were characterized by different techniques (elemental analysis, N2 adsorption-desorption, TGA, SEM, FT-IR, XPS and Raman spectroscopy) and used in different model reactions to upgrade bio-sourced platform molecules, especially furfural. For instance, the usefulness and effectiveness of these new basic solids are illustrated by the Knœvenagel condensation between furfural and malononitrile under mild conditions, leading to 2-furanylmethylene malononitrile and water.
The basic solid Darco-0.5PDA exhibited high activity in the Knœvenagel condensation of furfural and malononitrile under mild experimental conditions (40 °C, atmospheric pressure). The catalytic performance of the functionalized acid solids was assessed in the acetalization of dodecyl aldehyde with ethylene glycol at 60 °C under atmospheric pressure, in both conventional and solvent-free PIC (Pickering Interfacial Catalysis) conditions. The experiments showed that the synthesized amphiphilic acid solid stabilized dodecyl aldehyde/ethylene glycol Pickering emulsions and demonstrated both good stability and good activity in a solvent-free biphasic acetalization.
APA, Harvard, Vancouver, ISO, etc. styles
44

Guessasm, Mohamed. « Contribution à la détermination des domaines de résistance de matériaux hétérogènes non périodiques ». Université Joseph Fourier (Grenoble), 1999. http://www.theses.fr/1999GRE10010.

Full text
Abstract:
The aim of this work is to determine the macroscopic strength domains of non-periodic heterogeneous materials within the framework of yield design theory. To characterize the nonlinear behaviour of randomly heterogeneous materials, the heterogeneous extremal model (H.E.M.) offers an interesting formulation when the behaviour of the constituent materials derives from a potential. A stress-based model, using the conceptual framework of the H.E.M., is developed. This model applies to materials whose strength domain is either convex (in which case it coincides with the H.E.M.) or merely star-shaped with respect to the origin of stress space. An application is carried out on a randomly perforated material, for which the stress-based model adopts two descriptions. The first is based on the volume fractions of the constituent materials; the second assumes that the randomly perforated material is an aggregate of several periodically perforated materials. This modelling yields a family of strength domains depending on a heterogeneity parameter r, which is determined by fitting the numerical predictions to experimental results for a given loading. The strength domain thus obtained is validated by comparing the numerical predictions with experimental results for other types of loading. In parallel with this work, solution methods, both simple and efficient, are developed for the optimization problems to which stress-based approaches (of yield design or homogenization) lead. Under assumptions on the strength domains, these optimization problems, initially subject to nonlinear constraints, are reduced to unconstrained inf-max problems with a significant reduction in the number of variables. Their solution is based on an original regularization method applied to a functional that is independent of the max operator.
APA, Harvard, Vancouver, ISO, etc. styles
45

Lawson, Brodie Alexander James. « Cell migration and proliferation on homogeneous and non-homogeneous domains : modelling on the scale of individuals and populations ». Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/61066/1/Brodie_Lawson_Thesis.pdf.

Full text
Abstract:
Cell migration is a behaviour critical to many key biological effects, including wound healing, cancerous cell invasion and morphogenesis, the development of an organism from an embryo. However, given that each of these situations is distinctly different and cells are extremely complicated biological objects, interest lies in more basic experiments that seek to remove confounding factors and present a less complex environment within which cell migration can be examined experimentally. These include in vitro studies such as the scratch assay or circle migration assay, and ex vivo studies such as the colonisation of the hindgut by neural crest cells. The reduced complexity of these experiments also makes them much more attractive as problems to model mathematically, as is done here. The primary goal of the mathematical models used in this thesis is to shed light on which cellular behaviours work to generate the travelling waves of invasion observed in these experiments, and to explore how variations in these behaviours can potentially predict the differences in this invasive pattern that are observed experimentally when the cell type or chemical environment is changed. The relevant literature has already identified the difficulty of distinguishing between these behaviours using traditional mathematical biology techniques operating on a macroscopic scale, so here a sophisticated individual-cell-level model, an extension of the Cellular Potts Model (CPM), has been constructed and used to model a scratch assay experiment. This model includes a novel mechanism for handling cell proliferation that allows the differing properties of quiescent and proliferative cells to be built into their behaviour. The model is examined both for its predictive power and for comparison with the travelling waves that arise in more traditional macroscopic simulations.
These comparisons demonstrate a surprising degree of agreement between the two modelling frameworks, and suggest further novel modifications to the CPM that would allow it to better model cell migration. Considerations of the model's behaviour are used to argue that the dominant effect governing cell migration (random motility or signal-driven taxis) likely depends on the sort of invasion demonstrated by the cells, as is easily seen in microscopic photography. Additionally, a scratch assay simulated on a non-homogeneous domain consisting of a 'fast' and a 'slow' region is used to further differentiate between these potential cell motility behaviours. A heterogeneous domain is a novel situation that has not been considered mathematically in this context, nor, to the best of the candidate's knowledge, has it been constructed experimentally. This problem therefore serves as a thought experiment to test the conclusions arising from the simulations on homogeneous domains, and to suggest what might be observed should this non-homogeneous assay be realised experimentally. Non-intuitive cell invasion patterns are predicted for diffusely invading cells that respond to a cell-consumed signal or nutrient, in contrast with the rather expected behaviour in the case of random-motility-driven invasion. The potential experimental observation of these behaviours is demonstrated by the individual-cell-level model used in this thesis, which agrees with the PDE model in predicting these unexpected invasion patterns. In the interest of examining such a non-homogeneous domain experimentally, brief suggestions are made as to how this could be achieved.
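The travelling waves produced by the "traditional macroscopic simulations" mentioned above are classically modelled by reaction-diffusion equations of Fisher-KPP type. As a hedged illustration only (this is the textbook equation, not the thesis's exact PDE, and all parameters are invented), a minimal explicit finite-difference sketch:

```python
import numpy as np

# Minimal explicit finite-difference solver for the Fisher-KPP equation
#   u_t = D u_xx + k u (1 - u),
# the classic macroscopic model of an invading cell population.
# All parameters below are illustrative, not taken from the thesis.
D, k = 1.0, 1.0
L, nx = 100.0, 501
dx = L / (nx - 1)
dt = 0.2 * dx * dx / D          # satisfies the explicit stability bound
u = np.zeros(nx)
u[:25] = 1.0                    # initially occupied region (a "scratch" edge)

def front_position(u, dx, level=0.5):
    """Position of the leftmost grid point where density drops below `level`."""
    return np.argmax(u < level) * dx

t_end = 20.0
for _ in range(int(t_end / dt)):
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = lap[1]             # crude zero-flux boundaries
    lap[-1] = lap[-2]
    u = u + dt * (D * lap + k * u * (1 - u))
```

With compactly supported initial data the front settles to a speed near 2*sqrt(D*k); it is against waves of this kind that individual-cell-level models such as the CPM are compared.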
APA, Harvard, Vancouver, ISO, etc. styles
46

Gilbert, François. « Descriptions thermo-mecaniques de milieux a plusieurs constituants et application aux milieux poreux satures ». Paris 6, 1987. http://www.theses.fr/1987PA066397.

Full text
Abstract:
A study of the heterogeneity of natural media. Axiomatic models of multi-constituent media. A variational approach to the behaviour of a porous solid. Use of scale-change methods. Analysis of self-similar cells.
APA, Harvard, Vancouver, ISO, etc. styles
47

Lachat, Cédric. « Conception et validation d'algorithmes de remaillage parallèles à mémoire distribuée basés sur un remailleur séquentiel ». Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00932602.

Full text
Abstract:
The goal of this thesis was to propose, and then validate experimentally, a set of algorithmic methods enabling the parallel remeshing of distributed meshes, building on a pre-existing sequential remeshing method. This goal was reached in stages: the definition of data structures and communication schemes suited to distributed meshes, allowing interfaces between subdomains to be moved at low cost across the processors of a distributed-memory architecture; the use of dynamic load-balancing algorithms suited to parallel remeshing techniques; and the design of parallel algorithms that split the global parallel remeshing problem into several sequential subtasks capable of executing concurrently on the processors of the parallel machine. These contributions were implemented in the PaMPA parallel library, building on the MMG3D (sequential remeshing of tetrahedral meshes) and PT-Scotch (parallel graph repartitioning) software components. The PaMPA library thus provides the following features: transparent communication between neighbouring processors of the values carried by nodes, elements, etc.; remeshing, according to user-supplied criteria, of portions of the distributed mesh, with constant quality whether the elements to be remeshed are held by a single processor or spread across several of them; and partitioning and redistribution of mesh load to preserve the efficiency of simulations after remeshing.
APA, Harvard, Vancouver, ISO, etc. styles
48

Chambard, Thierry. « Contribution à l'homogénéisation en plasticité pour une répartition aléatoire des hétérogénéités ». Grenoble 1, 1993. http://www.theses.fr/1993GRE10004.

Full text
Abstract:
The aim of this work is to apply to a strongly heterogeneous composite a variational micro-macro model for heterogeneous materials with nonlinear behaviour and a random distribution of heterogeneities. This model, called the heterogeneous extremal model, realizes, by means of a heterogeneity parameter r, a continuous transition between the lower and upper Reuss-Voigt-Hill bounds. The model is applied to the search for the strength criterion of a steel-fibre-reinforced mortar (M.F.M.). The behaviour of the constituent materials is taken to be rigid-plastic and the computation is carried out in plane stress. Two successive models of the M.F.M. are defined: (i) a volume-fraction model, simple to implement, and (ii) a more complete model incorporating the shape of the fibre and the behaviour of the fibre-mortar interface. Model (ii) requires coupling the homogenization theory of periodic media with the heterogeneous extremal model. The homogenization computation yields a family of strength criteria for the M.F.M. depending on the heterogeneity parameter r. To obtain the final criterion, a numerical computation (the limit load of a plate in tension) is calibrated against the corresponding experimental tests. The numerical predictions are then compared with experimental tests on plates in compression and beams in bending.
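For readers unfamiliar with the Reuss-Voigt-Hill bounds between which the heterogeneity parameter r interpolates, a minimal numeric illustration for a two-phase composite follows; the moduli and the linear form of the interpolation are hypothetical stand-ins, not the model's actual r-dependence:

```python
# Voigt (uniform-strain) and Reuss (uniform-stress) bounds on the
# effective modulus of a two-phase composite; all values illustrative.
E1, E2 = 210.0, 30.0   # phase moduli (e.g. GPa), hypothetical
f1 = 0.3               # volume fraction of phase 1
f2 = 1.0 - f1

E_voigt = f1 * E1 + f2 * E2          # arithmetic mean: upper bound
E_reuss = 1.0 / (f1 / E1 + f2 / E2)  # harmonic mean: lower bound

def interpolated(r):
    """Schematic one-parameter transition between the two bounds,
    in the spirit of a heterogeneity parameter r in [0, 1]."""
    return E_reuss + r * (E_voigt - E_reuss)
```

Any admissible effective modulus lies between the Reuss and Voigt values; calibrating r against experiments, as the abstract describes, amounts to selecting where in this interval the prediction falls.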
APA, Harvard, Vancouver, ISO, etc. styles
49

Lin, Yi-Hung, et 林一弘. « Heterogenous Expression and Characterization of Different Domain of Jelly Fig Pathogenesis Related-4 Protein ». Thesis, 2014. http://ndltd.ncl.edu.tw/handle/k4e3w6.

Full text
Abstract:
Master's thesis
National Formosa University
Institute of Biotechnology
102
Jelly fig (Ficus awkeotasang Makino) is an endemic plant of Taiwan. The water extract of jelly fig achenes forms a jelly curd, a common summer beverage. Some pericarpial proteins extracted from the jelly curd have been identified, including a pectin methylesterase (PME), a chitinase, a thaumatin-like protein (TLP) and two PR proteins, pathogenesis related-1 (PR-1) and pathogenesis related-4 (PR-4). PR-4 proteins are classified into two groups: Class I contains an extra chitin-binding domain (CBD) at the N-terminus, whereas Class II lacks this domain. A cDNA fragment coding for jelly fig PR-4 (FaPR4) has been cloned; its deduced protein has a CBD at the N-terminus and a vacuolar signal at the C-terminus. Using designed primers and PCR, a cDNA fragment encoding FaPR4(noCt), a jelly fig PR-4 without the vacuolar signal, was obtained. FaPR4(noCt) and FaPR4(noCt/mutant), a point mutant of FaPR4(noCt), were successfully expressed in the yeast Pichia pastoris. The purified r-FaPR4, r-FaPR4(noCt), r-FaPR4(noCt/mutant), r-FaPR4(NO) and r-FaPR4(NO/mutant) were subjected to RNase activity assays; the recombinant proteins eluted at different salt concentrations showed different RNase activities. Recombinant r-FaPR4, r-FaPR4(noCt) and r-FaPR4(noCt/mutant) exhibited antifungal activity toward Glomerella cingulata and Fusarium oxysporum. After heating at 100 °C for 10 min, the recombinant proteins retained some activity, with r-FaPR4 showing the highest antifungal activity. In addition, the recombinant proteins induced a ROS response in Rhizoctonia solani hyphae, with r-FaPR4 having the most pronounced effect.
APA, Harvard, Vancouver, ISO, etc. styles
50

Fang, Wen-Chieh, et 方文杰. « Linear Discriminative Projections for Heterogeneous Domain Adaptation ». Thesis, 2014. http://ndltd.ncl.edu.tw/handle/07337212530334343890.

Full text
Abstract:
Doctoral dissertation
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
102
It is often expensive to collect labeled data, while we sometimes have large amounts of labeled data in a related domain. Without enough training data, classifiers such as k-Nearest Neighbor (kNN) or Support Vector Machine (SVM) may fail to achieve good classification performance. In this thesis, we consider the problem of utilizing a few labeled data samples in a target domain, together with the data samples in a source domain, to improve data classification in the target domain. We assume that the source and target domains have different feature spaces. In addition, the two domains are assumed to share no explicit common features but to have the same set of class labels. A key technique for leveraging the data from another domain is to find two mapping functions so that the source and target spaces can be projected onto a common space. In this thesis, we present a simple and intuitive technique called linear discriminative projections to address the problem. First, we separate the source data of distinct classes by using a discriminative method such as Linear Discriminative Analysis (LDA). We then apply a regression technique to map each labeled target data instance as close as possible to the center of the source data group with the same class label. Finally, we again use a discriminative method to separate all the data of distinct classes. Experimental results on several benchmark datasets clearly demonstrate that our approach is effective for learning discriminative features for supervised classification with few target training samples.
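The three steps just described (a discriminative projection of the source data, a regression of the labeled target samples onto same-class source centers, then classification in the common space) can be sketched on synthetic data. This is only a minimal illustration, not the thesis's implementation: the data are made up, scikit-learn is assumed available, and plain kNN stands in for the final discriminative step.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Toy heterogeneous domains: the source has 4 features, the target 2,
# sharing the same two class labels but no common features.
n_src, n_tgt = 200, 10
y_src = rng.integers(0, 2, n_src)
X_src = rng.normal(size=(n_src, 4)) + y_src[:, None] * 3.0
y_tgt = rng.integers(0, 2, n_tgt)
X_tgt = rng.normal(size=(n_tgt, 2)) + y_tgt[:, None] * 2.0

# Step 1: project the source data onto a discriminative axis with LDA.
lda = LinearDiscriminantAnalysis(n_components=1)
Z_src = lda.fit_transform(X_src, y_src)

# Class centers of the projected source data.
centers = {c: Z_src[y_src == c].mean(axis=0) for c in (0, 1)}

# Step 2: regress each labeled target instance onto the center of the
# source group with the same label, giving the target-side projection.
targets = np.vstack([centers[c] for c in y_tgt])
reg = LinearRegression().fit(X_tgt, targets)
Z_tgt = reg.predict(X_tgt)

# Step 3: classify in the common space (kNN on the pooled projections).
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(np.vstack([Z_src, Z_tgt]), np.concatenate([y_src, y_tgt]))
```

Because the two domains share only their labels, nothing but the regression targets (the source class centers) ties the target features to the common space, which is exactly the idea the abstract describes.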
APA, Harvard, Vancouver, ISO, etc. styles