
Dissertations / Theses on the topic 'Bayes Modelling'


Consult the top 40 dissertations / theses for your research on the topic 'Bayes Modelling.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Dowman, Mike. "Colour Terms, Syntax and Bayes: Modelling Acquisition and Evolution." Thesis, The University of Sydney, 2004. http://hdl.handle.net/2123/558.

Full text
Abstract:
This thesis investigates language acquisition and evolution, using the methodologies of Bayesian inference and expression-induction modelling, making specific reference to colour term typology and syntactic acquisition. In order to test Berlin and Kay's (1969) hypothesis that the typological patterns observed in basic colour term systems are produced by a process of cultural evolution under the influence of universal aspects of human neurophysiology, an expression-induction model was created. Ten artificial people were simulated, each of which was a computational agent. These people could learn colour term denotations by generalizing from examples using Bayesian inference, and the resulting denotations had the prototype properties characteristic of basic colour terms. Conversations between these people, in which they learned from one another, were simulated over several generations, and the languages emerging at the end of each simulation were investigated. The proportion of colour terms of each type correlated closely with the equivalent frequencies found in the World Colour Survey, and most of the emergent languages could be placed on one of the evolutionary trajectories proposed by Kay and Maffi (1999). The simulation therefore demonstrates how typological patterns can emerge as a result of learning biases acting over a period of time. Further work applied the minimum description length form of Bayesian inference to modelling syntactic acquisition. The particular problem investigated was the acquisition of the dative alternation in English. This alternation presents a learnability paradox, because only some verbs alternate, but children typically do not receive reliable evidence indicating which verbs do not participate in the alternation (Pinker, 1989). The model presented in this thesis took note of the frequency with which each verb occurred in each subcategorization, and so was able to infer which subcategorizations were conspicuously absent, and so presumably ungrammatical. Crucially, it also incorporated a measure of grammar complexity, and a preference for simpler grammars, so that more general grammars would be learned unless there was sufficient evidence to support the incorporation of some restriction. The model was able to learn the correct subcategorizations for both alternating and non-alternating verbs, and could generalize to allow novel verbs to appear in both constructions. When less data was observed, it also overgeneralized the alternation, which is a behaviour characteristic of children when they are learning verb subcategorizations. These results demonstrate that the dative alternation is learnable, and therefore that universal grammar may not be necessary to account for syntactic acquisition. Overall, these results suggest that the forms of languages may be determined to a much greater extent by learning, and by cumulative historical changes, than would be expected if the universal grammar hypothesis were correct.
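To make the minimum description length comparison concrete, here is a minimal Python sketch. The grammar bit costs and verb counts are invented for illustration, not taken from the thesis; the point is only that a restriction pays for itself once enough data has been seen, while sparse data favours the overgeneral grammar, mirroring children's overgeneralization.

```python
import math

def description_length(grammar_bits, frame_probs, observations):
    """Total cost in bits: grammar complexity plus the cost of
    encoding the observed verb frames under that grammar."""
    data_bits = 0.0
    for frame, count in observations.items():
        p = frame_probs.get(frame, 0.0)
        if p == 0.0:
            return math.inf  # grammar cannot encode the observed data
        data_bits += -count * math.log2(p)
    return grammar_bits + data_bits

# Hypothetical verb observed 20 times, only in the double-object frame.
observations = {"double_object": 20}

# A general grammar licenses both frames; the restricted grammar
# licenses one frame but costs extra bits to state the restriction.
general = description_length(
    10.0, {"double_object": 0.5, "prepositional": 0.5}, observations)
restricted = description_length(14.0, {"double_object": 1.0}, observations)

print(f"general: {general:.1f} bits, restricted: {restricted:.1f} bits")
# 30.0 vs 14.0: the restriction wins here, but with only 3 observations
# the general grammar would win (13.0 vs 14.0), i.e. overgeneralization.
```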
2

Revie, Matthew. "Evaluation of Bayes linear modelling to support reliability assessment during procurement." Thesis, University of Strathclyde, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.487866.

Full text
Abstract:
A major task facing a number of different procuring agencies is the assessment of the developmental reliability of a product as contractors incrementally provide information. These agencies would like to use structured quantitative methodologies, as opposed to the unstructured assessments currently adopted, to evaluate reliability throughout the life of a development programme as additional information becomes available. Due to resource constraints, any developed methodology must be cost- and time-efficient. This research attempts to develop a methodology for customers that is capable of assessing the information presented by a contractor in a reliability case to support decision making throughout the procurement process.
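Bayes linear methods adjust prior beliefs using only means, variances and covariances: the adjusted expectation is E_D(X) = E(X) + Cov(X,D)Var(D)^{-1}(D - E(D)), with an analogous formula for variance. A minimal numpy sketch, with hypothetical numbers rather than anything from the thesis:

```python
import numpy as np

def bayes_linear_adjust(ex, vx, cxd, ed, vd, d_obs):
    """Bayes linear adjusted expectation and variance of X given data D.

    ex, vx : prior mean and variance of the quantity of interest X
    cxd    : prior covariance Cov(X, D), one entry per data component
    ed, vd : prior mean vector and variance matrix of the data D
    d_obs  : the observed value of D
    """
    vd_inv = np.linalg.inv(vd)
    adj_mean = ex + cxd @ vd_inv @ (d_obs - ed)
    adj_var = vx - cxd @ vd_inv @ cxd
    return float(adj_mean), float(adj_var)

# Hypothetical setting: X is a product failure rate, D holds two
# reliability test results reported by the contractor.
mean, var = bayes_linear_adjust(
    ex=0.10, vx=0.02,
    cxd=np.array([0.015, 0.018]),
    ed=np.array([0.12, 0.15]),
    vd=np.array([[0.03, 0.01],
                 [0.01, 0.04]]),
    d_obs=np.array([0.08, 0.10]),
)
print(f"adjusted E(X) = {mean:.4f}, adjusted Var(X) = {var:.5f}")
# Better-than-expected test results pull E(X) down and shrink Var(X).
```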
3

Starobinskaya, Irina. "Structural modelling of operational risk in financial institutions: application of Bayesian networks and balanced scorecards to IT infrastructure risk modelling." Berlin: Pro Business, 2008. http://d-nb.info/991725328/04.

Full text
4

Steinberg, Daniel. "An Unsupervised Approach to Modelling Visual Data." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9415.

Full text
Abstract:
For very large visual datasets, producing expert ground-truth data for training supervised algorithms can represent a substantial human effort. In these situations there is scope for the use of unsupervised approaches that can model collections of images and automatically summarise their content. The primary motivation for this thesis comes from the problem of labelling large visual datasets of the seafloor obtained by an Autonomous Underwater Vehicle (AUV) for ecological analysis. It is expensive to label this data, as taxonomical experts for the specific region are required, whereas automatically generated summaries can be used to focus the efforts of experts, and inform decisions on additional sampling. The contributions in this thesis arise from modelling this visual data in entirely unsupervised ways to obtain comprehensive visual summaries. Firstly, popular unsupervised image feature learning approaches are adapted to work with large datasets and unsupervised clustering algorithms. Next, using Bayesian models the performance of rudimentary scene clustering is boosted by sharing clusters between multiple related datasets, such as regular photo albums or AUV surveys. These Bayesian scene clustering models are extended to simultaneously cluster sub-image segments to form unsupervised notions of “objects” within scenes. The frequency distribution of these objects within scenes is used as the scene descriptor for simultaneous scene clustering. Finally, this simultaneous clustering model is extended to make use of whole image descriptors, which encode rudimentary spatial information, as well as object frequency distributions to describe scenes. This is achieved by unifying the previously presented Bayesian clustering models, and in so doing rectifies some of their weaknesses and limitations. Hence, the final contribution of this thesis is a practical unsupervised algorithm for modelling images from the super-pixel to album levels, and is applicable to large datasets.
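As a generic illustration of the kind of unsupervised clustering such summaries rest on (not the thesis's own scene and object models), a variational Bayesian mixture can be given more components than needed and left to switch the surplus ones off:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Stand-in for image descriptors: two well-separated groups in 8-D.
features = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(250, 8)),
    rng.normal(loc=4.0, scale=1.0, size=(250, 8)),
])

# Dirichlet-process-style prior: surplus components get near-zero weight.
model = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
)
labels = model.fit_predict(features)
print("clusters actually used:", np.unique(labels))
print("mixture weights:", np.round(model.weights_, 3))
```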
5

Jones, Matthew James. "Bayes linear strategies for the approximation of complex numerical calculations arising in sequential design and physical modelling problems." Thesis, Durham University, 2017. http://etheses.dur.ac.uk/12529/.

Full text
Abstract:
In a range of different scientific fields, deterministic calculations for which there is no analytic solution must be approximated numerically. The use of numerical approximations is necessary, but introduces a discrepancy between the true solution and the numerical solution that is generated. Bayesian methods are used to account for uncertainties introduced through numerical approximation in a variety of situations. To solve problems in Bayesian sequential experimental design, a sequence of complex integration and optimisation steps must be performed; for most problems, these calculations have no closed-form solution. An approximating framework is developed which tracks numerical uncertainty about the result of each calculation through each step of the design procedure. This framework is illustrated through application to a simple linear model, and to a more complex problem in atmospheric dispersion modelling. The approximating framework is also adapted to allow for the situation where beliefs about a model may change at certain points in the future. Where ordinary or partial differential equation (ODE or PDE) systems are used to represent a real-world system, it is rare that these can be solved directly. A wide variety of different approximation strategies have been developed for such problems; the approximate solution that is generated will differ from the true solution in some unknown way. A Bayesian framework which accounts for the uncertainty induced through numerical approximation is developed, and Bayes linear graphical analysis is used to efficiently update beliefs about model components using observations on the real system. In the ODE case, the framework is illustrated through application to a Lagrangian mechanical model for the interaction between a set of ringing bells and the tower in which they are hung; in the PDE case, the framework is illustrated through application to the heat equation in one spatial dimension.
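The core idea, that numerical error can be estimated and carried along as uncertainty, is easy to show in miniature by comparing a coarse and a fine solution of the same ODE; this is only the simplest possible discrepancy estimate, not the Bayes linear machinery of the thesis:

```python
import math

def euler(f, y0, t_end, n_steps):
    """Fixed-step Euler solver, deliberately crude so that the
    numerical discrepancy is visible."""
    t, y, h = 0.0, y0, t_end / n_steps
    for _ in range(n_steps):
        y += h * f(t, y)
        t += h
    return y

# Test problem y' = -2y with known solution exp(-2t).
f = lambda t, y: -2.0 * y
coarse = euler(f, 1.0, 1.0, 10)
fine = euler(f, 1.0, 1.0, 20)

# Euler is first order: halving the step roughly halves the error, so
# the fine-coarse difference estimates the error remaining in 'fine'
# and could enter a Bayesian analysis as a numerical-discrepancy term.
print(f"coarse = {coarse:.5f}, fine = {fine:.5f}")
print(f"estimated error of fine ≈ {fine - coarse:.5f}, "
      f"true error = {math.exp(-2.0) - fine:.5f}")
```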
6

Salge, Christoph. "Information theoretic models of social interaction." Thesis, University of Hertfordshire, 2013. http://hdl.handle.net/2299/13887.

Full text
Abstract:
This dissertation demonstrates, in a non-semantic information-theoretic framework, how the principles of 'maximisation of relevant information' and 'information parsimony' can guide the adaptation of an agent towards agent-agent interaction. Central to this thesis is the concept of digested information; I argue that an agent is intrinsically motivated to (a) process the relevant information in its environment and (b) display this information in its own actions. From the perspective of similar agents, who require similar information, this differentiates other agents from the rest of the environment, by virtue of the information they provide. This provides an informational incentive to observe other agents and integrate their information into one's own decision-making process. This process is formalized in the framework of information theory, which allows for a quantitative treatment of the resulting effects, specifically how the digested information of an agent is influenced by several factors, such as the agent's performance and the integrated information of other agents. Two specific phenomena based on information maximisation arise in this thesis. One is flocking behaviour, similar to boids, that results when agents searching for a location in a gridworld integrate the information in other agents' actions via Bayes' theorem. The other is an effect where integrating information from too many agents becomes detrimental to an agent's performance, for which several explanations are provided.
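The integration step itself is one line of Bayes' theorem: the agent's belief over candidate locations is multiplied by the likelihood of the move it just observed another agent make, then renormalised. The numbers below are invented:

```python
import numpy as np

def update_belief(prior, likelihood):
    """Posterior ∝ prior × likelihood (Bayes' theorem)."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Five candidate target locations in a toy gridworld, uniform prior.
belief = np.full(5, 0.2)

# The other agent moved right; assuming it heads for the target, that
# move is more likely the further right the target actually is.
likelihood_of_move = np.array([0.05, 0.10, 0.20, 0.30, 0.35])

belief = update_belief(belief, likelihood_of_move)
print(np.round(belief, 3))  # probability mass shifts to the right
```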
7

Baker, Peter John. "Applied Bayesian modelling in genetics." Thesis, Queensland University of Technology, 2001.

Find full text
8

Jaradat, Shatha. "OLLDA: Dynamic and Scalable Topic Modelling for Twitter: An Online Supervised Latent Dirichlet Allocation Algorithm." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177535.

Full text
Abstract:
Providing high-quality topic inference in today's large and dynamic corpora, such as Twitter, is a challenging task. It is especially challenging given that the content in this environment consists of short texts and many abbreviations. This project proposes an improvement to a popular online topic modelling algorithm for Latent Dirichlet Allocation (LDA), incorporating supervision to make it suitable for the Twitter context. This improvement is motivated by the need for a single algorithm that achieves both objectives: analysing huge numbers of documents, including new documents arriving in a stream, while at the same time achieving high-quality topic detection in special-case environments such as Twitter. The proposed algorithm is a combination of an online algorithm for LDA and a supervised variant of LDA, labeled LDA. The performance and quality of the proposed algorithm are compared with those of the two base algorithms. The results demonstrate that the proposed algorithm shows better performance and quality than the supervised variant of LDA, and achieves better results in terms of quality than the online algorithm. These improvements make our algorithm an attractive option when applied to dynamic environments like Twitter. An environment for analysing and labelling data was designed to prepare the dataset before executing the experiments. Possible application areas for the proposed algorithm are tweet recommendation and trend detection.
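For a flavour of the online half of such an algorithm, scikit-learn's generic online LDA can update topics one mini-batch at a time; the labeled-LDA supervision that distinguishes the proposed OLLDA is not shown, and the two-document batches below are toy stand-ins for a tweet stream:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

batches = [
    ["bayes theorem updates beliefs", "prior likelihood posterior bayes"],
    ["new phone camera review", "battery life phone review"],
]

vectorizer = CountVectorizer()
vectorizer.fit([t for batch in batches for t in batch])  # fix the vocabulary

lda = LatentDirichletAllocation(n_components=2, random_state=0)
for batch in batches:
    lda.partial_fit(vectorizer.transform(batch))  # one online update per batch

vocab = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[-3:][::-1]
    print(f"topic {k}:", [vocab[i] for i in top])
```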
9

Wang, David I.-Chung. "Speaker diarization: 'who spoke when'." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/59624/1/David_Wang_Thesis.pdf.

Full text
Abstract:
Speaker diarization is the process of annotating an input audio stream with information that attributes temporal regions of the audio signal to their respective sources, which may include both speech and non-speech events. For speech regions, the diarization system also specifies the locations of speaker boundaries and assigns relative speaker labels to each homogeneous segment of speech. In short, speaker diarization systems effectively answer the question of 'who spoke when'. There are several important applications for speaker diarization technology, such as facilitating speaker indexing systems to allow users to directly access the relevant segments of interest within a given audio recording, and assisting with other downstream processes such as summarizing and parsing. When combined with automatic speech recognition (ASR) systems, the metadata extracted from a speaker diarization system can provide complementary information for ASR transcripts, including the location of speaker turns and relative speaker segment labels, making the transcripts more readable. Speaker diarization output can also be used to localize the instances of specific speakers to pool data for model adaptation, which in turn boosts transcription accuracies. Speaker diarization therefore plays an important role as a preliminary step in the automatic transcription of audio data. The aim of this work is to improve the usefulness and practicality of speaker diarization technology through the reduction of diarization error rates. In particular, this research is focused on the segmentation and clustering stages within a diarization system. Although particular emphasis is placed on the broadcast news audio domain, and systems developed throughout this work are trained and tested on broadcast news data, the techniques proposed in this dissertation are also applicable to other domains, including telephone conversations and meeting audio. Three main research themes were pursued: heuristic rules for speaker segmentation, modelling uncertainty in speaker model estimates, and modelling uncertainty in eigenvoice speaker modelling. The use of heuristic approaches for the speaker segmentation task was first investigated, with emphasis placed on minimizing missed boundary detections. A set of heuristic rules was proposed to govern the detection and heuristic selection of candidate speaker segment boundaries. A second pass, using the same heuristic algorithm with a smaller window, was also proposed, with the aim of improving the detection of boundaries around short speaker segments. Compared to single-threshold-based methods, the proposed heuristic approach was shown to provide improved segmentation performance, leading to a reduction in the overall diarization error rate. Methods to model the uncertainty in speaker model estimates were developed to address the difficulties associated with making segmentation and clustering decisions with limited data in the speaker segments. The Bayes factor, derived specifically for multivariate Gaussian speaker modelling, was introduced to account for the uncertainty of the speaker model estimates. The use of the Bayes factor also enabled the incorporation of prior information regarding the audio to aid segmentation and clustering decisions. The idea of modelling uncertainty in speaker model estimates was also extended to the eigenvoice speaker modelling framework for the speaker clustering task. Building on the application of Bayesian approaches to the speaker diarization problem, the proposed approach takes into account the uncertainty associated with the explicit estimation of the speaker factors. The proposed decision criteria, based on Bayesian theory, were shown to generally outperform their non-Bayesian counterparts.
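The thesis derives a Bayes factor for multivariate Gaussian speaker models; its simpler, long-established cousin in diarization is the ΔBIC criterion for deciding whether two segments come from the same speaker. A sketch with synthetic vectors standing in for audio features:

```python
import numpy as np

def delta_bic(x, y, lam=1.0):
    """ΔBIC for 'one Gaussian vs two Gaussians' over two segments.
    Positive values favour 'two different speakers'."""
    z = np.vstack([x, y])
    n, n1, n2, d = len(z), len(x), len(y), z.shape[1]
    logdet = lambda a: np.linalg.slogdet(np.cov(a, rowvar=False))[1]
    penalty = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(n)
    return 0.5 * (n * logdet(z)
                  - n1 * logdet(x) - n2 * logdet(y)) - lam * penalty

rng = np.random.default_rng(1)
speaker_a = rng.normal(0.0, 1.0, (200, 4))
speaker_b = rng.normal(2.0, 1.0, (200, 4))   # shifted: a different voice
print("same speaker:      ", round(delta_bic(speaker_a[:100], speaker_a[100:]), 1))
print("different speakers:", round(delta_bic(speaker_a, speaker_b), 1))
```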
10

Crawford, B. "Modelling the capillary extrusion of silicone rubber bases." Thesis, Queen's University Belfast, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.398144.

Full text
11

Gensler, Sonja. "Heterogenität in der Präferenzanalyse: ein Vergleich von hierarchischen Bayes-Modellen und Finite-Mixture-Modellen." Wiesbaden: Dt. Univ.-Verl., 2003. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=010431500&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
12

Li, Tian Siong. "Modelling and kinetics estimation in gibbsite precipitation from caustic aluminate solutions." Thesis, Curtin University, 2000. http://hdl.handle.net/20.500.11937/1103.

Full text
Abstract:
Precipitation of gibbsite from supersaturated caustic aluminate solutions has been investigated extensively due to its central role in the commercial Bayer plant for extracting the alumina compound from bauxite. The primary focus of Bayer process simulation and optimisation is to help maximise product recovery and the production of a product crystal size distribution (CSD) that meets the product specification and improves downstream process performance. The product CSD is essentially determined by the nucleation, growth and agglomeration kinetics, which occur simultaneously during the precipitation process. These processes are still poorly understood, owing to the high complexity of their mechanisms and of the structure of the caustic aluminate solutions. This research focuses on the modelling and kinetics estimation aspects of simulating gibbsite precipitation. Population balance theory was used to derive different laboratory gibbsite precipitator models, and the discretised population balance models of Hounslow, Ryall & Marshall (1988) and Litster, Smit & Hounslow (1995) were employed to solve the resulting partial integro-differential equations. Gibbsite kinetics rates were determined from literature correlation models and also estimated from the CSD data using the so-called differential method. Modelling of non-stationary gibbsite precipitation systems showed that error propagated with the precipitation time scale. The main contribution to the observed error was found to be from the uncertainties in the kinetic parameter estimates, which are estimated from experimental data and used in the simulation. This result showed that care is required when simulating the CSD of non-stationary precipitators over longer time scales, and that methods which produce precise estimates of the kinetics rates from the experimental data need to be used. A kinetics estimation study from repeated batch gibbsite precipitation data showed that the uncertainty in the experimental data, coupled with the error incurred by the kinetic parameter estimation procedure used, resulted in large uncertainties in the kinetics estimates. The influences of the experimental design and the kinetics estimation technique on the accuracy and precision of estimates of the nucleation, growth and agglomeration kinetics for the gibbsite precipitation system were investigated. It was found that the operating conditions have a greater impact on the uncertainties in the estimated kinetics than does the precipitator configuration. The kinetics estimates from the integral method, i.e. the non-linear parameter optimisation method, describe the gibbsite precipitation data better than those obtained by the differential method. However, both kinetics estimation techniques incurred significant uncertainties in the kinetics estimates, particularly toward the end of the precipitation runs, where the kinetics rates are slow. The uncertainties in the kinetics estimates are strongly correlated with the magnitude of the kinetics values and are dependent on the change in total crystal numbers and total crystal volume. Batch gibbsite precipitation data from an inhomogeneously-mixed precipitator were compared to a well-mixed precipitation system operated under the same operating conditions, i.e. supersaturation, seed charge, seed type, mean shear rate and temperature. It was found that the gibbsite agglomeration kinetics estimates, and hence the product CSD, were significantly different, but the gibbsite growth rates were similar. It was also found that a compartmental model approach cannot fully account for the differences in suspension hydrodynamics, and resulted in unsatisfactory CSD predictions for the inhomogeneously-mixed precipitator. This is attributed to the coupled effects of local energy dissipation rate and solids-phase mixing on the agglomeration process.
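A minimal illustration of the population balance idea, keeping only the growth term and using first-order upwind differencing (far simpler than the Hounslow or Litster discretisations employed in the thesis):

```python
import numpy as np

sizes = np.linspace(1.0, 100.0, 100)            # crystal size grid (µm)
dx = sizes[1] - sizes[0]
n = np.exp(-0.5 * ((sizes - 20.0) / 5.0) ** 2)  # seed number density

growth_rate = 0.5   # µm per unit time, size-independent for simplicity
dt = 0.1            # chosen so growth_rate*dt/dx < 1 (stable upwind step)

def growth_step(n, g, dt, dx):
    """One upwind step of dn/dt = -g * dn/dL (growth only; the
    nucleation and agglomeration terms are omitted here)."""
    out = n.copy()
    out[1:] -= g * dt / dx * (n[1:] - n[:-1])
    return out

for _ in range(200):                 # integrate to t = 20
    n = growth_step(n, growth_rate, dt, dx)
print(f"distribution peak is now near {sizes[np.argmax(n)]:.0f} µm")
# The seed peak at 20 µm drifts to about 30 µm, i.e. g*t = 10 µm.
```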
13

González, González Larry Javier. "Modelling dynamics of RDF graphs with formal concept analysis." Tesis, Universidad de Chile, 2018. http://repositorio.uchile.cl/handle/2250/168144.

Full text
Abstract:
Magíster en Ciencias, Mención Computación
The Semantic Web is a network of data organized in such a way that it can be manipulated directly by both humans and computers. RDF is the framework recommended by the W3C for representing information on the Semantic Web. RDF uses a graph-based data model that requires no fixed schema, which makes RDF graphs easy to extend and integrate, but also difficult to query, understand, explore, summarize, etc. In this thesis, inspired by formal concept analysis (a subfield of applied mathematics based on the formalization of concepts and concept hierarchies, called lattices), we propose a data-driven schema for large, heterogeneous RDF graphs. The main idea is that if we can define a formal context from an RDF graph, then we can extract its formal concepts and compute a lattice from them, which yields our proposed hierarchical schema for RDF graphs. We then propose an algebra over such lattices that makes it possible (1) to compute deltas between two lattices (for example, to summarize the changes from one version of a graph to another), and (2) to add a delta to a lattice (for example, to project future changes). While this structure (and its associated algebra) may have several applications, we focus on the use case of modelling and predicting the dynamic behaviour of RDF graphs. We evaluate our methods by analysing how Wikidata changed over 11 weeks. First, we extract the sets of properties associated with individual entities in a scalable way using the MapReduce framework. These property sets (also known as characteristic sets) are annotated with their associated entities and, subsequently, with their cardinalities. Second, we propose an algorithm for constructing the lattice over the characteristic sets based on the subset relation. We evaluate the efficiency and scalability of both procedures. Finally, we use the algebraic methods to predict how Wikidata's hierarchical schema will evolve. We contrast our results with a linear regression model as a baseline. Our proposal outperforms the linear model by a wide margin, achieving a root mean square error 12 times smaller than that of the baseline. We conclude that, based on formal concept analysis, we can define and generate a hierarchical schema from an RDF graph, and that we can use such schemas to predict, at a high level, how these RDF graphs will evolve over time.
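The first step, extracting characteristic sets from RDF triples and ordering them by the subset relation, fits in a few lines; the toy triples below are invented:

```python
# Toy RDF triples: (subject, property, object).
triples = [
    ("q1", "name", "a"), ("q1", "birth", "b"),
    ("q2", "name", "c"), ("q2", "birth", "d"), ("q2", "award", "e"),
    ("q3", "name", "f"),
]

# Characteristic set of an entity = the set of properties it uses.
char_sets = {}
for s, p, _ in triples:
    char_sets.setdefault(s, set()).add(p)
distinct = {frozenset(ps) for ps in char_sets.values()}

# Order the distinct characteristic sets by proper inclusion; these
# edges are the raw material of the concept lattice over the graph.
edges = [(a, b) for a in distinct for b in distinct if a < b]
for lower, upper in edges:
    print(sorted(lower), "is a subset of", sorted(upper))
```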
14

Abidi, Amna. "Imperfect RDF Databases : From Modelling to Querying." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2019. http://www.theses.fr/2019ESMA0008/document.

Full text
Abstract:
The ever-increasing interest in RDF data on the Web has led to several important research efforts to enrich the traditional RDF data formalism for exploitation and analysis purposes. The work of this thesis continues those efforts by addressing the issue of RDF data management in the presence of imperfection (untruthfulness, uncertainty, etc.). The main contributions of this dissertation are as follows. (1) We tackled the trusted RDF data model: we proposed extending skyline queries to trust-weighted RDF data, which consists in extracting the most interesting trusted resources according to user-defined criteria. (2) We studied, via statistical methods, the impact of the trust measure on the Trust-skyline set. (3) We integrated into the structure of RDF data (i.e., the subject-property-object triple) a fourth element expressing a possibility measure, to reflect the user's opinion about the truth of a statement. To deal with possibility requirements, an appropriate language framework is introduced, namely Pi-SPARQL, which extends SPARQL into a possibility-aware query language. (4) Finally, we studied a new skyline operator variant to extract possibilistic RDF resources that are possibly dominated by no other resource in the sense of Pareto optimality.
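The skyline (Pareto front) operator at the heart of contributions (1) and (4) is short to state: keep every resource that no other resource dominates. The resources and their (trust, relevance) scores below are invented:

```python
def dominates(a, b):
    """a dominates b if it is at least as good on every criterion and
    strictly better on at least one (higher is better here)."""
    return all(x >= y for x, y in zip(a, b)) and \
           any(x > y for x, y in zip(a, b))

def skyline(resources):
    """The resources that no other resource dominates."""
    return {name: s for name, s in resources.items()
            if not any(dominates(other, s)
                       for o, other in resources.items() if o != name)}

resources = {
    "r1": (0.9, 0.4),   # very trusted, mildly relevant
    "r2": (0.6, 0.8),
    "r3": (0.5, 0.5),   # dominated by r2 and r4
    "r4": (0.7, 0.9),
}
print(skyline(resources))   # r1 and r4 survive; no single winner
```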
15

Naumann, Felix. "Nonparametrische Bayes-Inferenz in mehrdimensionalen Item Response Modellen." Betreuer: Markus Bühner. München: Universitätsbibliothek der Ludwig-Maximilians-Universität, 2018. http://d-nb.info/1174142766/34.

Full text
16

Chakroun, Chedlia. "Contribution à la définition d'une méthode de conception de bases de données à base ontologique." PhD thesis, ISAE-ENSMA Ecole Nationale Supérieure de Mécanique et d'Aérotechnique - Poitiers, 2013. http://tel.archives-ouvertes.fr/tel-00904117.

Full text
Abstract:
Recently, ontologies have been widely adopted by companies in a variety of domains and have become central components of many applications. These models conceptualise the universe of discourse by means of primitive and sometimes redundant concepts (computed from primitive concepts). Initially, the relationship between ontologies and databases was loosely coupled. With the explosion of semantic data, persistence solutions ensuring high application performance have been proposed. As a result, a new type of database, called the ontology-based database (OBDB), has emerged. Several types of OBDB have been proposed, using different DBMSs. Each OBDB has its own architecture and its own storage models dedicated to the persistence of ontologies and their instances. At this stage, the relationship between databases and ontologies becomes tightly coupled, and several research studies have accordingly addressed the physical design phase of OBDBs, while the conceptual and logical phases have been treated only partially. To achieve a success similar to that of relational databases, OBDBs must be accompanied by design methodologies and tools covering the different stages of the database life cycle, and such a methodology should identify the redundancy built into the ontology. Our work proposes a design methodology dedicated to ontology-based databases that covers the main phases of the database development life cycle: conceptual, logical and physical design, as well as deployment. The logical design phase is carried out by incorporating dependencies between ontological concepts, similar in principle to the functional dependencies defined for relational databases. Given the diversity of OBDB architectures and the variety of storage models used to store and manage ontological data, we propose an 'à la carte' deployment approach. To validate our proposal, an implementation of the approach in an OBDB environment based on OntoDB is presented. Finally, to assist the user during the design process, a tool supporting the design of databases from a conceptual ontology is presented.
17

Schaberreiter, T. (Thomas). "A Bayesian network based on-line risk prediction framework for interdependent critical infrastructures." Doctoral thesis, Oulun yliopisto, 2013. http://urn.fi/urn:isbn:9789526202129.

Full text
Abstract:
Critical Infrastructures (CIs) are an integral part of our society and economy. Services like electricity supply or telecommunication services are expected to be available at all times, and a service failure may have catastrophic consequences for society or the economy. Current CI protection strategies date from a time when CIs or CI sectors could be operated more or less self-sufficiently, and interconnections among CIs or CI sectors, which may lead to cascading service failures in other CIs or CI sectors, were not as omnipresent as today. In this PhD thesis, a cross-sector CI model for on-line risk monitoring of CI services, called the CI security model, is presented. The model makes it possible to monitor a CI service's risk and to notify services that depend on it of possible risks, in order to reduce and mitigate possible cascading failures. The model estimates CI service risk by observing the CI service state as measured by base measurements (e.g. sensor or software states) within the CI service components, and by observing the experienced service risk of the CI services it depends on (CI service dependencies). CI service risk is estimated in a probabilistic way using a Bayesian network based approach. Furthermore, the model allows CI service risk prediction in the short-term, mid-term and long-term future, given a current CI service risk, and it makes it possible to model interdependencies (a CI service risk that loops back to the originating service via dependencies), a special case that is difficult to model using Bayesian networks. The representation of a CI as a CI security model requires analysis. In this PhD thesis, a CI analysis method based on the PROTOS-MATINE dependency analysis methodology is presented in order to analyse CIs and represent them as CI services, CI service dependencies and base measurements. Additional research presented in this PhD thesis concerns assurance indicators able to perform an on-line evaluation of the correctness of risk estimates within a CI service, as well as of risk estimates received from dependencies. A tool that supports all steps of establishing a CI security model was implemented during this PhD research. The research on the CI security model and the assurance indicators was validated in a case study, and the initial results suggest its applicability to CI environments.
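A two-node toy version of such probabilistic risk propagation, computed by brute-force enumeration over the joint states (i.e. Bayes' theorem directly, without a Bayesian network library) and with invented probabilities:

```python
from itertools import product

# The risk state of a dependency (power supply) influences the risk
# state of the service depending on it (telecom), which a sensor observes.
p_power_high = 0.2                          # P(power risk is high)
p_telecom_high = {True: 0.8, False: 0.1}    # P(telecom high | power high?)
p_alarm = {True: 0.9, False: 0.2}           # P(alarm | telecom high?)

num = den = 0.0
for power, telecom in product([True, False], repeat=2):
    p = p_power_high if power else 1.0 - p_power_high
    p *= p_telecom_high[power] if telecom else 1.0 - p_telecom_high[power]
    p *= p_alarm[telecom]                   # the alarm was observed
    den += p
    if power:
        num += p
print(f"P(power risk high | alarm) = {num / den:.3f}")  # 0.2 rises to ~0.41
```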
18

Morgan, Robert E. "Modelling the explanatory bases of export intention : an investigation of non-exporting firms within the United Kingdom." Thesis, Cardiff University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389982.

Full text
19

Pérez-Guevara, Martín. "Bases neuronales de binding dans des représentations symboliques : exploration expérimentale et de modélisation." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCB082/document.

Full text
Abstract:
The aim of this thesis is to understand how the brain computes and represents symbolic structures, such as those encountered in language or mathematics. The existence of parts in structures like morphemes, words and phrases has been established through decades of linguistic analysis and psycholinguistic experiments. Nonetheless, the neural implementation of the operations that support the extreme combinatorial nature of language remains unsettled. Some basic composition operations that allow the stable internal representation of sensory objects in the sensory cortex, like hierarchical pattern recognition, receptive fields, pooling and normalization, have started to be understood[5]. But models of the binding operations required for the construction of complex, possibly hierarchical, symbolic structures, on which precise manipulation of their components is a requisite, lack empirical testing and are still unable to predict neuroimaging signals. In this sense, bridging the gap between experimental neuroimaging evidence and the available modelling solutions to the binding problem is a crucial step for the advancement of our understanding of how the brain computes and represents symbolic structures. From the recognition of this problem, the goal of this PhD became the identification and experimental testing of theories, based on neural networks, capable of dealing with symbolic structures, for which we could establish testable predictions against existing fMRI and ECoG neuroimaging measurements derived from language processing tasks. We identified two powerful but very different modelling approaches to the problem. The first is in the tradition of Vectorial Symbolic Architectures (VSA), which brings precise mathematical modelling to the operations required to represent structures in the neural units of artificial neural networks and manipulate them. This is Smolensky's formalism with tensor product representations (TPR)[10], which he demonstrates can encompass most of the previous work in VSA, like Synchronous Firing[9], Holographic Reduced Representations[8] and Recursive Auto-Associative Memories[1]. The second is the Neural Blackboard Architecture (NBA) developed by Marc De Kamps and Van der Velde[11], which importantly differentiates itself by proposing an implementation of binding by process in circuits formed by neural assemblies of spiking neural networks. Instead of solving binding by assuming precise and particular algebraic operations on vectors, the NBA proposes the establishment of transient connectivity changes in a circuit structure of neural assemblies, such that the potential flow of neural activity allowed by working memory mechanisms after a binding process takes place implicitly represents symbolic structures. The first part of the thesis develops in more detail the theory behind each of these models and their relationship, from the common perspective of solving the binding problem. Both models are capable of addressing most of the theoretical challenges currently posed for the neural modelling of symbolic structures, including those presented by Jackendoff[3]. Nonetheless, they are very different: Smolensky's TPR relies mostly on spatial, static considerations of artificial neural units, with explicit, completely distributed and spatially stable representations implemented through vectors, while the NBA relies on temporal, dynamic considerations of biologically based spiking neural units, with implicit, semi-local and spatially unstable representations implemented through neural assemblies. For the second part of the thesis, we identified the superposition principle, which consists of the addition of the neural activations of each of the sub-parts of a symbolic structure, as one of the most crucial assumptions of Smolensky's TPR. (...)
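The TPR binding and superposition operations are easy to sketch with numpy: each filler is bound to its role with an outer product, bindings are superposed by addition, and orthonormal role vectors allow exact unbinding. The vectors below are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Two orthonormal role vectors (agent, patient) and two random fillers.
roles = np.linalg.qr(rng.normal(size=(dim, 2)))[0].T
fillers = {"mary": rng.normal(size=dim), "john": rng.normal(size=dim)}

# "mary sees john": bind fillers to roles, then superpose the bindings.
structure = (np.outer(fillers["mary"], roles[0])
             + np.outer(fillers["john"], roles[1]))

# Unbinding: contracting with a role vector retrieves its filler,
# exactly because the roles are orthonormal.
retrieved = structure @ roles[0]
best = max(fillers, key=lambda k: fillers[k] @ retrieved)
print("filler bound to the agent role:", best)   # -> mary
```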
20

Ramraj, Anitha. "Computational modelling of intermolecular interactions in bio, organic and nano molecules." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/computational-modelling-of-intermolecular-interactions-in-bio-organic-and-nano-molecules(7a41f3cd-1847-4ccf-8853-5fd8be2a2c15).html.

Full text
Abstract:
We have investigated the noncovalent interactions in carbohydrate-aromatic complexes, which are pivotal to the recognition of carbohydrates by proteins. We have employed quantum mechanical methods to study carbohydrate-aromatic complexes. Due to the importance of the dispersion contribution to the interaction energy, we mainly use density functional theory augmented with an empirical correction for the dispersion interactions (DFT-D). We have validated this method with a limited number of high-level ab initio calculations. We have also analysed the vibrational and NMR chemical shift characteristics using the DFT-D method. We have mainly studied the complexes involving β-glucose with 3-methylindole and p-hydroxytoluene, which are analogues of tryptophan and tyrosine, respectively. We find that the main contribution to the interaction energy comes from CH/π and OH/π interactions, and that the interaction energy of complexes involving CH/π and OH/π interactions is reflected in the associated blue and red shifts of the vibrational spectrum. The interactions involving 3-methylindole are somewhat greater than those for p-hydroxytoluene. The C-H blueshifts also parallel the predicted NMR proton shifts. We have further tested different density functionals, including both standard density functionals and the newly developed M0x functionals, as well as the MP2 method, for studying carbohydrate-aromatic complexes. The DFT-D method and the M06 functionals of the M0x family are found to perform better, while the B3LYP and BLYP functionals perform poorly; including a dispersion term in BLYP improves its performance. The dispersion energy dominates the interaction energy of carbohydrate-aromatic complexes: from the DFT-D calculations, we found that the complexes would be unstable without the contribution from dispersive energy. We have also studied the importance of noncovalent interactions in the functionalization of nanotubes by nucleic acid bases and aromatic amino acids, using semi-empirical methods with a dispersion term such as PM3-D and PM3-D*. We find that both semi-empirical schemes give reasonable interaction energies with respect to DFT-D interaction energies. We have also used the PM3-D method to study the adsorption of organic pollutants on a graphene sheet and on nanotubes. We found that the semi-empirical schemes, which are faster and cheaper, are suitable for studying these larger molecules involving noncovalent interactions and can be used as an alternative to the DFT-D method. We have also studied the importance of dispersion interaction and the effect of steric hindrance in the aggregation of functionalized anthracenes and pentacenes, and have employed molecular dynamics simulation methods to study the aggregation of anthracene molecules in toluene solution.
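The empirical correction that the '-D' in DFT-D refers to is, in Grimme's D2 form, a damped pairwise -C6/R^6 sum added to the DFT energy. A sketch with placeholder parameters (not Grimme's published values):

```python
import numpy as np

def d2_dispersion(coords, c6, r_vdw, s6=1.0, d=20.0):
    """Grimme D2-style correction: E = -s6 * sum_{i<j} C6_ij/R_ij^6
    * f_damp(R_ij), where the Fermi-type damping switches the term
    off at short range, which the functional already handles."""
    e = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r = np.linalg.norm(coords[i] - coords[j])
            c6_ij = np.sqrt(c6[i] * c6[j])    # geometric-mean combination
            r0 = r_vdw[i] + r_vdw[j]
            f_damp = 1.0 / (1.0 + np.exp(-d * (r / r0 - 1.0)))
            e -= s6 * c6_ij / r**6 * f_damp
    return e

# Two carbon-like atoms 3.5 Å apart, with made-up C6 and vdW radii.
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.5]])
e = d2_dispersion(coords, c6=[1.75, 1.75], r_vdw=[1.45, 1.45])
print(f"pairwise dispersion energy ≈ {e:.6f} (arbitrary units)")
```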
23

Tchouanguem, Djuedja Justine Flore. "Information modelling for the development of sustainable construction (MINDOC)." Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0133.

Full text
Abstract:
In recent decades, controlling environmental impact through life cycle analysis has become a topical issue in the building sector. However, there are some problems when trying to exchange information between experts for conducting various studies, such as the environmental assessment of a building. There is also heterogeneity between construction product databases, because they do not have the same characteristics and do not use the same basis to measure the environmental impact of each construction product. Moreover, it is still difficult to exploit the full potential of linking BIM, the Semantic Web and construction product databases, because the idea of combining them is relatively recent. The goal of this thesis is to increase the flexibility needed to assess a building's environmental impact in a timely manner. First, our research identifies gaps in interoperability in the AEC (Architecture, Engineering and Construction) domain. Then, we fill some of the shortcomings encountered in the formalization of building information and the generation of building data in Semantic Web formats. We further promote efficient use of BIM throughout the building life cycle by integrating and referencing environmental data on construction products in a BIM tool. Moreover, semantics has been improved by the enhancement of a well-known building ontology, namely ifcOWL, the Web Ontology Language (OWL) version of the IFC (Industry Foundation Classes). Finally, we carried out an experimental case study on a small building to validate our methodology.
24

Bertin, Benjamin. "Modélisation sémantique des bases de données d'inventaires en cycle de vie." Thesis, Lyon, INSA, 2013. http://www.theses.fr/2013ISAL0049/document.

Full text
Abstract:
Environmental impact assessment of goods and services is nowadays a major challenge for both economic and ethical reasons. Life Cycle Assessment provides a well-accepted methodology for modeling the environmental impacts of human activities. This methodology relies on the decomposition of a studied system into interdependent processes, in a step called Life Cycle Inventory. Every process has several environmental impacts, and the composition of those processes gives the cumulated environmental impact of the studied human activities. Several companies and government agencies provide life cycle inventory databases, containing several thousands of processes and tens of thousands of dependency relations, which LCA practitioners reuse when studying a new system. Understanding and auditing those databases requires analyzing a huge number of processes and their interdependencies. We identified two problems that experts face when using these databases: organizing the processes and their dependency relations to improve comprehensibility; and calculating the impacts of a model (a composition of processes) and, when the calculation does not converge, finding out why. In this thesis, we highlight the existence of semantic similarities between the processes and their dependency relations and propose a new way to model the dependency relations in an inventory database, based on a semantic indexing of the processes using an ontology and a multi-layer model of the dependency relations; we also study two declarative approaches for interacting with this multi-layer model. We then study methods for calculating the impacts based on classical notions of linear algebra and graph theory, together with the conditions under which these methods fail to converge in the presence of cycles in the dependency model. A prototype implementing this approach gave convincing results on the cases studied: a case study on the electricity production processes of the United States, extracted from the life cycle inventory database of the US environmental agency. This prototype is the basis of an operational application now used in industry.
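For orientation, the impact calculation alluded to above is conventionally a linear system over the process dependency matrix; a minimal sketch with an invented three-process inventory (the convergence check mirrors the cyclic-model condition studied in the thesis, but the numbers are purely illustrative):

```python
import numpy as np

# Hypothetical 3-process inventory: A[i, j] = amount of process i consumed
# per unit output of process j (the dependency relations of the database).
A = np.array([[0.0, 0.2, 0.0],
              [0.1, 0.0, 0.3],   # cycles between processes are allowed
              [0.0, 0.4, 0.0]])
b = np.array([1.2, 0.8, 2.5])    # direct impact per unit of each process
demand = np.array([1.0, 0.0, 0.0])

# Cumulated activity solves x = demand + A x, i.e. x = (I - A)^-1 demand.
# The Neumann series I + A + A^2 + ... converges iff the spectral radius
# of A is below 1: the non-convergence condition for cyclic models.
if np.max(np.abs(np.linalg.eigvals(A))) < 1:
    x = np.linalg.solve(np.eye(3) - A, demand)
    print("total impact:", b @ x)
else:
    print("cyclic dependencies too strong: series does not converge")
```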
25

Boyaval, Sébastien. "Mathematical modelling and numerical simulation in materials science." Phd thesis, Université Paris-Est, 2009. http://tel.archives-ouvertes.fr/tel-00499254.

Full text
Abstract:
In a first part, we study numerical schemes using the finite-element method to discretize the Oldroyd-B system of equations, modelling a viscoelastic fluid under a no-flow boundary condition in a 2- or 3-dimensional bounded domain. The goal is to obtain schemes that are stable in the sense that they dissipate a free energy, thereby mimicking thermodynamical dissipation properties similar to those actually identified for smooth solutions of the continuous model. This study adds to numerous previous ones on the instabilities observed in the numerical simulations of viscoelastic fluids (in particular those known as High Weissenberg Number Problems). To our knowledge, this is the first study that rigorously considers numerical stability in the sense of energy dissipation for Galerkin discretizations. In a second part, we adapt and use ideas of a numerical method initially developed in the works of Y. Maday, A. T. Patera et al., the reduced-basis method, in order to efficiently simulate some multiscale models. The principle is to numerically approximate each element of a parametrized family of complicated objects in a Hilbert space by the closest linear combination within the best linear subspace spanned by a few well-chosen elements of the same parametrized family. We apply this principle to numerical problems linked to: the numerical homogenization of second-order elliptic equations with two-scale oscillating diffusion coefficients; the propagation of uncertainty (computation of the mean and the variance) in an elliptic problem with stochastic coefficients (a bounded stochastic field in a boundary condition of third type); and the Monte-Carlo computation of the expectations of numerous parametrized random variables, in particular functionals of parametrized Itô stochastic processes close to those encountered in micro-macro models of polymeric fluids, with a control variate to reduce the variance. In each application, the goal of the reduced-basis approach is to speed up the computations without any loss of precision.
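The reduced-basis principle described above can be illustrated with a toy greedy construction; the parametrized "solution" below is a stand-in one-parameter family, not one of the thesis's multiscale models:

```python
import numpy as np

def solve(mu, x):
    # stand-in "complicated object": a parametrized function sampled on a grid
    return 1.0 / (1.0 + mu * x**2)

x = np.linspace(0, 1, 200)
train = np.linspace(0.1, 10.0, 100)   # training set of parameter values
basis = []

# Greedy reduced-basis construction: repeatedly add the snapshot that is
# worst approximated by orthogonal projection onto the current basis.
for _ in range(5):
    def proj_err(mu):
        u = solve(mu, x)
        if not basis:
            return np.linalg.norm(u)
        Q = np.column_stack(basis)
        coef, *_ = np.linalg.lstsq(Q, u, rcond=None)
        return np.linalg.norm(u - Q @ coef)
    worst = max(train, key=proj_err)
    basis.append(solve(worst, x))

print("max projection error over training set:", max(proj_err(m) for m in train))
```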
26

Molléro, Roch. "Personnalisation robuste de modèles 3D électromécaniques du cœur. Application à des bases de données cliniques hétérogènes et longitudinales." Thesis, Côte d'Azur, 2017. http://www.theses.fr/2017AZUR4106/document.

Full text
Abstract:
Personalised cardiac modeling consists in creating virtual 3D simulations of real clinical cases to help clinicians predict the behaviour of the heart, or better understand some pathologies from the estimated values of biophysical parameters. In this work we first motivate the need for a consistent parameter estimation framework, with a case study where uncertainty in myocardial fibre orientation leads to an uncertainty in the estimated parameters that is extremely large compared to their physiological variability. To build a consistent approach to parameter estimation, we then tackle the computational complexity of 3D models. We introduce an original multiscale 0D/3D approach for cardiac models, based on a multiscale coupling that approximates the outputs of a 3D model with a reduced "0D" version of the same model. From this coupling we then derive an efficient multifidelity optimisation algorithm for the 3D model. In a second step, we build more than 140 personalised 3D simulations, in the context of two studies involving the longitudinal analysis of cardiac function: on the one hand, the analysis of the long-term evolution of cardiomyopathies under therapy; on the other hand, the modeling of short-term cardiovascular changes during digestion. Finally, we present an algorithm to automatically detect and select observable directions in the parameter space from a set of measurements, and to compute consistent population-based prior probabilities in these directions, which can be used to constrain parameter estimation in cases where measurements are missing. This enables consistent parameter estimation in large databases: 811 cases with the 0D model and 137 cases with the 3D model.
27

Ouertatani, Latifa. "L'enseignement-apprentissage des acides et des bases en Tunisie : une étude transversale du lycée à la première année d'université." Thesis, Bordeaux 2, 2009. http://www.theses.fr/2009BOR24925/document.

Full text
Abstract:
This work is a complete longitudinal study of the didactic transposition of the acid-base conceptual field, from upper-secondary school to the first university year in Tunisia. A literature review led to the formulation of the research questions and hypotheses and to the choice of a theoretical framework for the analysis: Chevallard's anthropological theory of didactics and the link between phenomena and their modelling in chemistry education. After a historical analysis of how the reference knowledge was constructed, we studied the institutional relationship of pupils and students with the objects of knowledge at the various levels of education. We found a lack of rigour in the use of vocabulary and/or formalism, sometimes leading to the presentation of hybrid models likely to induce alternative conceptions in pupils and students or to cause difficulties of understanding. We showed that the tasks and techniques that recur across the various levels of education concern pH calculations and titrations. In the taught knowledge we identified some inaccuracies and inadequacies which can be at the origin of certain difficulties or alternative conceptions. Concerning the evolution of the knowledge learnt over the successive levels of education, we showed that the teaching of the acid and base concepts from the second year of upper-secondary school (grade 10) to the first university year leads gradually from a "phenomenological model" to a "symbolic model", then to a "pithy formula model", but contributes little to the integration of the Brønsted scientific model. We also showed that the main difficulties encountered by pupils and students, and their tendency to fall back on alternative reasoning, lie in the difficulty of linking the three registers of chemistry: macroscopic, microscopic and symbolic. Moreover, the identified alternative conceptions and reasoning seem due to a lack of rigour in the presentation of the taught knowledge. Finally, considering the conceptual evolution during the secondary school-university transition, we showed that this transition does not provide conditions favourable to a deeper conceptual analysis of the knowledge seen in grade 12; at most, it allows some students to re-appropriate it. Proposals to improve this evolution were formulated.
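Since pH calculations are singled out above as the task type that recurs across levels, a small worked example of the exercise in question may help; this is standard textbook chemistry, not material from the thesis:

```python
import math

def weak_acid_ph(c, ka):
    """pH of a weak monoprotic acid HA at analytical concentration c (mol/L).

    Solves the mass-action/mass-balance quadratic x^2 + Ka*x - Ka*c = 0
    for x = [H3O+] (water autoprotolysis neglected).
    """
    x = (-ka + math.sqrt(ka**2 + 4 * ka * c)) / 2
    return -math.log10(x)

# 0.10 M acetic acid (Ka = 1.8e-5): pH close to 2.9
print(round(weak_acid_ph(0.10, 1.8e-5), 2))
```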
28

Falk, Matthew Gregory. "Incorporating uncertainty in environmental models informed by imagery." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/33235/1/Matthew_Falk_Thesis.pdf.

Full text
Abstract:
In this thesis, the issue of incorporating uncertainty for environmental modelling informed by imagery is explored by considering uncertainty in deterministic modelling, measurement uncertainty and uncertainty in image composition. Incorporating uncertainty in deterministic modelling is extended for use with imagery using the Bayesian melding approach. In the application presented, slope steepness is shown to be the main contributor to total uncertainty in the Revised Universal Soil Loss Equation. A spatial sampling procedure is also proposed to assist in implementing Bayesian melding given the increased data size with models informed by imagery. Measurement error models are another approach to incorporating uncertainty when data is informed by imagery. These models for measurement uncertainty, considered in a Bayesian conditional independence framework, are applied to ecological data generated from imagery. The models are shown to be appropriate and useful in certain situations. Measurement uncertainty is also considered in the context of change detection when two images are not co-registered. An approach for detecting change in two successive images is proposed that is not affected by registration. The procedure uses the Kolmogorov-Smirnov test on homogeneous segments of an image to detect change, with the homogeneous segments determined using a Bayesian mixture model of pixel values. Using the mixture model to segment an image also allows for uncertainty in the composition of an image. This thesis concludes by comparing several different Bayesian image segmentation approaches that allow for uncertainty regarding the allocation of pixels to different ground components. Each segmentation approach is applied to a data set of chlorophyll values and shown to have different benefits and drawbacks depending on the aims of the analysis.
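The registration-free change-detection step can be sketched in a few lines, assuming the homogeneous segments have already been extracted by the mixture model; the pixel values below are synthetic:

```python
import numpy as np
from scipy.stats import ks_2samp

def segment_changed(seg1, seg2, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test on the pixel values of a
    homogeneous segment in two successive images; since the test compares
    value distributions, no pixel-to-pixel co-registration is needed.
    """
    stat, p = ks_2samp(seg1.ravel(), seg2.ravel())
    return p < alpha

rng = np.random.default_rng(1)
before = rng.normal(0.30, 0.05, size=(40, 40))  # e.g. chlorophyll values
after = rng.normal(0.45, 0.05, size=(40, 40))   # shifted distribution
print(segment_changed(before, after))            # True: change detected
```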
29

Gamet, Arnaud. "Etude et mise en oeuvre de transitions passives aux interfaces circuit/boîtier pour les bases de temps intégrées résonantes." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0002.

Full text
Abstract:
Nowadays, the integration of oscillators into microcontrollers is a major industrial challenge that drives strong competition between the main actors of this market. Indeed, sine-wave oscillators are essential circuits, and are for the most part based on external quartz-crystal or MEMS resonators. More and more investigations are being carried out to integrate the resonant structure into the package and thus avoid the external constraints that limit the performance of the oscillator. With this in mind, we studied the electrical behaviour, in particular the inductive behaviour, of the bond wires that connect a die to its protective package. The main advantage of using this passive component is its low manufacturing cost. The component was characterized using several modelling and measurement methods over a wide frequency range. An RLC model is presented, allowing analogue designers to use an equivalent electrical circuit in a standard CMOS technology. The integration of the passive component in a resonant cell is demonstrated in a prototype.
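For a sense of the orders of magnitude involved, the textbook low-frequency self-inductance of a straight round wire reproduces the familiar rule of roughly 1 nH per millimetre of bond wire; this is a generic approximation, not the RLC model extracted in the thesis:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def bondwire_inductance(length_m, radius_m):
    """Low-frequency self-inductance of a straight round wire (H):
    L = (mu0 * l / 2pi) * (ln(2l/r) - 3/4).
    Neglects skin effect, mutual coupling and the ground return path.
    """
    return MU0 * length_m / (2 * math.pi) * (math.log(2 * length_m / radius_m) - 0.75)

# 1 mm of 25-um-diameter gold wire: about 0.9 nH, i.e. the usual ~1 nH/mm rule
print(bondwire_inductance(1e-3, 12.5e-6) * 1e9, "nH")
```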
30

Karlovets, Ekaterina. "Spectroscopie d'absorption à très haute sensitivité de différents isotopologues du dioxyde de carbone." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENY027/document.

Full text
Abstract:
This thesis is devoted to the investigation of the high-resolution near-infrared spectra of carbon dioxide; it includes experimental measurements, theoretical modeling of line positions and intensities, and refinement and extension of the set of effective operator parameters. The results can be divided into three parts.
In the first part, we present the equations for the q0J, qJ, q2J and q3J-type parameters of the matrix elements of the effective dipole-moment operator, in terms of the dipole-moment derivatives and force-field constants, derived by the contact transformation method for the following carbon dioxide isotopologues: 16O12C18O, 16O12C17O, 16O13C18O, 16O13C17O, 17O12C18O and 17O13C18O. Using these equations and the isotopic relations obtained for the molecular constants, we derived the effective dipole-moment parameters for the ∆P = 0, 2, 4, 6 and 8 series of transitions of the six asymmetric carbon dioxide isotopologues above (P = 2V1 + V2 + 3V3 is the polyad number, where V1, V2 and V3 are the vibrational quantum numbers). The parameters reported in the literature are compared with those obtained in this work and the differences are discussed.
The second part is devoted to the analysis of the room-temperature absorption spectrum of highly 18O-enriched carbon dioxide, recorded by very high sensitivity CW-Cavity Ring Down Spectroscopy between 5851 and 6990 cm-1 (1.71-1.43 µm). Overall, 19526 transitions belonging to eleven isotopologues (12C16O2, 13C16O2, 16O12C18O, 16O12C17O, 16O13C18O, 16O13C17O, 12C18O2, 17O12C18O, 12C17O2, 13C18O2 and 17O13C18O) were assigned on the basis of the predictions of the effective Hamiltonian model. Line intensities of the weakest transitions are on the order of 2×10-29 cm/molecule. The line positions were determined with an accuracy better than 1×10-3 cm-1, while the absolute line intensities are reported with an uncertainty better than 10%. All the identified bands correspond to the ∆P = 8, 9 and 10 series of transitions. Accurate spectroscopic parameters were derived for a total of 211 bands belonging to nine isotopologues. Nine resonance perturbations of the upper-state rotational structure were identified for the 16O12C18O, 12C18O2, 13C18O2, 16O13C18O, 16O12C17O and 17O12C18O isotopologues. New sets of Hamiltonian parameters were obtained by global modeling of the line positions within the effective Hamiltonian approach. Using a similar approach, global fits of the intensity values obtained for the ∆P = 8, 9 and 10 series of transitions were used to derive the corresponding set of effective dipole-moment parameters.
In the third part, we report the analysis of the absorption spectrum of natural carbon dioxide by high-sensitivity CW-Cavity Ring Down Spectroscopy between 7909 and 8370 cm-1 (1.26-1.19 µm). Overall, 3425 transitions belonging to 61 bands of 12C16O2, 13C16O2, 16O12C18O, 16O12C17O, 16O13C18O and 16O13C17O were assigned. In the studied spectral region, all bands correspond to the ∆P = 11 series of transitions. Accurate spectroscopic parameters of the upper states of 57 bands were derived from a fit of the measured line positions (typical rms deviations of about 0.6×10-3 cm-1). Global fits of the obtained intensity values of the ∆P = 11 series of transitions were used to determine the corresponding set of effective dipole-moment parameters for the six studied isotopologues.
The large set of new observations obtained in this thesis has an important impact on the global modeling of the high-resolution spectra of carbon dioxide. It has allowed refining and extending the sets of effective dipole-moment and effective Hamiltonian parameters, and has substantially improved the quality of the line positions and intensities in the most widely used spectroscopic databases of carbon dioxide (HITRAN, GEISA, CDSD).
31

Oshurko, Ievgeniia. "Knowledge representation and curation in hierarchies of graphs." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN024.

Full text
Abstract:
The task of automatically extracting insights or building computational models from knowledge on complex systems greatly relies on the choice of an appropriate representation. This work makes an effort towards building a framework suitable for the representation of fragmented knowledge on complex systems and its semi-automated curation: continuous collation, integration, annotation and revision. We propose a knowledge representation system based on hierarchies of graphs related by graph homomorphisms. Individual graphs situated in such hierarchies represent distinct fragments of knowledge, and the homomorphisms allow relating these fragments. Their graphical structure can be used efficiently to express entities and their relations. We focus on the design of mathematical mechanisms, based on algebraic approaches to graph rewriting, for the transformation of individual graphs in hierarchies that maintains consistent relations between them. Such mechanisms provide a transparent audit trail, as well as an infrastructure for maintaining multiple versions of knowledge. We describe how the developed theory can be used for building schema-aware graph databases that provide schema-data co-evolution capabilities. The proposed knowledge representation framework is used to build the KAMI (Knowledge Aggregation and Model Instantiation) framework for the curation of cellular signalling knowledge. The framework allows semi-automated aggregation of individual facts on protein-protein interactions into knowledge corpora, reuse of this knowledge for the instantiation of signalling models in different cellular contexts, and generation of executable rule-based models.
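The basic consistency condition between two levels of such a hierarchy, namely that the typing map is a graph homomorphism, can be sketched in a few lines; the toy schema and instance below are invented for illustration:

```python
def is_homomorphism(mapping, g, h):
    """Check that `mapping` (node of g -> node of h) preserves edges,
    i.e. every edge (u, v) of g maps to an edge of h: the condition
    relating a graph to the one above it in the hierarchy.
    """
    return all((mapping[u], mapping[v]) in h["edges"] for u, v in g["edges"])

# Toy instance graph typed by a toy schema graph
schema = {"nodes": {"agent", "region"},
          "edges": {("agent", "region"), ("region", "region")}}
instance = {"nodes": {"p1", "p2", "s1"},
            "edges": {("p1", "s1"), ("p2", "s1")}}
typing = {"p1": "agent", "p2": "agent", "s1": "region"}

print(is_homomorphism(typing, instance, schema))  # True
```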
32

Pham, thi Tam ngoc. "Caractérisation et modélisation du comportement thermodynamique du combustible RNR-Na sous irradiation." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4044/document.

Full text
Abstract:
For a burn-up higher than 7 at.%, the volatile fission products (Cs, I and Te) and metallic ones (Mo) are partially released from the fuel pellet and form a layer of compounds between the outer surface of the fuel and the inner surface of the stainless-steel cladding. This layer is called the JOG, the French acronym for Joint Oxyde-Gaine. My subject is focused on two topics: the thermodynamic study of the (Cs, I, Te, Mo, O) + (U, Pu) system, and the migration of those fission products towards the gap to form the JOG. The thermodynamic study was the first step of my work. On the basis of a critical literature survey, the following systems were optimized by the CALPHAD method: Cs-Te, Cs-I and Cs-Mo-O. In parallel, an experimental study was undertaken in order to validate our CALPHAD modelling of the Cs-Te binary system. In a second step, the thermodynamic data coming from the CALPHAD modelling were introduced into the database used with the thermochemical computation code ANGE (a CEA code derived from the SOLGASMIX software), in order to calculate the chemical composition of the irradiated fuel as a function of burn-up and temperature. In a third and last step, the thermochemical computation code ANGE (Advanced Numeric Gibbs Energy minimizer) was coupled with the fuel performance code GERMINAL V2, which simulates the thermo-mechanical behaviour of SFR fuel.
33

Vandi, Matteo. "Valutazione della sicurezza e progettazione di interventi di adeguamento statico e funzionale del cavalcaferrovia di Strada dell’Alpo a Verona." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
This thesis is aimed at the safety assessment and the design of static and functional retrofitting interventions for the Strada dell'Alpo railway overpass in Verona. The analyses are based on the provisions of the Italian standards, in particular the NTC 2018. The first phase consists in gaining knowledge of the structure under study, the so-called historical-critical analysis. In the second phase, an accurate geometric, structural and functional survey of the structure is carried out in order to validate and update the available documents; in this phase, information on the deterioration of the structure is also collected. For the assessment of the deterioration, defect sheets are compiled for each element investigated. Subsequently, a simulated design of the structure is performed, in order to reconstruct the design techniques of the time and to detect possible deficiencies. For this purpose, two important assumptions are made: the first concerns the use of the loads of the period, the second considers the materials and cross-sections as undamaged. A safety assessment is then carried out under two further assumptions: the first with the loads updated to the NTC 2018 standard, the second with the degraded materials. From the safety assessment, two indices are obtained: the static index ζV,i and the seismic safety index ζE,i. The next phase is the modelling of the structure with the Midas Gen software, in order to obtain the values of the internal forces acting on the elements of the structure. Static and seismic verifications are then performed on this model, with reference to the NTC 2018 and comparing the results with the standards in force at the time of design. Finally, in response to the critical issues that emerged from the safety assessment of the structure, temporary interventions as well as definitive and functional interventions are proposed in order to achieve the static retrofitting of the structure.
34

Platanakis, Emmanouil, and C. Sutcliffe. "Asset-liability modelling and pension schemes: the application of robust optimization to USS." 2015. http://hdl.handle.net/10454/8146.

Full text
Abstract:
This paper uses a novel numerical optimization technique – robust optimization – that is well suited to solving the asset–liability management (ALM) problem for pension schemes. It requires the estimation of fewer stochastic parameters, reduces estimation risk and adopts a prudent approach to asset allocation. This study is the first to apply it to a real-world pension scheme, and the first ALM model of a pension scheme to maximize the Sharpe ratio. We disaggregate pension liabilities into three components – active members, deferred members and pensioners, and transform the optimal asset allocation into the scheme’s projected contribution rate. The robust optimization model is extended to include liabilities and used to derive optimal investment policies for the Universities Superannuation Scheme (USS), benchmarked against the Sharpe and Tint, Bayes–Stein and Black–Litterman models as well as the actual USS investment decisions. Over a 144-month out-of-sample period, robust optimization is superior to the four benchmarks across 20 performance criteria and has a remarkably stable asset allocation – essentially fix-mix. These conclusions are supported by six robustness checks.
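For context, the classical (non-robust, asset-only) maximum Sharpe ratio portfolio has a closed form; the sketch below shows that baseline with made-up inputs and is not the paper's robust, liability-aware model:

```python
import numpy as np

def max_sharpe_weights(mu, cov, rf=0.0):
    """Tangency (maximum Sharpe ratio) portfolio: weights proportional to
    inv(cov) @ (mu - rf), normalised to sum to one.
    """
    excess = mu - rf
    w = np.linalg.solve(cov, excess)
    return w / w.sum()

mu = np.array([0.05, 0.07, 0.03])            # expected returns (illustrative)
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.01]])          # return covariance (illustrative)
w = max_sharpe_weights(mu, cov, rf=0.01)
sharpe = (w @ mu - 0.01) / np.sqrt(w @ cov @ w)
print(np.round(w, 3), round(sharpe, 3))
```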
35

Schmidt, Philip J. "Addressing the Uncertainty Due to Random Measurement Errors in Quantitative Analysis of Microorganism and Discrete Particle Enumeration Data." Thesis, 2010. http://hdl.handle.net/10012/5596.

Full text
Abstract:
Parameters associated with the detection and quantification of microorganisms (or discrete particles) in water such as the analytical recovery of an enumeration method, the concentration of the microorganisms or particles in the water, the log-reduction achieved using a treatment process, and the sensitivity of a detection method cannot be measured exactly. There are unavoidable random errors in the enumeration process that make estimates of these parameters imprecise and possibly also inaccurate. For example, the number of microorganisms observed divided by the volume of water analyzed is commonly used as an estimate of concentration, but there are random errors in sample collection and sample processing that make these estimates imprecise. Moreover, this estimate is inaccurate if poor analytical recovery results in observation of a different number of microorganisms than what was actually present in the sample. In this thesis, a statistical framework (using probabilistic modelling and Bayes’ theorem) is developed to enable appropriate analysis of microorganism concentration estimates given information about analytical recovery and knowledge of how various random errors in the enumeration process affect count data. Similar models are developed to enable analysis of recovery data given information about the seed dose. This statistical framework is used to address several problems: (1) estimation of parameters that describe random sample-to-sample variability in the analytical recovery of an enumeration method, (2) estimation of concentration, and quantification of the uncertainty therein, from single or replicate data (which may include non-detect samples), (3) estimation of the log-reduction of a treatment process (and the uncertainty therein) from pre- and post-treatment concentration estimates, (4) quantification of random concentration variability over time, and (5) estimation of the sensitivity of enumeration processes given knowledge about analytical recovery. The developed models are also used to investigate alternative strategies that may enable collection of more precise data. The concepts presented in this thesis are used to enhance analysis of pathogen concentration data in Quantitative Microbial Risk Assessment so that computed risk estimates are more predictive. Drinking water research and prudent management of treatment systems depend upon collection of reliable data and appropriate interpretation of the data that are available.
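A minimal sketch of the kind of Bayesian calculation described, assuming a Poisson count model thinned by a known analytical recovery and a flat prior on concentration; the thesis's models additionally treat recovery as varying randomly from sample to sample, which is omitted here:

```python
import numpy as np
from scipy.stats import poisson

# Observation model assumed for illustration: true count ~ Poisson(c * V),
# observed count ~ Binomial(true count, p) with recovery p, which collapses
# to observed ~ Poisson(c * V * p).  Grid-based posterior via Bayes' theorem.
counts = np.array([12, 7, 0, 9])     # replicate enumerations (incl. a non-detect)
volume = 10.0                         # litres analysed per replicate
recovery = 0.4                        # assumed known mean analytical recovery

c_grid = np.linspace(1e-6, 10, 2000)  # candidate concentrations (org/L)
log_lik = sum(poisson.logpmf(k, c_grid * volume * recovery) for k in counts)
post = np.exp(log_lik - log_lik.max())
post /= np.trapz(post, c_grid)        # normalise under a flat prior

print("posterior mean concentration:", np.trapz(c_grid * post, c_grid))
```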
36

Mokobane, Reshoketswe. "Application of small area estimation techniques in modelling accessibility of water, sanitation and electricity in South Africa : the case of Capricorn District." Thesis, 2019. http://hdl.handle.net/10386/2945.

Full text
Abstract:
Thesis (Ph.D. (Statistics)) -- University of Limpopo, 2019
This study presents the application of Direct and Indirect methods of Small Area Estimation (SAE) techniques. The study is aimed at estimating the trends in, and the proportions of, households accessing water, sanitation, and electricity for lighting in small areas of the Limpopo Province, South Africa. The study modified Statistics South Africa's General Household Survey series 2009-2015 and Census 2011 data. The option categories of three variables (water, sanitation and electricity for lighting) were re-coded. Empirical Bayes and Hierarchical Bayes models, fitted with Markov Chain Monte Carlo (MCMC) methods, were used to refine the estimates in SAS. The Census 2011 data aggregated in 'Supercross' were used to validate the results obtained from the models. The SAE methods were applied to account for the census undercoverage counts and rates. It was found that electricity services were more prioritised than water and sanitation in the Capricorn District of the Limpopo Province. The greatest challenge, however, lies with the poor provision of sanitation services in the country, particularly in small rural areas. The key point is to suggest policy considerations to the South African government for the future equitable provisioning of water, sanitation and electricity services across the country.
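As a hedged illustration of the empirical Bayes idea applied to small-area proportions (a beta-binomial moment fit with invented counts, not the study's exact SAS formulation):

```python
import numpy as np

y = np.array([14, 3, 45, 7])    # households with piped water, per area (invented)
n = np.array([20, 5, 60, 10])   # sampled households, per area (invented)
p = y / n

# Method-of-moments fit of a Beta(a, b) prior to the observed area proportions.
m, v = p.mean(), p.var()
common = m * (1 - m) / v - 1
a, b = m * common, (1 - m) * common

# Posterior mean per area: the direct estimate shrunk towards the overall
# mean, with small-n areas (little direct data) shrunk the most.
eb = (y + a) / (n + a + b)
print(np.round(eb, 3))
```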
37

Zhang, Xiaohe. "Exploring sediment dynamics in coastal bays by numerical modelling and remote sensing." Thesis, 2020. https://hdl.handle.net/2144/42052.

Full text
Abstract:
Coastal bays and salt marshes are buffer zones located at the interface between land and ocean, and provide ecologically and commercially important services worldwide. Unfortunately, their location makes them vulnerable and sensitive to sea-level rise (SLR), reduced sediment loads and anthropogenic modifications of the shoreline. Sediment budget and sediment availability are direct metrics for evaluating the resilience of salt marshes and coastal bays to various stressors (e.g. SLR). Salt marshes require adequate sediment inputs to maintain their elevation with respect to sea level. Understanding sediment trajectories, sediment fluxes and sediment trapping capacities in the different geomorphic units facilitates efficient restoration and coastal management. In this research I used remote sensing, field observations and numerical modelling in the Plum Island Sound in Massachusetts, USA, to explore the mechanisms controlling sediment dynamics and their feedbacks with SLR. The analysis of remotely sensed suspended sediment concentrations (SSC) reveals that a 5-year record (2013-2018) is sufficient to capture the representative range of meteorological and tidal conditions required to determine the main drivers of SSC dynamics in hydrodynamically complex, small-scale coastal bays. The interplay between river and tidal flows dominated SSC dynamics in this estuary, whereas wind-driven resuspension had a more moderate effect. SSC was higher during spring because of increased river discharge due to snowmelt. Tidal asymmetry also enhanced sediment resuspension during flood tides, possibly favoring deposition on marsh platforms. Together, water level, water-level rate of change, river discharge and wind speed were able to explain more than 60% of the variability in the main-channel SSC, thereby facilitating future prediction of SSC from these readily available variables. To determine the fate of cohesive sediments and the spatial variations of trapping capacity in the system, a high-resolution (20 m) numerical model coupled to a vegetation module was developed. The results highlight the importance of the timing between sediment inputs and tidal phase, and show that sediment discharged from tidal rivers deposits within the rivers themselves or in adjacent marshes. Most sediment is deposited in shallow tidal flats and channels and is unable to penetrate farther inside the marshes because of the limited water depths and velocities on the marsh platform. The trapping capacity of sediment in the different intertidal subdomains decreases logarithmically with the ratio between the advection length and the typical length of channels and tidal flats. Moreover, sediment deposition on the marsh decreases exponentially with distance from the channels and the marsh edge; this decay rate is a function of settling velocity and the maximum values of water depth and velocity on the marsh platform. Bed sediment compositions were generated to further explore the feedbacks between SLR, sediment dynamics and morphological changes. The results show that SLR increases tidal prism and inundation depth, facilitating sediment deposition on the marsh platform. At the same time, SLR enhances ebb-dominated currents and increases sediment resuspension, reducing the sediment-trapping capacity of tidal flats and bays and leading to a negative sediment budget for the entire system. This bimodal distribution of sediment budget trajectories will have a profound impact on the morphology of coastal bays, increasing the difference in elevation between salt marshes and tidal flats and potentially affecting intertidal ecosystems. The results also clearly indicate that landforms lower with respect to the tidal frame are more affected by SLR than salt marshes. Salt marshes, shallow bays, tidal flats, and barrier islands are therefore inherently and physically connected systems, and evaluating the effect of SLR on salt marshes should involve all these units.
38

Cordeiro, Margarida Machado. "Permeation of Weak Acids and Bases Through Lipid Bilayers – Modelling and Validation of a pH Variation Assay." Master's thesis, 2022. http://hdl.handle.net/10316/99393.

Full text
Abstract:
Master's dissertation in Chemistry presented to the Faculty of Sciences and Technology
A descoberta e desenvolvimento de fármacos é um processo iterativo e muito complexo. A insuficiente absorção, distribuição, eliminação, eficácia e segurança dos candidatos a fármacos são os principais obstáculos no desenvolvimento de novas terapias. As membranas lipídicas são a principal barreira à difusão dos solutos e determinam a disponibilidade destes compostos nos tecidos. Prever a velocidade de permeação de solutos in vivo é crucial, e existem vários estudos in vitro para entender e quantificar esse processo. O ensaio de variação de pH é particularmente relevante porque permite seguir a permeação de ácidos e bases fracas, mesmo quando estes não apresentam propriedades óticas. No entanto, existem alguns artefactos, a validade deste ensaio não é amplamente aceite e os coeficientes de permeabilidade nem sempre são consistentes com aqueles obtidos por outros métodos.Neste trabalho foi desenvolvido um modelo cinético para a permeação de ácidos e bases fracos através de membranas lipídicas que considera explicitamente os dois folhetos da membrana. As simulações desses processos permitiram identificar alguns princípios do desenho experimental necessários para não comprometer a precisão do método na previsão dos coeficientes de permeabilidade. Devem ser utilizadas vesículas lipídicas de grandes dimensões e a variação de pH deve ser inferior a 0.25 unidades. Estas conclusões resultaram da análise do efeito da topologia do sistema, da lipofilicidade do soluto e das concentrações do soluto e da sonda de fluorescência nos números de ocupação por vesícula e da comparação da dinâmica de permeação do soluto e da variação da fluorescência. Ao analisar o efeito destes parâmetros no coeficiente de permeabilidade, verificou-se que a equação comummente utilizada Papp = β × r/3 é inadequada para avaliar o coeficiente de permeabilidade de ácidos e bases fracas. Isso resulta do facto de vários pressupostos e aproximações considerados na derivação desta equação não serem válidos nas condições do ensaio.Este trabalho também se focou na análise do efeito de vários parâmetros (constante de velocidade de translocação, pKa do soluto, permeabilidades de protão e do potássio) na cinética de permeação do soluto e na variação do pH interno resultante. A permeação de ácidos fracos resulta numa rápida diminuição do pH, seguida de uma recuperação mais lenta do seu valor inicial. Na permeação de bases fracas é observado um efeito simétrico. Se apenas a espécie neutra permear a membrana, a dinâmica do soluto é bem descrita por uma função monoexponencial. No entanto, se a permeação das espécies carregadas for incluída (ainda que num processo mais lento), a acumulação de soluto pode seguir uma cinética bifásica. Neste caso, a permeabilidade aparente do soluto deve ser calculada a partir de uma constante característica média (α1 β1 + α2 β2). Porém, não é possível calculá-la com precisão a partir da dinâmica de fluorescência, uma vez que não existe uma relação direta entre as constantes características e os termos pré-exponenciais. Usar apenas a constante característica do processo rápido resultará numa sobrestimação do coeficiente de permeabilidade ao soluto. 
A fase lenta da permeação do soluto não é influenciada apenas pela permeabilidade das espécies de soluto carregadas, mas também pela permeabilidade de outras espécies carregadas em solução como H+/OH‒ e os outros iões responsáveis pela dissipação do potencial eletrostático, gerado pelo desequilíbrio de carga.Foram realizadas algumas experiências de equilíbrio de pH para estimar a permeabilidade dos iões H+/OH‒ e avaliar o efeito da valinomicina, um ionóforo com alta especificidade para K+. No entanto, estes objetivos não foram alcançados com sucesso, uma vez que os resultados experimentais obtidos eram bastante diferentes das variações previstas pelo nosso modelo cinético. Concluiu-se que as discrepâncias se devem principalmente à capacidade tampão de pH adicional presente no interior das vesículas, possivelmente devido à presença de ácido carbónico. O aumento da capacidade tampão resulta na necessidade de permeação de uma maior quantidade de iões H+/OH‒ para reestabelecer o equilíbrio de pH, o que, por sua vez, leva ao desenvolvimento de um maior desequilíbrio de cargas entre os meios aquosos externo e interno das vesículas. Assim, o potencial eletrostático gerado opõe-se ao movimento dos iões H+/OH‒ e impede o reequilibrar do pH. O completo reequilíbrio requer o movimento adicional de cargas, como K+ na presença de valinomicina, o que explica o forte efeito da valinomicina observado experimentalmente.
Drug discovery and development is an iterative and very complex process. Poor absorption, distribution, clearance, efficacy, and safety of drug candidates are the major pitfalls in the development of new therapies. Lipid membranes represent the main barrier to the free diffusion of solutes and determine the availability of these compounds in the tissues. Predicting the rate at which solutes permeate in vivo barriers is therefore crucial, and several in vitro assays are valuable for this goal. The pH-variation assay is particularly relevant because it allows following the permeation of weak acids and bases even when they do not exhibit optical properties. However, the assay is subject to some artefacts, its validity is not widely accepted, and the permeability coefficients it yields are not always consistent with those obtained by other methods.

In this work, a kinetic model was developed for the permeation of weak acids and bases through lipid membrane barriers that explicitly considers the two membrane leaflets. Simulations of these processes identified some experimental design principles that preserve the accuracy of the method in the prediction of permeability coefficients: the assay must be performed with larger vesicles, and the pH variation must be kept under 0.25 units. These conclusions were reached by analysing the effect of the topology of the system, solute lipophilicity, and solute and fluorescent pH-probe concentrations on the occupancy numbers per vesicle, and by comparing the dynamics of solute accumulation and fluorescence variation. When the effect of these parameters on the permeability coefficient was analysed, it was found that the widely used equation Papp = β × r/3 is inappropriate for assessing the permeability coefficient of drug-like weak acids and bases. This results from the failure of several assumptions and approximations made in the derivation of this equation.

This work also examined the effect of several parameters (flip-flop rate constant, solute pKa, and proton and potassium permeabilities) on the kinetics of solute permeation and the resulting pH variation inside the vesicles. The permeation of weak acids leads to a fast decrease of the internal pH, followed by a slow recovery to the initial pH value; a symmetric effect is observed for the permeation of weak bases. If only the neutral solute species may permeate the membrane, solute equilibration is well described by a mono-exponential function. However, if permeation of the charged species is included (albeit as a slower process), the accumulation of solute may follow biphasic kinetics. In this case, the apparent solute permeability should be calculated from a weighted characteristic constant (α1β1 + α2β2). When the fluorescence dynamics are used instead, this cannot be done accurately, because the relationship between the characteristic constants and the pre-exponential terms is not direct. If only the characteristic constant of the fast process is used, the solute permeability coefficient is overestimated. It was also observed that the slow phase in solute accumulation is influenced not only by the permeability of the charged solute species, but also by the permeability of other charged species in solution, such as H+/OH‒ and the ions responsible for dissipating the electrostatic potential generated by the charge unbalance.

Some pH equilibration experiments were performed to estimate the permeability of H+/OH‒ and to assess the effect of valinomycin, an ionophore with high specificity for K+. However, these objectives were not successfully achieved, as the experimental results obtained were quite different from the time courses predicted by our kinetic model. We concluded that the main reason for the discrepancies was the additional pH buffer capacity present inside the vesicles, possibly due to the presence of carbonic acid. The increased buffer capacity means that a larger amount of H+/OH‒ must permeate to achieve pH equilibration, which in turn leads to the development of a larger charge unbalance between the aqueous media inside and outside the vesicles. The electrostatic potential thus generated hinders the movement of additional H+/OH‒ and prevents pH equalisation. Full equalisation requires the countermovement of additional charges, such as K+ in the presence of valinomycin, which explains the strong effect of valinomycin observed experimentally.
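To make the biphasic-kinetics point concrete, the following minimal numerical sketch (all values hypothetical, not taken from the thesis) simulates a bi-exponential accumulation curve, fits it, and compares the apparent permeability obtained from the weighted constant α1β1 + α2β2 with the overestimate obtained when only the fast constant is used. The relation Papp = β × r/3 is applied here purely for illustration, even though the abstract argues that this relation itself breaks down for drug-like solutes.

    # Minimal sketch of biphasic solute accumulation (hypothetical values):
    # S(t) = 1 - a1*exp(-b1*t) - a2*exp(-b2*t), with a2 = 1 - a1.
    import numpy as np
    from scipy.optimize import curve_fit

    r = 50e-9  # vesicle radius in metres (hypothetical ~100 nm LUV)

    def biexp(t, a1, b1, b2):
        # Normalised accumulation with two kinetic phases; a2 = 1 - a1.
        a2 = 1.0 - a1
        return 1.0 - a1 * np.exp(-b1 * t) - a2 * np.exp(-b2 * t)

    # Synthetic 'data': a fast phase (neutral species) plus a slow phase
    # (charged species), with a little noise added.
    t = np.linspace(0, 600, 300)                       # time in seconds
    y = biexp(t, 0.8, 0.05, 0.002)                     # rate constants in s^-1
    y = y + np.random.default_rng(0).normal(0.0, 0.005, t.size)

    (a1, b1, b2), _ = curve_fit(biexp, t, y, p0=(0.5, 0.1, 0.001))
    a2 = 1.0 - a1

    beta_weighted = a1 * b1 + a2 * b2   # weighted characteristic constant
    beta_fast = b1                      # fast constant alone

    # Papp = beta * r / 3; the fast constant alone overestimates Papp.
    print(f"Papp from weighted constant:  {beta_weighted * r / 3:.2e} m/s")
    print(f"Papp from fast constant only: {beta_fast * r / 3:.2e} m/s")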
FCT
APA, Harvard, Vancouver, ISO, and other styles
39

Link, Roman Mathias. "The role of tree height and wood density for the water use, productivity and hydraulic architecture of tropical trees." Thesis, 2020. http://hdl.handle.net/21.11130/00-1735-0000-0005-13EF-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Almeida, António Manuel Galinho Pires de. "Modelo de sistemas de informação técnica baseado numa plataforma SIG." Master's thesis, 2006. http://hdl.handle.net/10362/3642.

Full text
Abstract:
Dissertation presented as a partial requirement for obtaining the Master's degree in Geographic Information Science and Systems
This work develops a conceptual model of a Technical Information System (SIT) based on a GIS platform, applied to industry and, more specifically, to the electrical network of a factory, while also presenting the methodology to follow when integrating the model into an organisation and the advantages that such a tool can provide. The conceptual model of the SIT is first specified and documented in UML; in this process, two subsystems were identified in its constitution, which were subsequently transposed to a GIS platform and to a relational DBMS platform, using for this purpose the entity-attribute-relationship (EAR) model of [CHEN, 1976] and the transposition rules of [BENNET et al., 1999]. Once the model had been transposed to the GIS and DBMS platforms, simulations of its applicability to a large organisation were carried out, namely at VW Autoeuropa, the company selected for the case study. The simulations covered the three types of analysis supported by the SIT: analysis of routine equipment-location problems; analysis of problems requiring the integration of information from other information systems, such as SAP and the Energy Management System (SGE); and analysis of complex problems using geoprocessing operations, in which case the SIT can be regarded as a decision-support system. The model created suggests that it can be extended to other types of infrastructure, namely water, sewerage, gas, and IT networks. The approach taken throughout this dissertation, through the inclusion of several types of models, turns it into a kind of guideline for the integration of GIS or other information systems in organisations.
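As a rough illustration of the UML-to-relational transposition described in the abstract, the sketch below uses hypothetical table and column names (not taken from the dissertation) to show how an equipment entity from the electrical-network subsystem might be mapped to relational tables, with a simplified point geometry so a GIS platform can locate assets on the plant floor plan.

    # Minimal sketch (hypothetical schema): transposing a UML entity from a
    # conceptual technical-information model into relational tables, in the
    # spirit of entity-attribute-relationship (EAR) mapping.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    -- 'Equipment' entity: attributes plus a simplified point geometry (x, y)
    -- so a GIS platform can place the asset on the factory floor plan.
    CREATE TABLE equipment (
        equipment_id INTEGER PRIMARY KEY,
        name         TEXT NOT NULL,
        sap_code     TEXT,            -- link to an external SAP record
        x            REAL,
        y            REAL
    );
    -- A 1:N relationship between switchboards and the equipment they feed
    -- becomes a foreign key, following the usual EAR transposition rules.
    CREATE TABLE feeder (
        feeder_id    INTEGER PRIMARY KEY,
        switchboard  TEXT NOT NULL,
        equipment_id INTEGER REFERENCES equipment(equipment_id)
    );
    """)

    # Routine location query: where is a given asset?
    conn.execute(
        "INSERT INTO equipment VALUES (1, 'Press line motor', 'SAP-4711', 120.5, 33.2)"
    )
    row = conn.execute(
        "SELECT name, x, y FROM equipment WHERE equipment_id = 1"
    ).fetchone()
    print(row)  # ('Press line motor', 120.5, 33.2)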
APA, Harvard, Vancouver, ISO, and other styles
