Dissertations / Theses on the topic 'Bayes Modelling'
Consult the top 40 dissertations / theses for your research on the topic 'Bayes Modelling.'
Dowman, Mike. "Colour Terms, Syntax and Bayes Modelling Acquisition and Evolution." Thesis, The University of Sydney, 2004. http://hdl.handle.net/2123/558.
Revie, Matthew. "Evaluation of Bayes linear modelling to support reliability assessment during procurement." Thesis, University of Strathclyde, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.487866.
Starobinskaya, Irina. "Structural modelling of operational risk in financial institutions : application of Bayesian networks and balanced scorecards to IT infrastructure risk modelling /." Berlin : Pro Business, 2008. http://d-nb.info/991725328/04.
Steinberg, Daniel. "An Unsupervised Approach to Modelling Visual Data." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9415.
Jones, Matthew James. "Bayes linear strategies for the approximation of complex numerical calculations arising in sequential design and physical modelling problems." Thesis, Durham University, 2017. http://etheses.dur.ac.uk/12529/.
Salge, Christoph. "Information theoretic models of social interaction." Thesis, University of Hertfordshire, 2013. http://hdl.handle.net/2299/13887.
Baker, Peter John. "Applied Bayesian modelling in genetics." Thesis, Queensland University of Technology, 2001.
Jaradat, Shatha. "OLLDA: Dynamic and Scalable Topic Modelling for Twitter: An Online Supervised Latent Dirichlet Allocation Algorithm." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177535.
Providing high-quality topic inference in today's large and dynamic corpora, such as Twitter, is a challenging task. It is particularly challenging because content in this environment consists of short texts and many abbreviations. The project proposes an enhancement of a popular online topic modelling algorithm for Latent Dirichlet Allocation (LDA), incorporating supervision to make it suitable for the Twitter context. This enhancement is motivated by the need for a single algorithm that achieves both objectives: analysing large volumes of documents, including new documents arriving in a stream, while at the same time achieving high-quality topic discovery in special environments such as Twitter. The proposed algorithm combines an online algorithm for LDA with a supervised variant of LDA, Labeled LDA. The performance and quality of the proposed algorithm are compared with those two algorithms. The results show that the proposed algorithm outperforms the supervised variant of LDA in both performance and quality, and achieves better quality than the online algorithm. These improvements make our algorithm an attractive choice for dynamic environments such as Twitter. An environment for analysing and labelling data was designed to prepare the datasets before running the experiments. Possible applications of the proposed algorithm are tweet recommendation and trend detection.
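The combination Jaradat describes (online LDA plus Labeled LDA's supervision) hinges on one constraint: a document may only be assigned topics from its own label set. A toy collapsed Gibbs sampler illustrating just that constraint, with an invented three-document corpus, is sketched below; it is not the thesis's OLLDA implementation.

```python
import random

random.seed(0)

# Toy labelled corpus (invented): each doc is (words, label set).
docs = [
    (["goal", "match", "goal"], {"sports"}),
    (["vote", "party", "vote"], {"politics"}),
    (["match", "vote"], {"sports", "politics"}),
]
topics = ["sports", "politics"]
alpha, beta = 0.1, 0.01
vocab = {w for words, _ in docs for w in words}

# Count tables for collapsed Gibbs sampling.
n_dt = [{t: 0 for t in topics} for _ in docs]      # doc-topic counts
n_tw = {t: {w: 0 for w in vocab} for t in topics}  # topic-word counts
n_t = {t: 0 for t in topics}                       # topic totals
assign = []

# Random initialisation, already restricted to each document's labels.
for d, (words, labels) in enumerate(docs):
    z = [random.choice(sorted(labels)) for _ in words]
    assign.append(z)
    for w, t in zip(words, z):
        n_dt[d][t] += 1; n_tw[t][w] += 1; n_t[t] += 1

for _ in range(50):  # Gibbs sweeps
    for d, (words, labels) in enumerate(docs):
        for i, w in enumerate(words):
            t_old = assign[d][i]
            n_dt[d][t_old] -= 1; n_tw[t_old][w] -= 1; n_t[t_old] -= 1
            cand = sorted(labels)  # Labeled LDA's label restriction
            weights = [
                (n_dt[d][t] + alpha) * (n_tw[t][w] + beta) / (n_t[t] + beta * len(vocab))
                for t in cand
            ]
            t_new = random.choices(cand, weights=weights)[0]
            assign[d][i] = t_new
            n_dt[d][t_new] += 1; n_tw[t_new][w] += 1; n_t[t_new] += 1

print(assign[0])  # every token in doc 0 stays on its only label, 'sports'
```

Documents with a single label are forced onto that topic; the mixed-label third document lets the counts decide between its two allowed topics.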
Wang, David I.-Chung. "Speaker diarization : "who spoke when"." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/59624/1/David_Wang_Thesis.pdf.
Crawford, B. "Modelling the capillary extrusion of silicone rubber bases." Thesis, Queen's University Belfast, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.398144.
Gensler, Sonja Skiera Bernd. "Heterogenität in der Präferenzanalyse : ein Vergleich von hierarchischen Bayes-Modellen und Finite-Mixture-Modellen /." Wiesbaden : Dt. Univ.-Verl, 2003. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=010431500&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.
Li, Tian Siong. "Modelling and kinetics estimation in gibbsite precipitation from caustic aluminate solutions." Thesis, Curtin University, 2000. http://hdl.handle.net/20.500.11937/1103.
Full textGonzález, González Larry Javier. "Modelling dynamics of RDF graphs with formal concept analysis." Tesis, Universidad de Chile, 2018. http://repositorio.uchile.cl/handle/2250/168144.
The Semantic Web is a web of data organised so that it can be manipulated directly by both humans and computers. RDF is the framework recommended by the W3C for representing information on the Semantic Web. RDF uses a graph-based data model that does not require any fixed schema, which makes RDF graphs easy to extend and integrate, but also difficult to query, understand, explore, summarise, etc. In this thesis, inspired by formal concept analysis (a subfield of applied mathematics based on the formalisation of concepts and conceptual hierarchies, called lattices), we propose a data-driven schema for large, heterogeneous RDF graphs. The main idea is that if we can define a formal context from an RDF graph, then we can extract its formal concepts and compute a lattice from them, which yields our proposed hierarchical schema for RDF graphs. We then propose an algebra over such lattices that allows us to (1) compute deltas between two lattices (for example, to summarise the changes from one version of a graph to another), and (2) add a delta to a lattice (for example, to project future changes). While this structure (and its associated algebra) may have several applications, we focus on the use case of modelling and predicting the dynamic behaviour of RDF graphs. We evaluate our methods by analysing how Wikidata changed over 11 weeks. We first extract the sets of properties associated with individual entities in a scalable manner using the MapReduce framework. These property sets (also known as characteristic sets) are annotated with their associated entities and, subsequently, with their cardinality. Second, we propose an algorithm to construct the lattice over the characteristic sets based on the subset relation. We evaluate the efficiency and scalability of both procedures.
Finally, we use the algebraic methods to predict how Wikidata's hierarchical schema will evolve. We contrast our results with a linear regression model as a baseline. Our proposal outperforms the linear model by a large margin, achieving a root mean square error 12 times smaller than that of the baseline. We conclude that, based on formal concept analysis, we can define and generate a hierarchical schema from an RDF graph, and that we can use such schemas to predict, at a high level, how these RDF graphs will evolve over time.
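The characteristic-set extraction and subset-ordered hierarchy described in González's abstract can be sketched in a few lines. The triples and property names below are invented for illustration; the thesis does this at Wikidata scale with MapReduce.

```python
from collections import defaultdict

# Toy RDF graph: (subject, property, object) triples (invented data).
triples = [
    ("alice", "name", "Alice"), ("alice", "birthPlace", "Paris"),
    ("bob", "name", "Bob"),
    ("paris", "name", "Paris"), ("paris", "country", "France"),
]

# Step 1: the characteristic set of an entity is the set of its properties.
char_sets = defaultdict(set)
for s, p, _ in triples:
    char_sets[s].add(p)

# Annotate each distinct characteristic set with its cardinality
# (how many entities share it).
counts = defaultdict(int)
for props in char_sets.values():
    counts[frozenset(props)] += 1

# Step 2: order the characteristic sets by the subset relation,
# keeping only the covering (direct) edges of the hierarchy.
nodes = list(counts)
edges = set()
for a in nodes:
    for b in nodes:
        if a < b:  # a is a proper subset of b
            # keep the edge only if no c sits strictly between a and b
            if not any(a < c < b for c in nodes):
                edges.add((a, b))

for a, b in sorted(edges, key=lambda e: (sorted(e[0]), sorted(e[1]))):
    print(sorted(a), "->", sorted(b))
```

Here `{name}` sits below both `{name, birthPlace}` and `{name, country}`, giving the two edges of a small hierarchy; deltas between two such edge sets are then what the lattice algebra manipulates.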
Abidi, Amna. "Imperfect RDF Databases : From Modelling to Querying." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2019. http://www.theses.fr/2019ESMA0008/document.
The ever-increasing interest in RDF data on the Web has led to several important research efforts to enrich the traditional RDF data formalism for exploitation and analysis purposes. The work of this thesis continues those efforts by addressing the issue of RDF data management in the presence of imperfection (untruthfulness, uncertainty, etc.). The main contributions of this dissertation are as follows. (1) We tackled the trusted RDF data model: we proposed to extend skyline queries over trust RDF data, which consists in extracting the most interesting trusted resources according to user-defined criteria. (2) We studied, via statistical methods, the impact of the trust measure on the Trust-skyline set. (3) We integrated into the structure of RDF data (i.e., the subject-property-object triple) a fourth element expressing a possibility measure, to reflect the user's opinion about the truth of a statement. To deal with possibility requirements, an appropriate language framework, named Pi-SPARQL, is introduced; it extends SPARQL into a possibility-aware query language. Finally, we studied a new skyline operator variant to extract possibilistic RDF resources that are dominated by no other resource in the sense of Pareto optimality.
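The skyline operator at the core of Abidi's contributions (1) and (3) is a Pareto-dominance filter: a resource is kept if no other resource is at least as good on every criterion and strictly better on at least one. A minimal sketch with invented resources and (trust, relevance) scores follows; the thesis's Trust-skyline algorithm is more involved.

```python
def dominates(a, b):
    """a dominates b if a is >= b on every criterion and > on at least one
    (all criteria maximised here, e.g. trust and a relevance score)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def skyline(resources):
    """Keep the resources dominated by no other resource (Pareto optimality)."""
    return [
        (name, vals) for name, vals in resources
        if not any(dominates(other, vals) for _, other in resources if other != vals)
    ]

# Hypothetical RDF resources scored by (trust, relevance).
resources = [("r1", (0.9, 0.4)), ("r2", (0.7, 0.7)),
             ("r3", (0.6, 0.6)), ("r4", (0.9, 0.2))]
print(skyline(resources))  # -> [('r1', (0.9, 0.4)), ('r2', (0.7, 0.7))]
```

Here r3 is dominated by r2 and r4 by r1, so only r1 and r2 survive; the quadratic pairwise scan is fine for a sketch but real skyline algorithms prune comparisons.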
Naumann, Felix [Verfasser], and Markus [Akademischer Betreuer] Bühner. "Nonparametrische Bayes-Inferenz in mehrdimensionalen Item Response Modellen / Felix Naumann ; Betreuer: Markus Bühner." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2018. http://d-nb.info/1174142766/34.
Chakroun, Chedlia. "Contribution à la définition d'une méthode de conception de bases de données à base ontologique." Phd thesis, ISAE-ENSMA Ecole Nationale Supérieure de Mécanique et d'Aérotechique - Poitiers, 2013. http://tel.archives-ouvertes.fr/tel-00904117.
Li, Tian Siong. "Modelling and kinetics estimation in gibbsite precipitation from caustic aluminate solutions." Curtin University of Technology, School of Applied Chemistry, 2000. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=13672.
A kinetics estimation study from repeated batch gibbsite precipitation data showed that the uncertainty in the experimental data, coupled with the error incurred by the kinetic parameter estimation procedure used, resulted in large uncertainties in the kinetics estimates. The influences of the experimental design and the kinetics estimation technique on the accuracy and precision of estimates of the nucleation, growth and agglomeration kinetics for the gibbsite precipitation system were investigated. It was found that the operating conditions have a greater impact on the uncertainties in the estimated kinetics than does the precipitator configuration. The kinetics estimates from the integral method, i.e. the non-linear parameter optimisation method, describe the gibbsite precipitation data better than those obtained by the differential method. However, both kinetics estimation techniques incurred significant uncertainties in the kinetics estimates, particularly toward the end of the precipitation runs, where the kinetics rates are slow. The uncertainties in the kinetics estimates are strongly correlated with the magnitude of the kinetics values and depend on the change in total crystal numbers and total crystal volume. Batch gibbsite precipitation data from an inhomogeneously-mixed precipitator were compared to data from a well-mixed precipitation system operated under the same operating conditions, i.e. supersaturation, seed charge, seed type, mean shear rate and temperature.
It was found that the gibbsite agglomeration kinetics estimates, and hence the product crystal size distributions (CSDs), were significantly different, but the gibbsite growth rates were similar. It was also found that a compartmental model approach cannot fully account for the differences in suspension hydrodynamics, and resulted in unsatisfactory CSD predictions for the inhomogeneously-mixed precipitator. This is attributed to the coupled effects of the local energy dissipation rate and solids-phase mixing on the agglomeration process.
Schaberreiter, T. (Thomas). "A Bayesian network based on-line risk prediction framework for interdependent critical infrastructures." Doctoral thesis, Oulun yliopisto, 2013. http://urn.fi/urn:isbn:9789526202129.
This thesis presents a cross-sector model for continuous, on-line risk estimation of critical infrastructures. The model makes it possible to inform interdependent services about possible hazards, and thus to stop or slow down cascading and cumulative failures. The model analyses the risk of a critical infrastructure service by examining the state of the service, measured through base measurements (for example, sensor or software states) within the critical infrastructure service components, and by monitoring the estimated risk of the critical infrastructure services on which the service depends (critical infrastructure service dependencies). The service risk is estimated probabilistically using Bayesian networks. In addition, the model allows the prediction of future risks in the short, medium and long term, and allows the modelling of interdependencies, which are generally difficult to represent in Bayesian networks. Representing a critical infrastructure as such a security model requires an analysis of the infrastructure. This thesis presents a critical infrastructure analysis method based on the PROTOS-MATINE dependency analysis methodology: critical infrastructures are represented as services, service dependencies and base measurements. Confidence indicators are also studied, with which the correctness of the risk analysis of an operational critical infrastructure service, as well as of the risk estimates received from dependencies, can be evaluated. A tool supporting all phases of implementing the critical infrastructure security model was developed. The model and the confidence indicators were validated in a proof-of-concept study, and initial results indicate the feasibility of the approach.
Kurzfassung In dieser Doktorarbeit wird ein Sektorübergreifendes Modell für die kontinuierliche Risikoabschätzung von kritische Infrastrukturen im laufenden Betrieb vorgestellt. Das Modell erlaubt es, Dienstleistungen, die in Abhängigkeit einer anderen Dienstleistung stehen, über mögliche Gefahren zu informieren und damit die Gefahr des Übergriffs von Risiken in andere Teile zu stoppen oder zu minimieren. Mit dem Modell können Gefahren in einer Dienstleistung anhand der Überwachung von kontinuierlichen Messungen (zum Beispiel Sensoren oder Softwarestatus) sowie der Überwachung von Gefahren in Dienstleistungen, die eine Abhängigkeit darstellen, analysiert werden. Die Abschätzung von Gefahren erfolgt probabilistisch mittels eines Bayessches Netzwerks. Zusätzlich erlaubt dieses Modell die Voraussage von zukünftigen Risiken in der kurzfristigen, mittelfristigen und langfristigen Zukunft und es erlaubt die Modellierung von gegenseitigen Abhängigkeiten, die im Allgemeinen schwer mit Bayesschen Netzwerken darzustellen sind. Um eine kritische Infrastruktur als ein solches Modell darzustellen, muss eine Analyse der kritischen Infrastruktur durchgeführt werden. In dieser Doktorarbeit wird diese Analyse durch die PROTOS-MATINE Methode zur Analyse von Abhängigkeiten unterstützt. Zusätzlich zu dem vorgestellten Modell wird in dieser Doktorarbeit eine Studie über Indikatoren, die das Vertrauen in die Genauigkeit einer Risikoabschätzung evaluieren können, vorgestellt. Die Studie beschäftigt sich sowohl mit der Evaluierung von Risikoabschätzungen innerhalb von Dienstleistungen als auch mit der Evaluierung von Risikoabschätzungen, die von Dienstleistungen erhalten wurden, die eine Abhängigkeiten darstellen. Eine Software, die alle Aspekte der Erstellung des vorgestellten Modells unterstützt, wurde entwickelt. 
Sowohl das präsentierte Modell zur Abschätzung von Risiken in kritischen Infrastrukturen als auch die Indikatoren zur Uberprüfung der Risikoabschätzungen wurden anhand einer Machbarkeitsstudie validiert. Erste Ergebnisse suggerieren die Anwendbarkeit dieser Konzepte auf kritische Infrastrukturen
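The probabilistic risk estimation in Schaberreiter's model can be illustrated with a minimal two-parent Bayesian network: a service's risk depends on a base measurement (a sensor alarm) and on the risk of an upstream service it depends on. All probabilities below are invented; the thesis uses richer networks plus confidence indicators.

```python
# Priors on the parents (invented numbers).
p_alarm = 0.1      # P(sensor alarm)
p_upstream = 0.2   # P(upstream service at risk)

# Conditional probability table: P(service at risk | alarm, upstream risk).
cpt = {
    (True, True): 0.95, (True, False): 0.6,
    (False, True): 0.5, (False, False): 0.05,
}

# Marginal risk of the service, by enumerating over the parents.
p_risk = sum(
    cpt[(a, u)]
    * (p_alarm if a else 1 - p_alarm)
    * (p_upstream if u else 1 - p_upstream)
    for a in (True, False)
    for u in (True, False)
)
print(round(p_risk, 4))  # -> 0.193
```

Chaining such nodes (the risk output of one service feeding the CPT of the next) is what lets hazards propagate through the dependency graph instead of being assessed in isolation.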
Morgan, Robert E. "Modelling the explanatory bases of export intention : an investigation of non-exporting firms within the United Kingdom." Thesis, Cardiff University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389982.
Full textPérez-Guevara, Martín. "Bases neuronales de binding dans des représentations symboliques : exploration expérimentale et de modélisation." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCB082/document.
The aim of this thesis is to understand how the brain computes and represents symbolic structures, such as those encountered in language or mathematics. The existence of parts in structures like morphemes, words and phrases has been established through decades of linguistic analysis and psycholinguistic experiments. Nonetheless, the neural implementation of the operations that support the extreme combinatorial nature of language remains unsettled. Some basic composition operations that allow the stable internal representation of sensory objects in the sensory cortex, like hierarchical pattern recognition, receptive fields, pooling and normalization, have started to be understood [5]. But models of the binding operations required for the construction of complex, possibly hierarchical, symbolic structures, in which precise manipulation of the components is a requisite, lack empirical testing and are still unable to predict neuroimaging signals. In this sense, bridging the gap between experimental neuroimaging evidence and the available modelling solutions to the binding problem is a crucial step for advancing our understanding of how the brain computes and represents symbolic structures. From the recognition of this problem, the goal of this PhD became the identification and experimental test of theories, based on neural networks, capable of dealing with symbolic structures, for which we could establish testable predictions against existing fMRI and ECoG neuroimaging measurements derived from language processing tasks. We identified two powerful but very different modelling approaches to the problem. The first is in the tradition of Vectorial Symbolic Architectures (VSA), which bring precise mathematical modelling to the operations required to represent structures in the neural units of artificial neural networks and to manipulate them.
This is Smolensky's formalism with tensor product representations (TPR) [10], which he demonstrates can encompass most of the previous work in VSA, like Synchronous Firing [9], Holographic Reduced Representations [8] and Recursive Auto-Associative Memories [1]. The second is the Neural Blackboard Architecture (NBA) developed by Marc De Kamps and Van der Velde [11], which importantly differentiates itself by proposing an implementation of binding by process, in circuits formed by neural assemblies of spiking neural networks. Instead of solving binding by assuming precise and particular algebraic operations on vectors, the NBA proposes the establishment of transient connectivity changes in a circuit structure of neural assemblies, such that the potential flow of neural activity allowed by working-memory mechanisms after a binding process takes place implicitly represents symbolic structures. The first part of the thesis develops in more detail the theory behind each of these models, and their relationship, from the common perspective of solving the binding problem. Both models are capable of addressing most of the theoretical challenges currently posed for the neural modelling of symbolic structures, including those presented by Jackendoff [3]. Nonetheless, they are very different: Smolensky's TPR relies mostly on spatially static considerations of artificial neural units, with explicit, completely distributed and spatially stable representations implemented through vectors, while the NBA relies on temporally dynamic considerations of biologically based spiking neural units, with implicit, semi-local and spatially unstable representations implemented through neural assemblies. For the second part of the thesis, we identified the superposition principle, which consists in the addition of the neural activations of each of the sub-parts of a symbolic structure, as one of the most crucial assumptions of Smolensky's TPR. (...)
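The tensor product representations at the heart of Smolensky's formalism have a compact numerical core: a structure is the superposition (sum) of outer products of filler and role vectors, and with orthonormal role vectors a filler is recovered by multiplying the tensor with the role vector. A toy sketch with invented vectors, not the thesis's models:

```python
def outer(f, r):
    """Outer product of a filler vector f and a role vector r."""
    return [[fi * rj for rj in r] for fi in f]

def add(A, B):
    """Elementwise sum of two tensors: superposition of bindings."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def unbind(T, r):
    # With orthonormal role vectors, T @ r recovers the bound filler.
    return [sum(t * rj for t, rj in zip(row, r)) for row in T]

# Fillers for two symbols 'A' and 'B'; orthonormal role vectors for the
# positions 'left' and 'right' (all toy values).
f_A, f_B = [1.0, 0.0, 2.0], [0.0, 3.0, 1.0]
r_left, r_right = [1.0, 0.0], [0.0, 1.0]

# Bind A to the left role and B to the right role, then superpose.
T = add(outer(f_A, r_left), outer(f_B, r_right))

print(unbind(T, r_left))   # -> [1.0, 0.0, 2.0], i.e. f_A
print(unbind(T, r_right))  # -> [0.0, 3.0, 1.0], i.e. f_B
```

The superposition principle mentioned at the end of the abstract is exactly the `add` step: the tensor for the whole structure is the sum of the tensors of its bound parts.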
Ramraj, Anitha. "Computational modelling of intermolecular interactions in bio, organic and nano molecules." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/computational-modelling-of-intermolecular-interactions-in-bio-organic-and-nano-molecules(7a41f3cd-1847-4ccf-8853-5fd8be2a2c15).html.
Tchouanguem Djuedja, Justine Flore. "Information modelling for the development of sustainable construction (MINDOC)." Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0133.
In previous decades, controlling environmental impact through life-cycle analysis has become a topical issue in the building sector. However, problems arise when experts try to exchange information for conducting various studies, such as the environmental assessment of a building. There is also heterogeneity between construction product databases, because they do not share the same characteristics and do not use the same basis to measure the environmental impact of each construction product. Moreover, there are still difficulties in exploiting the full potential of linking BIM, the Semantic Web and construction product databases, because the idea of combining them is relatively recent. The goal of this thesis is to increase the flexibility needed to assess a building's environmental impact in a timely manner. First, our research identifies gaps in interoperability in the AEC (Architecture, Engineering and Construction) domain. Then, we fill some of the shortcomings encountered in the formalisation of building information and the generation of building data in Semantic Web formats. We further promote efficient use of BIM throughout the building life cycle by integrating and referencing environmental data on construction products in a BIM tool. Moreover, semantics has been improved by enhancing a well-known building ontology (namely ifcOWL, the Industry Foundation Classes Web Ontology Language). Finally, we demonstrate our methodology on a case study of a small building.
Bertin, Benjamin. "Modélisation sémantique des bases de données d'inventaires en cycle de vie." Thesis, Lyon, INSA, 2013. http://www.theses.fr/2013ISAL0049/document.
Environmental impact assessment of goods and services is nowadays a major challenge, for both economic and ethical reasons. Life Cycle Assessment (LCA) provides a well-accepted methodology for modelling the environmental impacts of human activities. This methodology relies on the decomposition of a studied system into interdependent processes, in a step called Life Cycle Inventory. Every process has several environmental impacts, and the composition of those processes yields the cumulated environmental impact of the studied human activities. Several organizations provide process databases containing several thousands of processes with their interdependency links, which LCA practitioners use to carry out an LCA study. Understanding and auditing those databases requires analysing a huge number of processes and their dependency relations, since a database can contain thousands of processes linked together. We identified two problems that experts face when using those databases: organising the processes and their dependency relations to improve comprehensibility; and calculating the impacts and, when this is not possible, finding out why it is not feasible. In this thesis, we first show that there are semantic similarities between the processes and their dependency relations, and propose a new way to model the dependency relations in an inventory database: we semantically index the processes using an ontology and use a multi-layer model of the dependency relations, and we also study a declarative version of this multi-layer approach. Second, we propose a method to calculate the environmental impacts of the processes based on linear algebra and graph theory, and we study the conditions under which this calculation is feasible when the model is cyclic. We developed a prototype based on this approach that showed convincing results on different use cases.
We tested our prototype on a case study based on a data set extracted from the National Renewable Energy Laboratory's database, restricted to electricity production in the United States.
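The impact calculation Bertin describes (composing interdependent processes and cumulating their emissions) is conventionally a linear system: with a technology matrix A encoding inter-process consumptions, a final-demand vector f and emission intensities b, the cumulated impact is b·x, where A x = f. A toy two-process example with invented coefficients follows; the feasibility question for cyclic models corresponds to the solvability of this system.

```python
# Two processes: electricity and transport (invented coefficients).
# A[i][j] = net output of product i per unit run of process j;
# off-diagonal negatives are inter-process consumptions.
A = [[1.0, -0.2],   # transport consumes 0.2 kWh per unit of transport
     [-0.1, 1.0]]   # electricity consumes 0.1 t.km per kWh
f = [100.0, 0.0]    # final demand: 100 kWh of electricity
b = [0.5, 0.9]      # kg CO2 emitted per unit run of each process

# Solve A x = f for the scaling vector x (2x2 Cramer's rule);
# det != 0 is the feasibility condition for this cyclic dependency.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
x = [(f[0] * A[1][1] - A[0][1] * f[1]) / det,
     (A[0][0] * f[1] - A[1][0] * f[0]) / det]

impact = sum(bi * xi for bi, xi in zip(b, x))  # cumulated kg CO2
print(round(impact, 2))  # -> 60.2
```

Because the two processes consume each other (a cycle), each must run slightly more than the demand alone suggests, which is why the cumulated impact exceeds the naive 0.5 x 100 = 50 kg.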
Boyaval, Sébastien. "Mathematical modelling and numerical simulation in materials science." Phd thesis, Université Paris-Est, 2009. http://tel.archives-ouvertes.fr/tel-00499254.
Molléro, Roch. "Personnalisation robuste de modèles 3D électromécaniques du cœur. Application à des bases de données cliniques hétérogènes et longitudinales." Thesis, Côte d'Azur, 2017. http://www.theses.fr/2017AZUR4106/document.
Personalised cardiac modelling consists in creating virtual 3D simulations of real clinical cases to help clinicians predict the behaviour of the heart, or better understand some pathologies from the estimated values of biophysical parameters. In this work we first motivate the need for a consistent parameter-estimation framework, from a case study where uncertainty in myocardial fibre orientation leads to an uncertainty in the estimated parameters which is extremely large compared to their physiological variability. To build a consistent approach to parameter estimation, we then tackle the computational complexity of 3D models. We introduce an original multiscale 0D/3D approach for cardiac models, based on a multiscale coupling that approximates the outputs of a 3D model with a reduced "0D" version of the same model. From this coupling we then derive an efficient multifidelity optimisation algorithm for the 3D model. In a second step, we build more than 140 personalised 3D simulations, in the context of two studies involving the longitudinal analysis of cardiac function: on the one hand, the analysis of the long-term evolution of cardiomyopathies under therapy; on the other hand, the modelling of short-term cardiovascular changes during digestion. Finally, we present an algorithm to automatically detect and select observable directions in the parameter space from a set of measurements, and to compute consistent population-based prior probabilities in these directions, which can be used to constrain parameter estimation in cases where measurements are missing. This enables consistent parameter estimation in a large database of 811 cases with the 0D model, and 137 cases with the 3D model.
Ouertatani, Latifa. "L'enseignement-apprentissage des acides et des bases en Tunisie : une étude transversale du lycée à la première année d'université." Thesis, Bordeaux 2, 2009. http://www.theses.fr/2009BOR24925/document.
This work is a longitudinal study of the didactic transposition of the acid-base conceptual field, from upper-secondary school to the first university year in Tunisia. A literature review led to the formulation of the research questions and hypotheses, and to the choice of a theoretical framework for the analysis: Chevallard's anthropological theory of didactics, and the link between phenomena and their modelling in chemistry education. After a historical analysis of the construction of the reference knowledge, we studied the institutional relationship of pupils/students with the objects of knowledge at the various levels of education. We found a lack of rigour in the use of vocabulary and/or formalism, sometimes leading to the presentation of hybrid models liable to induce alternative conceptions in pupils/students or to cause difficulties of understanding. We showed that the tasks and techniques that recur across the various levels of education concern pH calculations and titrations. In the taught knowledge we identified some inaccuracies and inadequacies which may be at the origin of certain difficulties or alternative conceptions. Concerning the evolution of the knowledge learnt over the successive levels of instruction, we showed that teaching of the acid and base concepts, from the second year of upper-secondary school (grade 10) to the first university year, leads gradually from a "phenomenological model" to a "symbolic model" and then to a "pithy formula model", but contributes little to the integration of the Bronsted scientific model. We also showed that the main difficulties encountered by pupils/students, and their tendency to resort to alternative reasoning, lie in the difficulty of linking the three registers of chemistry: macroscopic, microscopic and symbolic.
Moreover, the identified conceptions and alternative reasoning seem to be due to a lack of rigour in the presentation of the taught content. Finally, considering the conceptual evolution across the secondary school-university transition, we showed that this transition does not create conditions favourable to a conceptual analysis of the knowledge seen in grade 12; at most, it allows some students to appropriate it anew. Proposals for improving this evolution were formulated.
Falk, Matthew Gregory. "Incorporating uncertainty in environmental models informed by imagery." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/33235/1/Matthew_Falk_Thesis.pdf.
Gamet, Arnaud. "Etude et mise en oeuvre de transitions passives aux interfaces circuit/boîtier pour les bases de temps intégrées résonantes." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0002.
Nowadays, the integration of oscillators into microcontrollers is a major industrial challenge, involving strong competition between the main actors in this market. Indeed, sine-wave oscillators are essential circuits, and are for the most part based on external crystal or MEMS resonators. More and more investigations are being carried out to integrate the resonant structure into the package, avoiding the external constraints that can restrict the performance of the oscillator. With this in mind, we studied in this work the electrical behaviour, in particular the inductive behaviour, of bond wires, the electrical connections between a die and its package. The main advantage of using this type of component is its low manufacturing cost. This passive component has been characterised using several measurement tools over a wide range of frequencies. An RLC model is presented, allowing analogue designers to use an electrical equivalent circuit in a standard CMOS technology. The integration of the passive component in a resonant cell has been demonstrated in a prototype.
Karlovets, Ekaterina. "Spectroscopie d'absorption à très haute sensitivité de différents isotopologues du dioxyde de carbone." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENY027/document.
This thesis is devoted to the investigation of the high-resolution near-infrared spectrum of carbon dioxide, and includes experimental measurements, theoretical modelling of line positions and intensities, and the refinement and extension of the set of effective operator parameters. The obtained results can be divided into three parts. In the first part, we present the equations for the q0J-, qJ-, q2J- and q3J-type parameters of the matrix elements of the effective dipole-moment operator in terms of the dipole-moment derivatives and force-field constants, derived by means of the contact transformation method for the following carbon dioxide isotopologues: 16O12C18O, 16O12C17O, 16O13C18O, 16O13C17O, 17O12C18O and 17O13C18O. Using these equations and the obtained isotopic relations for the molecular constants, we derived the effective dipole-moment parameters for the ∆P = 0, 2, 4, 6 and 8 series of transitions of the six above asymmetric carbon dioxide isotopologues (P = 2V1 + V2 + 3V3 is the polyad number, where V1, V2 and V3 are the vibrational quantum numbers). A comparison of the parameters reported in the literature and those obtained in this work is performed and discussed. The second part is devoted to the analysis of the room-temperature absorption spectrum of highly 18O-enriched carbon dioxide, recorded by very high sensitivity CW-Cavity Ring Down Spectroscopy between 5851 and 6990 cm^-1 (1.71-1.43 µm). Overall, 19526 transitions belonging to eleven isotopologues (12C16O2, 13C16O2, 16O12C18O, 16O12C17O, 16O13C18O, 16O13C17O, 12C18O2, 17O12C18O, 12C17O2, 13C18O2 and 17O13C18O) were assigned on the basis of the predictions of the effective Hamiltonian model. Line intensities of the weakest transitions are on the order of 2×10^-29 cm/molecule. The line positions were determined with an accuracy better than 1×10^-3 cm^-1, while the absolute line intensities are reported with an uncertainty better than 10%. All the identified bands correspond to the ∆P = 8, 9 and 10 series of transitions.
Accurate spectroscopic parameters for a total of 211 bands belonging to nine isotopologues were derived. Nine resonance perturbations of the upper-state rotational structure were identified for the 16O12C18O, 12C18O2, 13C18O2, 16O13C18O, 16O12C17O and 17O12C18O isotopologues. New sets of Hamiltonian parameters were obtained by global modelling of the line positions within the effective Hamiltonian approach. Using a similar approach, global fits of the obtained intensity values of the ∆P = 8, 9 and 10 series of transitions were used to derive the corresponding set of effective dipole-moment parameters.
In the third part, we report the analysis of the absorption spectrum of natural carbon dioxide by high-sensitivity CW-Cavity Ring Down Spectroscopy between 7909 and 8370 cm-1 (1.26-1.19 µm). Overall, 3425 transitions belonging to 61 bands of 12C16O2, 13C16O2, 16O12C18O, 16O12C17O, 16O13C18O and 16O13C17O were assigned. In the studied spectral region, all bands correspond to the ∆P = 11 series of transitions. Accurate spectroscopic parameters of the upper states of 57 bands were derived from a fit of the measured line positions (typical rms deviations of about 0.6×10-3 cm-1). Global fits of the obtained intensity values of the ∆P = 11 series of transitions were used to determine the corresponding set of effective dipole-moment parameters of the six studied isotopologues.
The large set of new observations obtained in this thesis has an important impact on the global modelling of high-resolution spectra of carbon dioxide. It has allowed the sets of effective dipole-moment and effective Hamiltonian parameters to be refined and extended, and has significantly improved the quality of the line positions and intensities in the most widely used spectroscopic databases of carbon dioxide (HITRAN, GEISA, CDSD).
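The band-by-band fits described in this abstract reduce, for each band, to a least-squares problem: upper-state term values are polynomial in J(J+1). The sketch below illustrates the idea on synthetic data with invented constants (not values from the thesis):

```python
import numpy as np

# Illustrative sketch only: synthetic line positions, invented constants.
# Upper-state term values of a linear-molecule band follow
# T(J) = Tv + B*J(J+1) - D*[J(J+1)]^2, so each band fit is linear least squares.
Tv_true, B_true, D_true = 6000.0, 0.39, 1.3e-7   # cm-1, assumed values
J = np.arange(0, 60)
x = J * (J + 1.0)
rng = np.random.default_rng(0)
obs = Tv_true + B_true * x - D_true * x**2 + rng.normal(0.0, 6e-4, J.size)

# Design matrix in the basis [1, J(J+1), -(J(J+1))^2]
A = np.column_stack([np.ones_like(x), x, -(x**2)])
(Tv, B, D), *_ = np.linalg.lstsq(A, obs, rcond=None)
rms = np.sqrt(np.mean((A @ np.array([Tv, B, D]) - obs) ** 2))
print(Tv, B, D, rms)  # rms comparable to the ~1e-3 cm-1 position accuracy
```

A real effective-Hamiltonian fit treats all bands and resonance couplings simultaneously; this shows only the single-band case.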
Oshurko, Ievgeniia. "Knowledge representation and curation in hierarchies of graphs." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN024.
Full text
The task of automatically extracting insights or building computational models from knowledge on complex systems greatly relies on the choice of an appropriate representation. This work makes an effort towards building a framework suitable for the representation of fragmented knowledge on complex systems and its semi-automated curation: continuous collation, integration, annotation and revision.
We propose a knowledge representation system based on hierarchies of graphs related by graph homomorphisms. Individual graphs situated in such hierarchies represent distinct fragments of knowledge, and the homomorphisms allow these fragments to be related. Their graphical structure can be used efficiently to express entities and their relations. We focus on the design of mathematical mechanisms, based on algebraic approaches to graph rewriting, for the transformation of individual graphs in hierarchies that maintain consistent relations between them. Such mechanisms provide a transparent audit trail, as well as an infrastructure for maintaining multiple versions of knowledge.
We describe how the developed theory can be used for building schema-aware graph databases that provide schema-data co-evolution capabilities. The proposed knowledge representation framework is used to build the KAMI (Knowledge Aggregation and Model Instantiation) framework for the curation of cellular signalling knowledge. The framework allows for semi-automated aggregation of individual facts on protein-protein interactions into knowledge corpora, reuse of this knowledge for the instantiation of signalling models in different cellular contexts, and generation of executable rule-based models.
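The central notion of this abstract can be made concrete in a few lines. The sketch below (illustrative names, not the actual KAMI API) checks that a node map between two graphs is a homomorphism, the relation that ties a data graph to its schema in a hierarchy of graphs:

```python
# Illustrative sketch (invented names, not the actual KAMI API): a graph
# homomorphism h: G -> T maps each node of G to a node of T so that every
# edge of G is sent to an edge of T.
def is_homomorphism(g_edges, t_edges, h):
    """True if the node map h preserves every edge of G in T."""
    return all((h[u], h[v]) in t_edges for (u, v) in g_edges)

G = {("egfr", "grb2"), ("grb2", "sos")}   # data: protein-protein interactions
T = {("protein", "protein")}              # schema: proteins may interact
h = {"egfr": "protein", "grb2": "protein", "sos": "protein"}
print(is_homomorphism(G, T, h))  # True: the data graph conforms to the schema
```

Rewriting a graph in such a hierarchy must then update the homomorphisms so that this check keeps holding, which is what the thesis's transformation mechanisms guarantee.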
Pham, Thi Tam Ngoc. "Caractérisation et modélisation du comportement thermodynamique du combustible RNR-Na sous irradiation." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4044/document.
Full text
For a burn-up higher than 7 at.%, volatile fission products (FP) such as Cs, I and Te, or metallic ones (Mo), are partially released from the fuel pellet and form a layer of compounds between the outer surface of the fuel and the inner surface of the stainless-steel cladding. This layer is called the JOG, the French acronym for Joint-Oxyde-Gaine.
My work focuses on two topics: the thermodynamic study of the (Cs-I-Te-Mo-O) system and the migration of these FP towards the gap to form the JOG. The thermodynamic study was the first step of my work. On the basis of a critical literature survey, the following systems were optimized by the CALPHAD method: Cs-Te, Cs-I and Cs-Mo-O. In parallel, an experimental study was undertaken to validate our CALPHAD modelling of the Cs-Te system. In a second step, the thermodynamic data from the CALPHAD modelling were introduced into the database used with the thermochemical computation code ANGE (a CEA code derived from the SOLGASMIX software) in order to calculate the chemical composition of the irradiated fuel as a function of burn-up and temperature. In a third and last step, the thermochemical computation code ANGE (Advanced Numeric Gibbs Energy minimizer) was coupled with the fuel performance code GERMINAL V2, which simulates the thermo-mechanical behaviour of SFR fuel.
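The thermochemical step in this abstract rests on Gibbs energy minimisation. Below is a toy sketch of the principle behind solvers such as ANGE/SOLGASMIX, for an ideal binary solution with invented chemical potentials (this is not Cs-Te data):

```python
import math

# Toy sketch of Gibbs-energy minimisation; all values are invented.
# For an ideal A-B solution, the equilibrium composition x minimises
# G(x) = x*muA + (1-x)*muB + R*T*(x*ln(x) + (1-x)*ln(1-x)).
R, T = 8.314, 1000.0        # J/(mol K), K
muA, muB = -5.0e4, -6.0e4   # assumed standard chemical potentials, J/mol

def gibbs(x):
    mixing = R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))
    return x * muA + (1 - x) * muB + mixing

# Coarse grid search; a real solver minimises under mass-balance constraints.
xs = [i / 10000 for i in range(1, 10000)]
x_eq = min(xs, key=gibbs)
print(x_eq)  # close to the analytic k/(1+k) with k = exp((muB-muA)/(R*T)), ≈ 0.231
```

A code like ANGE does the same minimisation over many species and phases simultaneously, with elemental abundances as constraints.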
Vandi, Matteo. "Valutazione della sicurezza e progettazione di interventi di adeguamento statico e funzionale del cavalcaferrovia di Strada dell’Alpo a Verona." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.
Find full text
Platanakis, Emmanouil, and C. Sutcliffe. "Asset-liability modelling and pension schemes: the application of robust optimization to USS." 2015. http://hdl.handle.net/10454/8146.
Full text
This paper uses a novel numerical optimization technique – robust optimization – that is well suited to solving the asset–liability management (ALM) problem for pension schemes. It requires the estimation of fewer stochastic parameters, reduces estimation risk and adopts a prudent approach to asset allocation. This study is the first to apply it to a real-world pension scheme, and the first ALM model of a pension scheme to maximize the Sharpe ratio. We disaggregate pension liabilities into three components – active members, deferred members and pensioners – and transform the optimal asset allocation into the scheme's projected contribution rate. The robust optimization model is extended to include liabilities and used to derive optimal investment policies for the Universities Superannuation Scheme (USS), benchmarked against the Sharpe and Tint, Bayes–Stein and Black–Litterman models as well as the actual USS investment decisions. Over a 144-month out-of-sample period, robust optimization is superior to the four benchmarks across 20 performance criteria and has a remarkably stable asset allocation – essentially fix-mix. These conclusions are supported by six robustness checks.
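The Sharpe-maximisation step mentioned in this abstract has, in its classical unconstrained form, a closed-form solution: the tangency portfolio w ∝ Σ⁻¹μ. A minimal sketch with invented figures (the textbook benchmark step, not the paper's robust-optimization model):

```python
import numpy as np

# Minimal max-Sharpe (tangency) sketch with invented figures: given the
# vector of mean excess returns mu and covariance S, the Sharpe-maximising
# weights are proportional to S^{-1} mu, normalised to sum to one.
mu = np.array([0.05, 0.03, 0.04])            # assumed mean excess returns
S = np.array([[0.040, 0.006, 0.004],
              [0.006, 0.010, 0.002],
              [0.004, 0.002, 0.023]])         # assumed covariance matrix
w = np.linalg.solve(S, mu)
w /= w.sum()
sharpe = (w @ mu) / np.sqrt(w @ S @ w)
print(w, sharpe)
```

Robust optimization replaces the point estimates mu and S with uncertainty sets, which is precisely what reduces the estimation risk the abstract refers to.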
Schmidt, Philip J. "Addressing the Uncertainty Due to Random Measurement Errors in Quantitative Analysis of Microorganism and Discrete Particle Enumeration Data." Thesis, 2010. http://hdl.handle.net/10012/5596.
Full text
Mokobane, Reshoketswe. "Application of small area estimation techniques in modelling accessibility of water, sanitation and electricity in South Africa : the case of Capricorn District." Thesis, 2019. http://hdl.handle.net/10386/2945.
Full text
This study presents the application of direct and indirect methods of Small Area Estimation (SAE) techniques. The study is aimed at estimating the trends and the proportions of households accessing water, sanitation and electricity for lighting in small areas of the Limpopo Province, South Africa. The study modified Statistics South Africa's General Household Survey series 2009-2015 and Census 2011 data. The option categories of three variables – water, sanitation and electricity for lighting – were re-coded. Empirical Bayes and Hierarchical Bayes models known as Markov Chain Monte Carlo (MCMC) methods were used to refine estimates in SAS. The Census 2011 data aggregated in 'Supercross' were used to validate the results obtained from the models. The SAE methods were applied to account for the census undercoverage counts and rates. It was found that electricity services were more prioritised than water and sanitation in the Capricorn District of the Limpopo Province. The greatest challenge, however, lies in the poor provision of sanitation services in the country, particularly in small rural areas. The key point is to suggest policy considerations to the South African government for the future equitable provisioning of water, sanitation and electricity services across the country.
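The Empirical Bayes idea behind the SAE methods in this abstract can be illustrated with a simple shrinkage of area-level proportions toward the overall mean; all figures and the prior strength below are invented:

```python
# Illustrative empirical-Bayes shrinkage as used in small area estimation:
# each area's direct proportion is pulled toward the overall mean, with
# less shrinkage for areas with larger samples. All figures are invented.
areas = {"A": (40, 100), "B": (5, 10), "C": (300, 500)}  # (with access, n)
p_overall = sum(s for s, n in areas.values()) / sum(n for s, n in areas.values())
m = 50  # assumed prior "sample size" controlling shrinkage strength

eb = {k: (s + m * p_overall) / (n + m) for k, (s, n) in areas.items()}
direct = {k: s / n for k, (s, n) in areas.items()}
print(direct, eb)  # the small-n area B moves most toward the overall mean
```

Hierarchical Bayes methods estimate the prior strength from the data themselves (here via MCMC in SAS) rather than fixing it in advance.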
Zhang, Xiaohe. "Exploring sediment dynamics in coastal bays by numerical modelling and remote sensing." Thesis, 2020. https://hdl.handle.net/2144/42052.
Full text
Cordeiro, Margarida Machado. "Permeation of Weak Acids and Bases Through Lipid Bilayers – Modelling and Validation of a pH Variation Assay." Master's thesis, 2022. http://hdl.handle.net/10316/99393.
Full text
Drug discovery and development is an iterative and very complex process. Poor absorption, distribution, clearance, efficacy and safety of drug candidates are the major pitfalls in the development of new therapies. Lipid membranes represent the main barrier to the free diffusion of solutes and determine the availability of these compounds in the tissues. Predicting the rate at which solutes permeate in vivo barriers is crucial, and several in vitro assays are valuable for this goal. The pH-variation assay is particularly relevant because it allows the permeation of weak acids and bases to be followed even when they do not exhibit optical properties. However, there are some artefacts, its validity is not widely accepted, and the permeability coefficients are not always consistent with those from other methods.
In this work, a kinetic model was developed for the permeation of weak acids and bases through lipid membrane barriers that explicitly considers the two membrane leaflets. Simulations of these processes identified some experimental design principles required to avoid compromising the accuracy of the method in the prediction of permeability coefficients: the assay must employ large vesicles, and the pH variation must stay below 0.25 units. These conclusions were reached by analysing the effect of the topology of the system, solute lipophilicity, and solute and fluorescent pH-probe concentrations on the occupancy numbers per vesicle, and by comparing the dynamics of solute accumulation and fluorescence variation. When analysing the effect of these parameters on the permeability coefficient, it was found that the widely used equation Papp = β × r/3 is inappropriate for assessing the permeability coefficient of drug-like weak acids and bases.
This results from the failure, under the assay conditions, of several assumptions and approximations considered in the derivation of this equation.
This work also examined the effect of several parameters (flip-flop rate constant, solute pKa, proton and potassium permeabilities) on the kinetics of solute permeation and the resulting pH variation inside the vesicles. The permeation of weak acids leads to a fast decrease of the pH, followed by a slow recovery to the initial value; a symmetric effect is observed for the permeation of weak bases. If only the neutral solute species can permeate the membrane, solute equilibration is well described by a mono-exponential function. However, if permeation of the charged species is included (albeit as a slower process), the accumulation of solute may follow biphasic kinetics. In this case, the apparent solute permeability should be calculated from a weighted characteristic constant (α1β1 + α2β2). This cannot, however, be done accurately from the fluorescence dynamics, because there is no direct relationship between the characteristic constants and the pre-exponential terms. Using only the characteristic constant of the fast process overestimates the solute permeability coefficient. The slow phase of solute accumulation is influenced not only by the permeability of the charged solute species, but also by the permeability of other charged species in solution, such as H+/OH‒ and the ions responsible for the dissipation of the electrostatic potentials generated by charge imbalance.
Some pH equilibration experiments were performed to estimate the permeability of H+/OH‒ and to assess the effect of valinomycin, an ionophore with high specificity for K+. However, these objectives were not fully achieved, as the experimental results differed markedly from the time courses predicted by our kinetic model.
We concluded that the main reason for the discrepancies was an additional pH buffering capacity inside the vesicles, possibly due to the presence of carbonic acid. The increased buffering capacity means that a larger amount of H+/OH‒ must permeate to achieve pH equilibration, which in turn leads to the development of a larger charge imbalance between the aqueous media inside and outside the vesicles. The electrostatic potential thus generated hinders the movement of additional H+/OH‒ and prevents pH equalisation. Full equalisation requires the countermovement of additional charges, such as K+ in the presence of valinomycin, which explains the strong effect of valinomycin observed experimentally.
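The biphasic-kinetics point in this abstract can be illustrated numerically: for an accumulation curve S(t) = 1 − α1·e^(−β1·t) − α2·e^(−β2·t), the initial slope equals the weighted constant α1β1 + α2β2, the quantity the apparent permeability should be derived from. All rate constants below are invented:

```python
import math

# Numerical illustration (invented constants): biphasic solute accumulation
# S(t) = 1 - a1*exp(-b1*t) - a2*exp(-b2*t), a1 + a2 = 1. The initial slope
# equals the weighted constant a1*b1 + a2*b2; using the fast constant b1
# alone overestimates the apparent permeation rate.
a1, b1 = 0.8, 1.0    # fast phase (neutral-species permeation)
a2, b2 = 0.2, 0.01   # slow phase (charged species / counter-ion limited)

def S(t):
    return 1.0 - a1 * math.exp(-b1 * t) - a2 * math.exp(-b2 * t)

weighted = a1 * b1 + a2 * b2
slope0 = (S(1e-6) - S(0.0)) / 1e-6  # numerical initial slope
print(weighted, slope0)  # both ≈ 0.802, well below b1 = 1.0
```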
FCT
Link, Roman Mathias. "The role of tree height and wood density for the water use, productivity and hydraulic architecture of tropical trees." Thesis, 2020. http://hdl.handle.net/21.11130/00-1735-0000-0005-13EF-9.
Full text
Almeida, António Manuel Galinho Pires de. "Modelo de sistemas de informação técnica baseado numa plataforma SIG." Master's thesis, 2006. http://hdl.handle.net/10362/3642.
Full text
This work develops a conceptual model of a Technical Information System (SIT) based on a GIS platform, applied to industry, more specifically to the electrical network of a factory, while also presenting the methodology to follow when integrating the model into an organisation and the advantages such a tool can provide. The conceptual model of the SIT is first specified and documented in UML; in this process two subsystems were identified in its constitution, which were subsequently transposed to a GIS platform and to a relational DBMS platform, using the entity-attribute-relationship (EAR) model of [CHEN, 1976] and the transposition rules of [BENNET et al., 1999]. Once the transposition of the model to the GIS and DBMS platforms was complete, simulations of its applicability to a large organisation were carried out, namely at VW Autoeuropa, the company selected for the case study. The simulations covered the three types of analysis supported by the SIT: analysis of routine equipment-location problems; analysis of problems requiring the integration of information from other information systems, such as SAP and the Energy Management System (SGE); and analysis of complex problems using geoprocessing operations, in which case the SIT can be regarded as a decision-support system. The model created suggests the possibility of extension to other types of infrastructure, namely water, sanitation, gas and IT networks. The approach taken throughout this dissertation, through the inclusion of several types of models, makes it a kind of guideline for the integration of GIS or other information systems in organisations.