Dissertations / Theses on the topic 'Rule modelling'

To see the other types of publications on this topic, follow the link: Rule modelling.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Rule modelling.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Honorato-Zimmer, Ricardo. "On a thermodynamic approach to biomolecular interaction networks." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28904.

Full text
Abstract:
We explore the direct and inverse problem of thermodynamics in the context of rule-based modelling. The direct problem can be concisely stated as obtaining a set of rewriting rules and their rates from the description of the energy landscape such that their asymptotic behaviours as t → ∞ coincide. To tackle this problem, we describe an energy function as a finite set of connected patterns P and an energy cost function e which associates real values to each of these energy patterns. We use a finite set of reversible graph rewriting rules G to define the qualitative dynamics by showing which transformations are possible. Given G and P, we construct a finite set of rules Gp which i) has the same qualitative transition system as G and ii) when equipped with rates according to e, defines a continuous-time Markov chain that has detailed balance with respect to the invariant probability distribution determined by the energy function. The construction relies on a technique for rule refinement described in earlier work and allows us to represent thermodynamically consistent models of biochemical interaction networks in a concise manner. The inverse problem, on the other hand, is to i) check whether a rule-based model has an energy function that describes its asymptotic behaviour and, if so, ii) obtain the energy function from the graph rewriting rules and their rates. Although this problem is known to be undecidable in the general case, we find two suitable subsets of Kappa, our rule-based modelling framework of choice, where this question can be answered positively and the form of their energy functions described analytically.
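As a rough illustration of the detailed-balance idea underlying this construction (a hypothetical Python sketch, not code from the thesis): if every reversible rule is given forward and backward rates whose ratio is exp(-ΔE), with ΔE the change in the energy function it induces, the resulting continuous-time Markov chain is in detailed balance with the Boltzmann distribution over the energy landscape.

```python
import math

def rates_from_energy(delta_e, k0=1.0):
    """Assign forward/backward rates to a reversible rewriting rule so that
    k_forward / k_backward = exp(-delta_e), as detailed balance requires.
    delta_e is the energy change E(target) - E(source) caused by the rule."""
    k_forward = k0 * math.exp(-0.5 * delta_e)
    k_backward = k0 * math.exp(+0.5 * delta_e)
    return k_forward, k_backward

def satisfies_detailed_balance(energy, rates, tol=1e-9):
    """Check detailed balance for a dict of transition rates rates[(x, y)]
    against the Boltzmann weights exp(-energy[x])."""
    for (x, y), kxy in rates.items():
        kyx = rates.get((y, x), 0.0)
        if abs(math.exp(-energy[x]) * kxy - math.exp(-energy[y]) * kyx) > tol:
            return False
    return True

# Toy example: two states connected by one reversible rule.
energy = {"A": 0.0, "B": 1.2}
kf, kb = rates_from_energy(energy["B"] - energy["A"])
assert satisfies_detailed_balance(energy, {("A", "B"): kf, ("B", "A"): kb})
```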
APA, Harvard, Vancouver, ISO, and other styles
2

McIntosh, Brian S. "Rule-based modelling of vegetation dynamics." Thesis, University of Edinburgh, 2002. http://hdl.handle.net/1842/12619.

Full text
Abstract:
The corpus of available vegetation knowledge is characterised by its fragmented form and by the way in which relationships between different ecological quantities tend to be expressed non-quantitatively. Much of the corpus is only held informally and composed of deterministic factual or conditional statements. Despite its form, this thesis demonstrates that available ecological knowledge can be usefully employed for predictive modelling of vegetation dynamics under different conditions. The thesis concentrates on modelling Mediterranean vegetation dynamics. Using a mixture of concepts and techniques from deterministic state transition and functional attributes modelling, Qualitative Reasoning, and knowledge-based systems, three ontologically distinct modelling systems are developed to demonstrate the utility of available knowledge for modelling vegetation dynamics. All three systems use declarative, rule-based approaches based on first-order logic and are composed of a set of representational constructs along with a separate system for reasoning with these constructs to make predictions. A method for reasoning about change in non-quantitative model variables is developed based upon time and direction of change. This ‘temporal reasoning system’ provides a solution to the state variable problem and may offer a general way of modelling with non-quantitative knowledge. To illustrate, a different model of Mediterranean vegetation dynamics is developed and run under different conditions for each system. The capabilities and possible problems of each system in terms of ecological validity, knowledge representation and reasoning are discussed. The general utility of rule-based approaches to modelling vegetation dynamics is also discussed, along with the implications of the modelling systems developed for the activities of decision-support and ecological theory development.
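To make the flavour of such a rule-based, direction-of-change formalism concrete, here is a deliberately toy sketch (the states, rules and variable names are invented for illustration and are not the thesis's systems): qualitative variables carry an ordinal level and a direction of change, rules set directions from conditions, and levels then move one step in their direction.

```python
# Toy qualitative update: entirely hypothetical levels and rules.
LEVELS = ["bare", "grass", "shrub", "forest"]

def step(state, rules):
    directions = dict(state["direction"])
    for condition, variable, new_direction in rules:
        if condition(state):
            directions[variable] = new_direction
    levels = {}
    for var, level in state["level"].items():
        i = LEVELS.index(level) + directions.get(var, 0)
        levels[var] = LEVELS[max(0, min(len(LEVELS) - 1, i))]
    return {**state, "level": levels, "direction": directions}

rules = [
    (lambda s: not s["grazing"], "vegetation", +1),  # succession proceeds when ungrazed
    (lambda s: s["grazing"], "vegetation", -1),      # grazing pushes cover back
]
state = {"level": {"vegetation": "grass"}, "direction": {"vegetation": 0}, "grazing": False}
print(step(step(state, rules), rules))  # grass -> shrub -> forest
```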
APA, Harvard, Vancouver, ISO, and other styles
3

Arabikhan, Farzad. "Telecommuting choice modelling using fuzzy rule based networks." Thesis, University of Portsmouth, 2017. https://researchportal.port.ac.uk/portal/en/theses/telecommuting-choice-modelling-using-fuzzy-rule-based-networks(b088b779-8daa-441e-b0a0-7c9641e1f08b).html.

Full text
Abstract:
Telecommuting, as an approach to transportation demand management, has attracted considerable attention in recent years. Technology has enabled this growing trend, and more and more companies and families are taking advantage of it. Adopting telecommuting is a multidimensional decision-making process that involves different aspects of life such as family, work and many more. Modelling telecommuting enables employers and employees to understand the main factors that influence the decision to adopt telecommuting. The role of subjective knowledge and linguistic variables cannot be ignored in the human decision-making process, and Fuzzy Logic has proved to be a powerful tool for knowledge-based decision-making systems. Telecommuting as a multifaceted decision relies more on subjective knowledge than on precise numbers; thus, fuzzy logic is applied to modelling telecommuting. Moreover, the complex internal decision-making process for adopting telecommuting reveals the role of various factors at different levels that influence the outcome of the decision. Therefore, a Fuzzy Rule Based Network, as a novel approach to modelling complex systems, is utilised. Using a Fuzzy Network as a transparent approach makes it possible to understand the role of external inputs, intermediate variables and their interactions in modelling telecommuting. According to choice theory, and in order to find the maximum utilities of alternatives in telecommuting, the Fuzzy Network is tuned and optimised in terms of rules and membership functions using a Genetic Algorithm and the Fuzzy c-means clustering method. In addition, to reduce the size of the Fuzzy Network, an input and branch selection method is proposed. Linguistic composition of the nodes in the Fuzzy Network is also performed by an efficient method to reduce computational costs. Results highlight the most important external and intermediate variables as well as decision rules in describing the suitability of telecommuting. A Multinomial Logit model is also developed as a benchmark to compare model performance; the comparison shows the superiority of the proposed method on the criteria of transparency, efficiency and interpretability. The main contributions of this research are modelling the suitability of telecommuting using a Fuzzy Rule Based Network, developing a fuzzy utility model using a Fuzzy Rule Based Network, tuning the Fuzzy Rule Based Network using a Genetic Algorithm, input and branch selection for the Fuzzy Rule Based Network, and finally proposing an efficient method for linguistic composition of the Rule Based Network.
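As a rough sketch of the kind of fuzzy rule evaluation such a network is built from (all variable names, membership functions and output levels below are invented for illustration, not taken from the thesis): crisp inputs are fuzzified with triangular membership functions, antecedents are combined with a min t-norm, and rule outputs are aggregated by a weighted average (a zero-order Sugeno scheme).

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def evaluate_rules(inputs, rules):
    """Zero-order Sugeno inference: each rule is (antecedents, output_level),
    where antecedents map an input name to a membership function."""
    num, den = 0.0, 0.0
    for antecedents, output_level in rules:
        # Firing strength: min t-norm over all antecedent memberships.
        strength = min(mf(inputs[name]) for name, mf in antecedents.items())
        num += strength * output_level
        den += strength
    return num / den if den > 0 else 0.0

# Illustrative rules for a "suitability of telecommuting" score in [0, 1].
rules = [
    ({"commute_time": lambda x: tri(x, 30, 90, 150),
      "task_autonomy": lambda x: tri(x, 0.5, 1.0, 1.5)}, 0.9),  # long commute & high autonomy
    ({"commute_time": lambda x: tri(x, -30, 0, 40)}, 0.2),       # short commute
]
print(evaluate_rules({"commute_time": 75, "task_autonomy": 0.8}, rules))
```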
APA, Harvard, Vancouver, ISO, and other styles
4

Bhanthumnavin, Kanyarat. "Macroeconomic modelling and monetary policy rule optimization for Thailand." Thesis, University of Oxford, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.416531.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Livesey, Gillian Elizabeth. "Advancing egress complexity in support of rule-based evacuation modelling." Thesis, University of Ulster, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.288821.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wijesekera, Dhammika Harindra. "A form based meta-schema for information and knowledge elicitation." Swinburne University of Technology, 2006. http://adt.lib.swin.edu.au./public/adt-VSWT20060904.123024.

Full text
Abstract:
Knowledge is considered important for the survival and growth of an enterprise. Currently knowledge is stored in various places including the bottom drawers of employees. The human being is considered to be the most important knowledge provider. Over the years knowledge based systems (KBS) have been developed to capture and nurture the knowledge of domain experts. However, such systems were considered to be separate and different from traditional information systems development. Many KBS development projects have failed. The main causes for such failures have been recognised as the difficulties associated with the process of knowledge elicitation, in particular the techniques and methods employed. On the other hand, the main emphasis of information systems development has been in the areas of data and information capture relating to transaction based systems. For knowledge to be effectively captured and nurtured it is necessary for knowledge to be part of the information systems development activity. This thesis reports on a process of investigation and analysis conducted into the areas of information, knowledge and the overlapping areas. This research advocates a hybrid approach, where knowledge and information capture are considered as one in a unified environment. A meta-schema design based on Formal Object Role Modelling (FORM), independent of implementation details, is introduced for this purpose. This is considered to be a key contribution of this research activity. Both information and knowledge are expected to be captured through this approach. Meta data types are provided for the capture of business rules and they form part of the knowledge base of an organisation. The integration of knowledge with data and information is also described. XML is recognised by many as the preferred data interchange language and it is investigated for the purpose of rule interchange. This approach is expected to enable organisations to interchange business rules and their meta-data, in addition to data and their schema. During interchange rules can be interpreted and applied by receiving systems, thus providing a basis for intelligent behaviour. With the emergence of new technologies such as the Internet the modelling of an enterprise as a series of business processes has gained prominence. Enterprises are moving towards integration, establishing well-described business processes within and across enterprises, to include their customers and suppliers. The purpose is to derive a common set of objectives and benefit from potential economic efficiencies. The suggested meta-schema design can be used in the early phases of requirements elicitation to specify, communicate, comprehend and refine various artefacts. This is expected to encourage domain experts and knowledge analysts to work towards describing each business process and their interactions. Existing business processes can be documented and business efficiencies can be achieved through a process of refinement. The meta-schema design allows for a ‘systems view’ and sharing of such views, thus enabling domain experts to focus on their area of specialisation whilst having an understanding of other business areas and their facts. The design also allows for synchronisation of the mental models of experts and the knowledge analyst. This has been a major issue with KBS development and one of the main reasons for the failure of such projects. The intention of this research is to provide a facility to overcome this issue. The natural language based FORM encourages verbalisation of the domain, hence increasing the understanding and comprehension of available business facts.
APA, Harvard, Vancouver, ISO, and other styles
7

Wilson-Kanamori, John Roger. "Defining complex rule-based models in space and over time." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/11687.

Full text
Abstract:
Computational biology seeks to understand complex spatio-temporal phenomena across multiple levels of structural and functional organisation. However, questions raised in this context are difficult to answer without modelling methodologies that are intuitive and approachable for non-expert users. Stochastic rule-based modelling languages such as Kappa have been the focus of recent attention in developing complex biological models that are nevertheless concise, comprehensible, and easily extensible. We look at further developing Kappa, in terms of how we might define complex models in both the spatial and the temporal axes. In defining complex models in space, we address the assumption that the reaction mixture of a Kappa model is homogeneous and well-mixed. We propose evolutions of the current iteration of Spatial Kappa to streamline the process of defining spatial structures for different modelling purposes. We also verify the existing implementation against established results in diffusion and narrow escape, thus laying the foundations for querying a wider range of spatial systems with greater confidence in the accuracy of the results. In defining complex models over time, we draw attention to how non-modelling specialists might define, verify, and analyse rules throughout a rigorous model development process. We propose structured visual methodologies for developing and maintaining knowledge base data structures, incorporating the information needed to construct a Kappa rule-based model. We further extend these methodologies to deal with biological systems defined by the activity of synthetic genetic parts, with the hope of providing tractable operations that allow multiple users to contribute to their development over time according to their area of expertise. Throughout the thesis we pursue the aim of bridging the divide between information sources such as literature and bioinformatics databases and the abstracting decisions inherent in a model. We consider methodologies for automating the construction of spatial models, providing traceable links from source to model element, and updating a model via an iterative and collaborative development process. By providing frameworks for modellers from multiple domains of expertise to work with the language, we reduce the entry barrier and open the field to further questions and new research.
APA, Harvard, Vancouver, ISO, and other styles
8

Costa, Jutglar Gonçal. "Integration of building product data with BIM modelling: a semantic-based product catalogue and rule checking system." Doctoral thesis, Universitat Ramon Llull, 2017. http://hdl.handle.net/10803/450865.

Full text
Abstract:
In the AEC industry (Architecture, Engineering, Construction), it is increasingly necessary to automate the exchange of information in processes involving BIM (Building Information Modelling) technology. The experts involved in these processes (architects, engineers, builders, etc.) use different types of applications to carry out specific tasks according to their scope of knowledge and their responsibility. Although each of these applications separately fulfils its function, interoperability between them remains a problem to be solved. In these processes it is also necessary to access data from different sources and different formats to integrate them and make them accessible to BIM applications. This research investigates the difficulties that underlie these two problems – interoperability between applications and the integration of information in the context of processes based on BIM technologies – and propose solutions to overcome them. In the first place, the inefficiencies that currently exist in the exchange of information between systems and applications used in AEC projects using BIM technology have been examined. Once identified, our objective has been to overcome them through the application of Semantic Web technologies. To do this, the ability of these technologies to integrate heterogeneous data from different sources and domains using ontologies is analysed. Finally, we considered their application in the development of AEC projects. From this previous study, it has been concluded that developed solutions to improve interoperability between BIM and other applications using semantic technologies are still far from providing a definitive solution to the problem of interoperability. In order to propose solutions based on the Semantic Web for the integration of data in processes involving BIM technologies, the research has been limited to a case study: the creation of a catalogue of precast concrete components with semantic technologies which are compatible with BIM technology. In the context of this case study, we have developed methods and tools to (1) integrate data on components and constructive products in a catalogue with semantic content compatible with BIM technology, and (2) apply the rules of semantic inference to examine the components used on a BIM model and provide compatible products extracted from the catalogue. The feasibility of the methods and tools has been demonstrated in an application case: pre-dimensioned structural elements that comply with structural safety regulations and the automated search of alternative components in the catalogue. Despite demonstrating the potential of Semantic Web technologies to improve BIM processes by integrating external data, there are still some challenges to overcome, including the shortage of data in RDF format and the difficulty in the maintenance of the links between the data when they change. The results obtained in this research could continue to be developed in two directions (1) expanding the catalogue to new products and integrating new data sources related to them and (2) creating tools that facilitate the creation and maintenance of inference rules.
APA, Harvard, Vancouver, ISO, and other styles
9

Amdal, Ingunn. "Learning pronunciation variation: A data-driven approach to rule-based lexicon adaptation for automatic speech recognition." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-1560.

Full text
Abstract:

To achieve a robust system, the variation seen for different speaking styles must be handled. An investigation of standard automatic speech recognition techniques for different speaking styles showed that lexical modelling using general-purpose variants gave small improvements, but that the errors differed compared with using only one canonical pronunciation per word. Modelling the variation using the acoustic models (using context dependency and/or speaker dependent adaptation) gave a significant improvement, but the resulting performance for non-native and spontaneous speech was still far from that for read speech.

In this dissertation a complete data-driven approach to rule-based lexicon adaptation is presented, where the effect of the acoustic models is incorporated in the rule pruning metric. Reference and alternative transcriptions were aligned by dynamic programming, but with a data-driven method to derive the phone-to-phone substitution costs. The costs were based on the statistical co-occurrence of phones, association strength. Rules for pronunciation variation were derived from this alignment. The rules were pruned using a new metric based on acoustic log likelihood. Well trained acoustic models are capable of modelling much of the variation seen, and using the acoustic log likelihood to assess the pronunciation rules prevents the lexical modelling from adding variation already accounted for as shown for direct pronunciation variation modelling.
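A minimal sketch of the alignment step described above (illustrative only; in the thesis the substitution costs come from phone co-occurrence statistics, whereas the toy cost function here is invented): a standard dynamic-programming alignment of a reference and an alternative phone transcription in which the substitution cost is supplied externally rather than fixed.

```python
def align(ref, alt, sub_cost, indel=1.0):
    """Dynamic-programming alignment of two phone sequences.
    sub_cost(a, b) returns a data-driven substitution cost, e.g. one derived
    from the statistical co-occurrence (association strength) of phones a and b."""
    n, m = len(ref), len(alt)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * indel
    for j in range(1, m + 1):
        d[0][j] = j * indel
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + indel,                                  # deletion
                          d[i][j - 1] + indel,                                  # insertion
                          d[i - 1][j - 1] + sub_cost(ref[i - 1], alt[j - 1]))   # substitution
    return d[n][m]

# Toy cost: identical phones are free, vowel-for-vowel substitutions are cheap.
vowels = set("aeiou")
cost = lambda a, b: 0.0 if a == b else (0.3 if a in vowels and b in vowels else 1.0)
print(align(list("tomato"), list("tomeydo"), cost))
```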

For the non-native task data-driven pronunciation modelling by learning pronunciation rules gave a significant performance gain. Acoustic log likelihood rule pruning performed better than rule probability pruning.

For spontaneous dictation the pronunciation variation experiments did not improve the performance. The answer to how to better model the variation for spontaneous speech seems to lie neither in the acoustical nor the lexical modelling. The main differences between read and spontaneous speech are the grammar used and disfluencies like restarts and long pauses. The language model may thus be the best starting point for more research to achieve better performance for this speaking style.

APA, Harvard, Vancouver, ISO, and other styles
10

Júnior, Valdemar Lacerda. "Aplicações de técnicas de RMN à determinação estrutural de intermediários sintéticos." Universidade de São Paulo, 2000. http://www.teses.usp.br/teses/disponiveis/59/59138/tde-02102001-115811/.

Full text
Abstract:
The well known W rule, which establishes that nuclei in a planar W arrangement exhibit significant four bond coupling constants, has been a useful tool in molecular structure determination since early times of the use of nmr spectra for this purpose. Many configurations and conformations have been decided based on this rule. The continuous evolution of the nmr equipment, however, produces modifications in quality and number of available experimental data, thus forcing the chemists to frequent revisions of their points of view about the relative importance of the data that can be obtained from nmr spectra. The more recent equipment has a higher resolution and several additional features that throw some shadow over early concepts: on one hand, the high power of modern techniques such as NOE DIFF and other multi-dimensional methods reduces the importance of coupling constants; on the other hand, it is now possible to determine many more coupling constants, due mainly to the higher resolution. A natural consequence is that the chemist can now exploit the use of subtle splittings in conformational analysis. As part of our research work on the synthesis of natural products, we have prepared a number of cyclopentane derivatives. Our attention was strongly attracted to the long-range (4JHH) coupling constants that occurred in some of these compounds, as no planar W conformation seems to be possible in cyclopentanes. We have thus decided to start a more detailed study of the nmr spectra of these compounds, seeking a clearer interpretation of the data. We first assigned all hydrogens, including the stereochemistry of each hydrogen, and measured all J values for the compounds; for this task we used nmr spectra of both 1H (300 MHz) and 13C (75 MHz), NOE measurements, etc. Bond angles and dihedral (torsion) angles were calculated with molecular modeling programs; several different programs and methods were used to improve reliability. The first result obtained is a confirmation that 4JHH ≠ 0 occurs even in cases where a planar W conformation is not possible. More important, however, is the conclusion that there is a correlation between the 4JHH values and the involved dihedral angles. There are two dihedral angles in the path through the bonds between two hydrogens which show 4JHH coupling. When 4JHH values are plotted against (cos²θ1 × cos²θ2) (a simple extension of the Karplus equation), the points are not aligned over a continuous curve, but they show a clear tendency: 4JHH values become higher as the angles θ1 and θ2 vary from 90° to 180°.
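The tendency described above can be written as a Karplus-style proportionality (a restatement of the quantity plotted in the abstract, not a fitted equation from the thesis):

```latex
{}^{4}J_{\mathrm{HH}} \;\propto\; \cos^{2}\theta_{1}\,\cos^{2}\theta_{2},
\qquad \theta_{1},\ \theta_{2}\ \text{the dihedral angles along } \mathrm{H}_{1}\!-\!\mathrm{C}_{1}\!-\!\mathrm{C}_{2}\!-\!\mathrm{C}_{3}\!-\!\mathrm{H}_{2}.
```

Both cos² factors vanish at 90° and peak at 180°, so the classical planar-W arrangement (θ1 = θ2 = 180°) appears as the limiting case of maximal four-bond coupling.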
APA, Harvard, Vancouver, ISO, and other styles
11

Khor, Sebastian Wankun. "A Fuzzy Knowledge Map Framework for Knowledge Representation." Murdoch University, 2007. http://wwwlib.murdoch.edu.au/adt/browse/view/adt-MU20070822.32701.

Full text
Abstract:
Cognitive Maps (CMs) have shown promise as tools for modelling and simulation of knowledge in computers as representations of real objects, concepts, perceptions or events and their relations. This thesis examines the application of fuzzy theory to the expression of these relations, and investigates the development of a framework to better manage the operations of these relations. The Fuzzy Cognitive Map (FCM) was introduced in 1986 but little progress has been made since. This is because of the difficulty of modifying or extending its reasoning mechanism from causality to relations other than causality, such as associative and deductive reasoning. The ability to express the complex relations between objects and concepts determines the usefulness of the maps. Structuring these concepts and relations in a model so that they can be consistently represented and quickly accessed and manipulated by a computer is the goal of knowledge representation. This forms the main motivation of this research. In this thesis, a novel framework is proposed whereby single-antecedent fuzzy rules can be applied to a directed graph, and reasoning ability is extended to include non-causality. The framework provides a hierarchical structure where a graph in a higher layer represents knowledge at a high level of abstraction, and graphs in a lower layer represent the knowledge in more detail. The framework allows a modular design of knowledge representation and facilitates the creation of a more complex structure for modelling and reasoning. The experiments conducted in this thesis show that the proposed framework is effective and useful for deriving inferences from input data, solving certain classification problems, and for prediction and decision-making.
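A minimal sketch (the graph, rules and numbers are hypothetical, not from the thesis) of propagating activation over a directed graph whose edges carry single-antecedent fuzzy rules, with incoming contributions aggregated by a fuzzy OR (max):

```python
def clip(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def propagate(activation, edges, steps=10):
    """Iteratively update node activations in [0, 1].
    edges: list of (source, target, rule), where rule maps the source
    activation to a contribution on the target (a single-antecedent fuzzy rule)."""
    state = dict(activation)
    for _ in range(steps):
        new_state = {}
        for node, old in state.items():
            # Contributions from every edge pointing at this node.
            vals = [rule(state[src]) for src, dst, rule in edges if dst == node]
            new_state[node] = clip(max(vals)) if vals else old
        state = new_state
    return state

# Illustrative map: "risk" rises with "exposure" and with lack of "mitigation".
edges = [
    ("exposure", "risk", lambda a: a),           # direct positive influence
    ("mitigation", "risk", lambda a: 1.0 - a),   # inverse influence
]
print(propagate({"exposure": 0.8, "mitigation": 0.3, "risk": 0.0}, edges))
```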
APA, Harvard, Vancouver, ISO, and other styles
12

Narasigadu, Caleb. "Design of a static micro-cell for phase equilibrium measurements: measurements and modelling." Paris, ENMP, 2011. https://pastel.hal.science/pastel-00679369.

Full text
Abstract:
This study covers the design of a new apparatus that enables reliable vapour pressure and phase equilibria measurements for multiple liquid and vapour phases of small volumes (a maximum of 18 cm3). These phase equilibria measurements include: vapour-liquid equilibrium (VLE), liquid-liquid equilibrium (LLE) and vapour-liquid-liquid equilibrium (VLLE). The operating temperature of the apparatus ranges from 253 to 473 K and the operating pressure ranges from absolute vacuum to 1600 kPa. The sampling of the phases is accomplished using a single Rapid-OnLine-Sampler-Injector (ROLSI™). A novel technique is used to achieve sampling for each phase; it makes use of a metallic rod in an arrangement that compensates for volume changes during sampling. As part of this study, vapour pressure and phase equilibrium data were measured to test the operation of the newly developed apparatus. New experimental vapour pressure and VLE data were also measured for systems of interest to petrochemical companies. The experimental vapour pressure data obtained were regressed using the extended Antoine and Wagner equations. The experimental VLE data measured were regressed with thermodynamic models using the direct and combined methods. For the direct method the Soave-Redlich-Kwong and Peng-Robinson equations of state were used with the temperature dependent function (α) of Mathias and Copeman (1983). For the combined method, the virial equation of state with the second virial coefficient correlation of Tsonopoulos (1974) was used together with one of the following liquid-phase activity coefficient models: TK-Wilson, NRTL and modified UNIQUAC. Thermodynamic consistency testing was also performed for all the VLE experimental data measured; almost all the systems showed good thermodynamic consistency for the point test of Van Ness et al. (1973) and the direct test of Van Ness (1995).
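For context, the temperature-dependent α function of Mathias and Copeman (1983) referred to above has the standard published form below, with T_r = T/T_c the reduced temperature and c1, c2, c3 component-specific constants fitted to vapour pressure data (the regressed values themselves are not reproduced here):

```latex
\alpha(T) = \Bigl[\, 1 + c_{1}\bigl(1-\sqrt{T_{r}}\bigr) + c_{2}\bigl(1-\sqrt{T_{r}}\bigr)^{2} + c_{3}\bigl(1-\sqrt{T_{r}}\bigr)^{3} \Bigr]^{2},
\qquad T_{r} = \frac{T}{T_{c}}.
```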
APA, Harvard, Vancouver, ISO, and other styles
13

Norström, Per. "Technology education and non-scientific technological knowledge." Licentiate thesis, KTH, Filosofi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-48237.

Full text
Abstract:
This thesis consists of two essays and an introduction. The main theme is technological knowledge that is not based on the natural sciences. The first essay is about rules of thumb, which are simple instructions, used to guide actions toward a specific result, without need of advanced knowledge. Knowing adequate rules of thumb is a common form of technological knowledge. It differs both from science-based and intuitive (or tacit) technological knowledge, although it may have its origin in experience, scientific knowledge, trial and error, or a combination thereof. One of the major advantages of rules of thumb is the ease with which they can be learned. One of their major disadvantages is that they cannot easily be adjusted to new situations or conditions. Engineers commonly use rules, theories and models that lack scientific justification. How to include these in introductory technology education is the theme of the second essay. Examples include rules of thumb based on experience, but also models based on obsolete science or folk theories. Centrifugal forces, heat and cold as substances, and sucking vacuum all belong to the latter group. These models contradict scientific knowledge, but are useful for prediction in limited contexts, where they are used when found convenient. The role of models of this kind in technology education is the theme of the second essay. Engineers’ work is a common prototype for pupils’ work with product development and systematic problem solving during technology lessons. Therefore pupils should be allowed to use the engineers’ non-scientific models when doing design work in school technology. The acceptance of these could be experienced as contradictory by the pupils: a model that is allowed, or even encouraged, in technology class is considered wrong when doing science. To account for this, different epistemological frameworks must be used in science and technology education. Technology is first and foremost about usefulness, not about the truth or even generally applicable laws. This could cause pedagogical problems, but also provide useful examples to explain the limitations of models, the relation between model and reality, and the differences between science and technology.

APA, Harvard, Vancouver, ISO, and other styles
14

Goebes, Philipp, Karsten Schmidt, Felix Stumpf, Goddert von Oheimb, Thomas Scholten, Werner Härdtle, and Steffen Seitz. "Rule-based analysis of throughfall kinetic energy to evaluate biotic and abiotic factor thresholds to mitigate erosive power." Sage, 2016. https://tud.qucosa.de/id/qucosa%3A35382.

Full text
Abstract:
Below vegetation, throughfall kinetic energy (TKE) is an important factor to express the potential of rainfall to detach soil particles and thus for predicting soil erosion rates. TKE is affected by many biotic (e.g. tree height, leaf area index) and abiotic (e.g. throughfall amount) factors because of changes in rain drop size and velocity. However, studies modelling TKE with a high number of those factors are lacking. This study presents a new approach to model TKE. We used 20 biotic and abiotic factors to evaluate thresholds of those factors that can mitigate TKE and thus decrease soil erosion. Using these thresholds, an optimal set of biotic and abiotic factors was identified to minimize TKE. The model approach combined recursive feature elimination, random forest (RF) variable importance and classification and regression trees (CARTs). TKE was determined using 1405 splash cup measurements during five rainfall events in a subtropical Chinese tree plantation with five-year-old trees in 2013. Our results showed that leaf area, tree height, leaf area index and crown area are the most prominent vegetation traits to model TKE. To reduce TKE, the optimal set of biotic and abiotic factors was a leaf area lower than 6700mm2, a tree height lower than 290 cm combined with a crown base height lower than 60 cm, a leaf area index smaller than 1, more than 47 branches per tree and using single tree species neighbourhoods. Rainfall characteristics, such as amount and duration, further classified high or low TKE. These findings are important for the establishment of forest plantations that aim to minimize soil erosion in young succession stages using TKE modelling.
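A minimal scikit-learn sketch of the modelling chain described above (placeholder data and parameter choices, not the study's dataset or settings): recursive feature elimination wrapped around a random forest, variable importances from the forest, and a shallow regression tree whose split values play the role of interpretable factor thresholds.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1405, 20))            # 20 biotic/abiotic factors (placeholder data)
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=1405)  # stand-in for TKE

# 1) Recursive feature elimination around a random forest.
rf = RandomForestRegressor(n_estimators=200, random_state=0)
selector = RFE(rf, n_features_to_select=5).fit(X, y)
selected = np.flatnonzero(selector.support_)

# 2) Variable importance of the retained factors.
rf.fit(X[:, selected], y)
print(dict(zip(selected.tolist(), rf.feature_importances_.round(3))))

# 3) A shallow CART whose split values give interpretable thresholds.
cart = DecisionTreeRegressor(max_depth=3).fit(X[:, selected], y)
print(export_text(cart, feature_names=[f"factor_{i}" for i in selected]))
```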
APA, Harvard, Vancouver, ISO, and other styles
15

Arora, Siddharth. "Time series forecasting with applications in macroeconomics and energy." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:c763b735-e4fa-4466-9c1f-c3f6daf04a67.

Full text
Abstract:
The aim of this study is to develop novel forecasting methodologies. The applications of our proposed models lie in two different areas: macroeconomics and energy. Though we consider two very different applications, the common underlying theme of this thesis is to develop novel methodologies that are not only accurate, but are also parsimonious. For macroeconomic time series, we focus on generating forecasts for the US Gross National Product (GNP). The contribution of our study on macroeconomic forecasting lies in proposing a novel nonlinear and nonparametric method, called weighted random analogue prediction (WRAP) method. The out-of-sample forecasting ability of WRAP is evaluated by employing a range of different performance scores, which measure its accuracy in generating both point and density forecasts. We show that WRAP outperforms some of the most commonly used models for forecasting the GNP time series. For energy, we focus on two different applications: (1) Generating accurate short-term forecasts for the total electricity demand (load) for Great Britain. (2) Modelling Irish electricity smart meter data (consumption) for both residential consumers and small and medium-sized enterprises (SMEs), using methods based on kernel density (KD) and conditional kernel density (CKD) estimation. To model load, we propose methods based on a commonly used statistical dimension reduction technique, called singular value decomposition (SVD). Specifically, we propose two novel methods, namely, discount weighted (DW) intraday and DW intraweek SVD-based exponential smoothing methods. We show that the proposed methods are competitive with some of the most commonly used models for load forecasting, and also lead to a substantial reduction in the dimension of the model. The load time series exhibits a prominent intraday, intraweek and intrayear seasonality. However, most existing studies accommodate the ‘double seasonality’ while modelling short-term load, focussing only on the intraday and intraweek seasonal effects. The methods considered in this study accommodate the ‘triple seasonality’ in load, by capturing not only intraday and intraweek seasonal cycles, but also intrayear seasonality. For modelling load, we also propose a novel rule-based approach, with emphasis on special days. The load observed on special days, e.g. public holidays, is substantially lower compared to load observed on normal working days. Special day effects have often been ignored during the modelling process, which leads to large forecast errors on special days, and also on normal working days that lie in the vicinity of special days. The contribution of this study lies in adapting some of the most commonly used seasonal methods to model load for both normal and special days in a coherent and unified framework, using a rule-based approach. We show that the post-sample error across special days for the rule-based methods are less than half, compared to their original counterparts that ignore special day effects. For modelling electricity smart meter data, we investigate a range of different methods based on KD and CKD estimation. Over the coming decade, electricity smart meters are scheduled to replace the conventional electronic meters, in both US and Europe. Future estimates of consumption can help the consumer identify and reduce excess consumption, while such estimates can help the supplier devise innovative tariff strategies. 
To the best of our knowledge, there are no existing studies which focus on generating density forecasts of electricity consumption from smart meter data. In this study, we evaluate the density, quantile and point forecast accuracy of different methods across one thousand consumption time series, recorded from both residential consumers and SMEs. We show that the KD and CKD methods accommodate the seasonality in consumption, and correctly distinguish weekdays from weekends. For each application, our comprehensive empirical comparison of the existing and proposed methods was undertaken using multiple performance scores. The results show strong potential for the models proposed in this thesis.
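As a rough illustration of the SVD-based dimension reduction behind the intraday methods (a simplified sketch with made-up data; the thesis's discount-weighted formulation is more elaborate): arrange the load history as a days-by-periods matrix, keep the leading singular vectors, and forecast the next day by smoothing the low-dimensional daily scores.

```python
import numpy as np

def svd_load_forecast(load_matrix, n_components=3, alpha=0.3):
    """load_matrix: days x intraday periods (e.g. 48 half-hours per day).
    Returns a forecast of the next day's intraday profile."""
    U, s, Vt = np.linalg.svd(load_matrix, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]   # daily scores, days x k
    basis = Vt[:n_components]                          # k x periods

    # Simple exponential smoothing of the score series to get tomorrow's scores.
    smoothed = scores[0].copy()
    for day in scores[1:]:
        smoothed = alpha * day + (1 - alpha) * smoothed
    return smoothed @ basis                            # next-day profile, length = periods

# Illustrative: 60 days of 48 half-hourly loads with a fixed daily shape plus noise.
rng = np.random.default_rng(1)
shape = 30 + 10 * np.sin(np.linspace(0, 2 * np.pi, 48))
history = shape + rng.normal(scale=1.0, size=(60, 48))
print(svd_load_forecast(history)[:5].round(2))
```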
APA, Harvard, Vancouver, ISO, and other styles
16

Håkansson, Anne. "Graphic Representation and Visualisation as Modelling Support for the Knowledge Acquisition Process." Doctoral thesis, Uppsala University, Computer Science, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-3812.

Full text
Abstract:

The thesis describes steps taken towards using graphic representation and visual modelling support for the knowledge acquisition process in knowledge-based systems – a process commonly regarded as difficult. The performance of the systems depends on the quality of the embedded knowledge, which makes the knowledge acquisition phase particularly significant. During the acquisition phase, a main obstacle to proper extraction of information is the absence of effective modelling techniques.

The contributions of the thesis are: introducing a methodology for user-centred knowledge modelling, enhancing transparency to support the modelling of content and of the reasoning strategy, incorporating conceptualisation to simplify the grasp of the contents and to support assimilation of the domain knowledge, and supplying a visual compositional logic programming language for adding and modifying functionality.

The user-centred knowledge acquisition model, proposed in this thesis, applies a combination of different approaches to knowledge modelling. The aim is to bridge the gap between the users (i.e., knowledge engineers, domain experts and end users) and the system in transferring knowledge, by supporting the users through graphics and visualisation. Visualisation supports the users by providing several different views of the contents of the system.

The Unified Modelling Language (UML) is employed as a modelling language. A benefit of utilising UML is that the knowledge base can be modified, and the reasoning strategy and the functionality can be changed directly in the model. To make the knowledge base more comprehensible and expressive, we incorporated visual conceptualisation into UML’s diagrams to describe the contents. Visual conceptualisation of the knowledge can also facilitate assimilation in a hypermedia system through visual libraries.

Visualisation of functionality is applied to a programming paradigm, namely relational programming, often employed in artificial intelligence systems. This approach employs Venn-Euler diagrams as a graphic interface to a compositional operator based relational programming language.

The concrete result of the research is the development of a graphic representation and visual modelling approach to support the knowledge acquisition process. This approach has been evaluated for two different knowledge bases, one built for hydropower development and river regulation and the other for diagnosing childhood diseases.

APA, Harvard, Vancouver, ISO, and other styles
17

Ong, Yongzhi. "Extension of the Rule-Based Programming Language XL by Concepts for Multi-Scaled Modelling and Level-of-Detail Visualization." Göttingen: Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2015. http://d-nb.info/1070423653/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Crisp, Jennifer J. "Asset Management in Electricity Transmission Enterprises: Factors that affect Asset Management Policies and Practices of Electricity Transmission Enterprises and their Impact on Performance." Queensland University of Technology, 2004. http://eprints.qut.edu.au/15884/.

Full text
Abstract:
This thesis draws on techniques from Management Science and Artificial Intelligence to explore organisational aspects of asset management in electricity transmission enterprises. In this research, factors that influence policies and practices of asset management within electricity transmission enterprises have been identified, in order to examine their interaction and how they impact the policies, practices and performance of transmission businesses. It has been found that, while there is extensive literature on the economics of transmission regulation and pricing, there is little published research linking the engineering and financial aspects of transmission asset management at a management policy level. To remedy this situation, this investigation has drawn on a wide range of literature, together with expert interviews and personal knowledge of the electricity industry, to construct a conceptual model of asset management with broad applicability across transmission enterprises in different parts of the world. A concise representation of the model has been formulated using a Causal Loop Diagram (CLD). To investigate the interactions between factors of influence it is necessary to implement the model and validate it against known outcomes. However, because of the nature of the data (a mix of numeric and non-numeric data, imprecise, incomplete and often approximate) and the complexity and imprecision in the definition of relationships between elements, this problem is intractable to modelling by traditional engineering methodologies. The solution has been to utilise techniques from other disciplines. Two implementations have been explored: a multi-level fuzzy rule-based model and a system dynamics model; they offer different but complementary insights into transmission asset management. Each model shows potential for use by transmission businesses for strategic-level decision support. The research demonstrates the key impact of routine maintenance effectiveness on the condition and performance of transmission system assets. However, performance of the transmission network is not only related to equipment performance, but is a function of system design and operational aspects, such as loading and load factor. The type and supportiveness of regulation, together with the objectives and corporate culture of the transmission organisation, also play roles in promoting various strategies for asset management. The cumulative effect of all these drivers is to produce differences in asset management policies and practices, discernible between individual companies and at a regional level, where similar conditions have applied historically and today.
APA, Harvard, Vancouver, ISO, and other styles
19

Schutze, Mark Kurt. "The significance of genetic and ecological diversity in a wide-ranging insect pest, Paropsis atomaria Olivier (Coleoptera: Chrysomelidae)." Queensland University of Technology, 2008. http://eprints.qut.edu.au/16666/.

Full text
Abstract:
Paropsis atomaria (Coleoptera: Chrysomelidae) is a eucalypt-feeding leaf beetle endemic to southern and east coast Australia, and it is an emergent pest of the eucalypt hardwood industry. Paropsis atomaria was suspected to be a cryptic species complex based on apparent differences in life history characteristics between populations, its wide geographical distribution, and extensive host range within Eucalyptus. In this study genetic and ecological characters of P. atomaria were examined to determine the likelihood of a cryptic complex, and to identify the nature and causes of ecological variation within the taxon. Mitochondrial sequence variation of the gene COI was compared between populations from the east coast of Australia (South Australia to central Queensland) to assess genetic divergence between individuals from different localities and host plants of origin. Individuals from four collection localities used for the molecular analysis were then compared in a morphometric study to determine if observed genetic divergence was reflected by morphology, and common-garden trials using individuals from Lowmead (central Qld) and Canberra (ACT) were conducted to determine if morphological (body size) variation had a genetic component. Host plant utilisation (larval survival, development time, and pupal weight) by individuals from Lowmead and Canberra was then compared to determine whether differential host plant use had occurred between populations of P. atomaria; individuals from each population were reared on an allopatric and a sympatric host eucalypt species (E. cloeziana and E. pilularis). Finally, developmental data from each population were compared and incorporated into a phenology modelling program (DYMEX™) using temperature as the principal factor explaining and predicting population phenology under field conditions. Molecular results demonstrated relatively low genetic divergence between populations of P. atomaria, which is consistent with the single-species hypothesis; however, there is reduced gene flow between northern and southern populations, but no host-plant-related genetic structuring. Morphometric data revealed insufficient evidence to separate populations into different taxa; however, a correlation between latitude and size of adults was discovered, with larger beetles found at lower latitudes (i.e., adhering to a converse Bergmann cline). Common-garden experiments revealed body size to be driven by both genetic and environmental components. Host plant utilisation trials showed one host plant, E. cloeziana, to be superior for both northern and southern P. atomaria populations (increased larval survival and reduced larval development time). Eucalyptus pilularis had a negative effect on pupal weight for Lowmead (northern) individuals (to which it is allopatric), but not so for Canberra (southern) individuals. DYMEX™ modelling showed voltinism to be a highly plastic trait driven largely by temperature. Results from across all trials suggest that P. atomaria represents a single species with populations locally adapted to season length, with no evidence of differential host plant utilisation between populations. Further, voltinism is a seasonally plastic trait driven by temperature, but with secondary influential factors such as host plant quality. These data, taken together, reveal phenotypic variability within P. atomaria as the product of multiple abiotic and biotic factors, representing a complex interplay between local adaptation, phenotypic plasticity, and seasonal plasticity. Implications for pest management include an understanding of population structure, the nature of local adaptation and host use characteristics, and predictive models for the development of seasonal control regimens.
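The DYMEX™ phenology modelling referred to above is driven primarily by temperature. As a purely illustrative aside (not the model used in the thesis), the core idea can be sketched as degree-day accumulation above a lower developmental threshold, with a life stage completing once a thermal constant is reached; the threshold and thermal constant below are invented placeholder values.

```python
# Minimal degree-day sketch: accumulate heat above a lower threshold and
# report when a hypothetical life stage completes. Parameter values are
# illustrative placeholders, not estimates from the thesis.

def degree_days(daily_mean_temps, lower_threshold=10.0):
    """Accumulate degree-days above a lower developmental threshold."""
    return sum(max(t - lower_threshold, 0.0) for t in daily_mean_temps)

def completion_day(daily_mean_temps, lower_threshold=10.0, thermal_constant=300.0):
    """Return the first day on which accumulated degree-days reach the
    thermal constant required to complete the stage, or None."""
    total = 0.0
    for day, t in enumerate(daily_mean_temps, start=1):
        total += max(t - lower_threshold, 0.0)
        if total >= thermal_constant:
            return day
    return None

# Example: a warmer site completes the stage earlier than a cooler one.
warm = [22.0] * 60   # constant 22 degrees C -> 12 degree-days per day
cool = [16.0] * 60   # constant 16 degrees C -> 6 degree-days per day
print(completion_day(warm))  # 25
print(completion_day(cool))  # 50
```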
APA, Harvard, Vancouver, ISO, and other styles
20

Leshi, Olumide. "An Approach to Extending Ontologies in the Nanomaterials Domain." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170255.

Full text
Abstract:
As recently as the last decade or two, data-driven science workflows have become increasingly popular, and semantic technology has been relied on to help align often parallel research efforts in different domains and to foster interoperability and data sharing. However, a key challenge is the size of the data and the pace at which it is being generated, so much so that manual procedures lag behind, prompting the automation of most workflows. In this study, the effort is to continue investigating ways in which some tasks performed by experts in the nanotechnology domain, specifically in ontology engineering, could benefit from automation. An approach featuring phrase-based topic modelling and formal topical concept analysis, together with formal implication rules, is motivated as a way to uncover new concepts and axioms relevant to two nanotechnology-related ontologies. A corpus of 2,715 nanotechnology research articles helps showcase that the approach can scale, as seen in a number of experiments conducted. The usefulness of document text ranking as an alternative form of input to topic models is highlighted, as well as the benefit of implication rules to the task of concept discovery. In all, a total of 203 new concepts are uncovered by the approach to extend the referenced ontologies.
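The formal concept analysis and implication rules mentioned in the abstract can be illustrated with a toy example. The sketch below, which is not the pipeline used in the thesis, builds a tiny document-term context with invented terms and checks whether an implication between attribute sets holds, i.e. whether the conclusion is contained in the closure of the premise.

```python
# Toy formal concept analysis sketch: documents x terms as a binary context.
# An implication A -> B holds if every document containing all terms in A
# also contains all terms in B, i.e. B is a subset of the closure of A.
# The context below is invented for illustration only.

context = {
    "doc1": {"nanoparticle", "toxicity", "coating"},
    "doc2": {"nanoparticle", "coating"},
    "doc3": {"nanoparticle", "toxicity"},
    "doc4": {"nanotube", "toxicity"},
}

def extent(attrs):
    """Documents that contain every attribute in attrs."""
    return {d for d, terms in context.items() if attrs <= terms}

def closure(attrs):
    """Attributes shared by all documents in the extent of attrs."""
    docs = extent(attrs)
    if not docs:
        return set.union(*context.values())  # empty extent: closure is all attributes
    return set.intersection(*(context[d] for d in docs))

def implication_holds(premise, conclusion):
    return conclusion <= closure(premise)

print(implication_holds({"coating"}, {"nanoparticle"}))   # True: both coated docs mention nanoparticles
print(implication_holds({"toxicity"}, {"nanoparticle"}))  # False: doc4 breaks the rule
```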
APA, Harvard, Vancouver, ISO, and other styles
21

Klompje, Gideon. "A parametric monophone speech synthesis system." Thesis, Link to online version, 2006. http://hdl.handle.net/10019/561.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Siddiqui, Muhammad Shahid. "Three Essays on Environmental Economics and on Credit Market Imperfections." Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/20161.

Full text
Abstract:
This dissertation contains three essays on environmental economics and on credit market imperfections. The literature on carbon tax incidence generally finds that carbon taxes have a regressive impact on the distribution of income. The main reason for that finding stems from the fact that poor households spend a larger share of their total expenditure on energy products than the rich households do. This literature, however, has ignored the impact of carbon taxes on income stemming from changes in relative factor prices. Yet, changes in household welfare depend not only on variations in commodity prices, but also on changes in income. Chapter 1 provides a comprehensive analysis of the distributional impact of carbon taxes on inequality by considering both demand-side and supply-side channels. We use a multi-sector, multi-household general equilibrium model to analyze the distributional impact of carbon taxes on inequality. Using equivalent income as the household welfare metric, we apply the Shapley value and concentration index approaches to decomposing household inequality. Our simulation results suggest that carbon taxes exert a larger negative impact on the income of the rich than that of the poor, and are thereby progressive. On the other hand, when assessed from the use side alone (i.e., commodity prices alone), our results confirm previous findings, whereas carbon taxes are regressive. However, due to the stronger incidence of carbon taxes on inequality from the income side, our results suggest that the carbon tax tends to reduce inequality. These findings further suggest that the traditional approach of assessing the impact of carbon taxes on inequality through changes in commodity prices alone may be misleading. Chapter 2 investigates the economic impacts of creating an emissions bubble between Canada and the US in a context of subglobal participation in efforts to reduce pollution with market based-instruments. One of the advantages of an emissions bubble is that it can be beneficial to countries that differ in their production and consumption patterns. To address the competitiveness issue that arises from the free-rider problem in the area of climate-change mitigation, we consider the imposition of a border tax adjustment (BTA) - a commonly suggested solution in the literature. We develop a detailed multisector and multi-regional general equilibrium model to analyze the welfare, aggregate, sectoral and trade impacts of the formation of an emissions bubble between Canada and the US with and without BTA. Our simulation results suggest that, in the absence of BTA, the creation of the bubble would make both countries better off through a positive terms-of-trade effect, and more importantly, through a significant reduction in Canada’s marginal abatement cost. The benefits of these positive effects would spill over to the non-participating countries, leading them to increase their trade shares in non-emissions-intensive goods. Moreover, the simulation results also indicate that a unilateral implementation of a BTA by any one of the two countries is welfare deteriorating in the imposing country and welfare improving in the other. In contrast, a joint implementation of a BTA by the two countries would make Canada better off and the US worse off. Chapter 3 shows that learning by lending is a potential channel of understanding the business cycle fluctuation under an imperfect credit market. 
An endogenous link among the learning parameter, lending rates, and the size of investment makes it possible to generate an internal propagation even due to a temporary shock. The main finding of this chapter is the explanation of how ex post non-financial factors such as information losses by individual agents in a credit market may account for a persistence in real indicators such as capital stock and output.
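Chapter 1's Shapley value approach to inequality decomposition assigns each income component its average marginal contribution to overall inequality across all orders in which components are introduced. The sketch below applies that idea to the Gini coefficient of an invented two-component income distribution; the zero-income starting point and the toy data are assumptions for illustration, not output of the thesis's general equilibrium model.

```python
from itertools import permutations

def gini(incomes):
    """Gini coefficient via the mean absolute difference (0 = perfect equality)."""
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(x - y) for x in incomes for y in incomes) / (n * n)
    return mad / (2 * mean)

def shapley_decomposition(components):
    """Average marginal contribution of each income component to the Gini of
    total income, over all orders in which components are added.
    `components` maps a component name to a list of per-household incomes."""
    names = list(components)
    n_house = len(next(iter(components.values())))
    contrib = {k: 0.0 for k in names}
    orders = list(permutations(names))
    for order in orders:
        totals = [0.0] * n_house
        prev = 0.0  # inequality of zero income taken as 0 by convention here
        for name in order:
            totals = [t + c for t, c in zip(totals, components[name])]
            cur = gini(totals)
            contrib[name] += cur - prev
            prev = cur
    return {k: v / len(orders) for k, v in contrib.items()}

# Invented example: three households with labour and capital income.
components = {"labour": [10.0, 20.0, 30.0], "capital": [0.0, 5.0, 45.0]}
print(shapley_decomposition(components))  # contributions sum to the overall Gini
```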
APA, Harvard, Vancouver, ISO, and other styles
23

Ismail, Emad Abbas. "Highway intersections with alternative priority rules." Thesis, University of Bradford, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.277143.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Hunt, Andrew. "Rules for modelling in computer-aided fault tree synthesis." Thesis, Loughborough University, 1992. https://dspace.lboro.ac.uk/2134/27982.

Full text
Abstract:
In the design of process plants safety has assumed an increasingly high profile. One of the techniques used in hazard identification is the fault tree, which involves first the synthesis of the tree and then its analysis. The construction of a fault tree, however, requires special skills and can be a time-consuming process. It is therefore attractive to develop computer aids for the synthesis stage to match those which already exist for the analysis of the tree. A computer based system for fault tree synthesis has been developed at Loughborough University. This thesis is part of a continuing programme of work associated with this facility.
APA, Harvard, Vancouver, ISO, and other styles
25

McDermid, Donald. "The Development of the Business Rules Diagram." Curtin University of Technology, School of Information Systems, 1998. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=9382.

Full text
Abstract:
This thesis concerns the development of a diagramming technique which assists in the specification of information systems requirements. The technique is called the Business Rules Diagram (BRD), although earlier versions were given different names. The term development in the title of this thesis is defined here to include both the work involved in designing the BRD and the testing of its usefulness. So, the scope of this research covers research activity starting from the original idea for the diagram through to testing its usefulness. Action research was the research method used. In all, two major action research studies were undertaken. The first involved working with an analyst only. The second involved working with an analyst and users.
APA, Harvard, Vancouver, ISO, and other styles
26

Maiyama, Kabiru M. "Performance Analysis of Virtualisation in a Cloud Computing Platform. An application driven investigation into modelling and analysis of performance vs security trade-offs for virtualisation in OpenStack infrastructure as a service (IaaS) cloud computing platform architectures." Thesis, University of Bradford, 2019. http://hdl.handle.net/10454/18587.

Full text
Abstract:
Virtualisation is one of the underlying technologies that led to the success of cloud computing platforms (CCPs). The technology, along with other features such as multi-tenancy, allows computing resources to be delivered as a service through efficient sharing of physical resources. As these resources are provided through virtualisation, a robust agreement covering both the quantity and the quality of service (QoS) is set out in service level agreement (SLA) documents. QoS is one of the essential components of an SLA, and performance is one of its primary aspects. As the technology progressively matures and receives massive acceptance, researchers from industry and academia continue to carry out novel theoretical and practical studies of various essential aspects of CCPs with significant levels of success. This thesis starts with an assessment of the current level of knowledge in the literature on cloud computing in general and CCPs in particular. In this context, a substantive literature review was carried out focusing on performance modelling, testing, analysis and evaluation of Infrastructure as a Service (IaaS) methodologies. To this end, a systematic mapping study (SMS) of the literature was conducted, and the SMS guided the choice and direction of this research. The SMS was followed by the development of a novel open queueing network model (QNM) at equilibrium for the performance modelling and analysis of an OpenStack IaaS CCP. It was assumed that the external arrival pattern is Poisson while the queueing stations provide exponentially distributed service times. Based on Jackson’s theorem, the model was exactly decomposed into individual M/M/c (c ≥ 1) stations. Each of these queueing stations was analysed in isolation, and closed-form expressions for key performance metrics, such as mean response time, throughput and server (resource) utilisation, as well as the bottleneck device, were determined. Moreover, the research was extended with a proposed open QNM with a bursty external arrival pattern represented by a Compound Poisson Process (CPP) with geometrically distributed batches, or equivalently, variable Generalised Exponential (GE) interarrival and service times. Each queueing station had c (c ≥ 1) GE-type servers. Based on a generic maximum entropy (ME) product-form approximation, the proposed open GE-type QNM was decomposed into individual GE/GE/c queueing stations with GE-type interarrival and service times. The performance metrics and bottleneck analysis of the QNM were then determined, providing vital insights for the capacity planning of existing CCP architectures as well as the design and development of new ones. The results also revealed that the burstiness of the interarrival and service time processes has a significant impact, leading to worst-case performance bound scenarios. Finally, an investigation was carried out into modelling and analysis of performance and security trade-offs for a CCP architecture, based on a proposed generalised stochastic Petri net (GSPN) model with a security-detection control model (SDCM). In this context, ‘optimal’ combined performance and security metrics were defined with either M-type or GE-type arrival and service times, and the impact of security incidents on performance was assessed.
Typical numerical experiments on the GSPN model were conducted and implemented using the Möbius package, and ‘optimal’ trade-offs between performance and security were determined, which are crucial in the SLA of cloud computing services.
Petroleum Technology Development Fund (PTDF) of the Government of Nigeria; Usmanu Danfodiyo University, Sokoto.
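Since each station in the decomposed model described above is an M/M/c queue, the standard textbook formulas (Erlang C) give its equilibrium metrics. The sketch below computes them for an invented station; the arrival rate, service rate and server count are placeholder values, not parameters from the thesis.

```python
import math

def mmc_metrics(arrival_rate, service_rate, servers):
    """Standard M/M/c results: per-server utilisation, Erlang C waiting
    probability, mean waiting time and mean response (sojourn) time."""
    a = arrival_rate / service_rate          # offered load (Erlangs)
    rho = a / servers                        # per-server utilisation
    if rho >= 1.0:
        raise ValueError("unstable queue: utilisation must be below 1")
    # Erlang C: probability that an arriving job has to wait
    summation = sum(a**k / math.factorial(k) for k in range(servers))
    last = a**servers / (math.factorial(servers) * (1 - rho))
    erlang_c = last / (summation + last)
    mean_wait = erlang_c / (servers * service_rate - arrival_rate)
    mean_response = mean_wait + 1.0 / service_rate
    return {"utilisation": rho, "p_wait": erlang_c,
            "mean_wait": mean_wait, "mean_response": mean_response}

# Illustrative station: 10 requests/s arriving, each server handles 3/s, 4 servers.
print(mmc_metrics(arrival_rate=10.0, service_rate=3.0, servers=4))
```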
APA, Harvard, Vancouver, ISO, and other styles
27

FERRARA, MARIA. "DISINFLAZIONE E CONSOLIDAMENTO FISCALE CON PARTECIPAZIONE LIMITATA AI MERCATI DEGLI ASSETS." Doctoral thesis, Università Cattolica del Sacro Cuore, 2014. http://hdl.handle.net/10280/4372.

Full text
Abstract:
1. Può un Modello DSGE spiegare una disinflazione costosa? Questo lavoro mostra che un modello DSGE non è in grado di spiegare una disinflazione costosa con indicizzazione parziale e bassa dei prezzi e dei salari. Il modello invece è in grado di replicare una disinflazione recessiva sostituendo il meccanismo di modellizazione delle rigidità nominali di Calvo (1983) con quello di Rotemberg (1982). 2. Disinflazione e Diseguaglianza in un Modello Monetario DSGE: Un’Analisi di Welfare Questo lavoro analizza gli effetti redistributivi di una politica disinflazionistica in un modello DSGE con Partecipazione Limitata ai Mercati degli Assets. Due sono i meccanismi che guidano a distribuzione del consumo e del reddito: il markup delle imprese e il cosiddetto vincolo cash in advance. I risultati suggeriscono che la disinflazione aumenta inequivocabilmente la diseguaglianza con il meccanismo di Rotemberg. Invece con il meccanismo di Calvo questo effetto viene ottenuto soltanto se le imprese non sono costrette ad indebitarsi per finanziare il fattore lavoro. 3. Consolidamento Fiscale e Consumatori Rule of Thumb Questo lavoro simula un esperimento di consolidamento fiscale in un modello DSGE con partecipazione limitata ai mercati degli assets. I risultati mostrano che durante un processo di consolidamento fiscale riduzioni temporanee delle tasse o aumenti temporanei di transfers consentono sia di ridurre il debito che stimolare il consumo.
1. Can a DSGE Model Explain a Costly Disinflation? This paper shows that a medium-scale DSGE model fails to explain a costly disinflation with low and partial indexation of prices and wages. In contrast to Calvo (1983) price setting, with the Rotemberg (1982) framework the model can replicate a recessionary disinflation for any degree of indexation. 2. Disinflation and Inequality in a DSGE Monetary Model: A Welfare Analysis This paper investigates the redistributive effects of a disinflation experiment in a standard DSGE model with Limited Asset Market Participation. There are two key mechanisms driving consumption and income distribution: firms’ markup and the cash-in-advance channel. Results show that disinflation unambiguously increases inequality under Rotemberg. Under Calvo this effect only obtains if the cash-in-advance constraint does not bind firms’ ability to finance their working capital. 3. Fiscal Consolidation and Rule of Thumb Consumers: Gain With or Without Pain? This paper simulates a fiscal consolidation in a medium-scale DSGE model augmented with Limited Asset Market Participation. Results show that during the consolidation process temporary tax reductions or temporary transfer increases make it possible both to reduce public debt and to boost consumption. A countercyclical monetary policy is an effective complement to fiscal policy as a stabilization tool.
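For reference, the two price-setting frictions compared in these essays are usually stated as follows (a textbook summary, not the thesis's exact specification): under Calvo (1983) a firm may reset its price in a given period only with probability 1 − θ, while under Rotemberg (1982) every firm may reset but pays a quadratic price-adjustment cost; to a first-order approximation both yield a New Keynesian Phillips curve of the same form.

```latex
% Rotemberg (1982): quadratic cost of adjusting the nominal price P_t(i)
\frac{\phi}{2}\left(\frac{P_t(i)}{P_{t-1}(i)} - 1\right)^{2} Y_t
% Calvo (1983): each period a firm may reset its price only with probability 1 - \theta.
% To first order, both frictions imply a Phillips curve of the form
\pi_t = \beta \,\mathbb{E}_t \pi_{t+1} + \kappa \,\widehat{mc}_t ,
% where the slope \kappa is a function of \theta (Calvo) or \phi (Rotemberg).
```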
APA, Harvard, Vancouver, ISO, and other styles
28

Sun, Bo. "Modelling of Interaction Units." Thesis, Linköping University, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2859.

Full text
Abstract:

Developing a model of a service system and mobile units such as cellphones, PDAs and laptops is an important preliminary step in designing systems that could provide these units with convenience and entertainment services through common short-range communication such as Bluetooth, wireless LAN, etc.

In this project, an ontology is created to represent this model. In addition, some basic service rules are programmed; combined with this ontology, they can be used to simulate interactions between items inside the model.

The description of this model (ontology) has been made through Protégé and demonstrated by using its graphical interface. The rules have been created by using Jess and implemented with the ontology by using JessTab.

APA, Harvard, Vancouver, ISO, and other styles
29

Chevapatrakul, Thanaset. "Modelling and forecasting the performance of monetary policy rules for the United Kingdom." Thesis, University of Nottingham, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.416408.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Abdelsalam, Mamdouh Abdelmoula Mohamed. "Essays on optimal inflation targeting forecast based rules and inflation modelling under uncertainty." Thesis, University of Leicester, 2016. http://hdl.handle.net/2381/37424.

Full text
Abstract:
This thesis focuses on exploring the most efficient forecast-based rules for an Inflation Targeting (IT) regime, and on modelling and forecasting inflation. It is divided into four empirical chapters following the introductory first chapter. The second chapter explores the most efficient alternative forecast-based rules in the context of an estimated Global Projection Model for the Egyptian Central Bank’s IT period. In addition to the traditional Inflation Forecast Based (IFB) rule, the chapter augments this rule with other variables such as the expected exchange rate, the output gap, or both. It also proposes a structural Backward-Forward (BF) rule, which implies richer dynamics for the monetary policy rule and encompasses other common rules. The third chapter discusses modelling and forecasting inflation from the Phillips Curve (PC) under misspecification. It considers various econometric specifications, estimation methods, and different measures of business cycles. We then propose a Time Varying Coefficient Phillips Curve (TVCPC), which is more sophisticated and informative and also acts as a tool to make the gap between the actual specification and the estimated one as small as possible. The fourth chapter considers modelling the density of quarterly inflation by using a time-varying higher-order moments model developed by Leon, Rubio, and Serna (2005), and isolating the time-varying conditional correlations between inflation and both the growth in domestic credit and the real exchange rate by using two multivariate GARCH models. The fifth chapter focuses on improving inflation forecasts by combining linear and non-linear models using both traditional and newly proposed sophisticated time-varying combination approaches. We find that the BF rule is the superior welfare policy under all policy scenarios. With regard to the IFB rule, we conclude that the versions augmented with the expected exchange rate are preferable to the IFB rule; the Time Varying Coefficient Phillips Curve with HP output gap (TVCPC_HP) produces the best forecasting accuracy; and models with time-invariant volatility, skewness and kurtosis are inferior to the models with time-varying higher-order moments. Moreover, in comparison to static models, dynamic multivariate models can provide rich information related to inflation dynamics and forecasts. Further, the proposed time-varying combination approaches dominate all individual models and all other static combination schemes.
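One common way to obtain time-varying combination weights of the general kind studied in the final chapter is to weight each model by the inverse of its recent mean squared forecast error over a rolling window. The sketch below is a generic illustration of that idea, not the specific combination schemes proposed in the thesis; the model names and error series are invented.

```python
def rolling_inverse_mse_weights(errors_by_model, window=8):
    """Time-varying combination weights proportional to the inverse of each
    model's mean squared forecast error over the most recent `window` errors."""
    inv_mse = {}
    for name, errors in errors_by_model.items():
        recent = errors[-window:]
        mse = sum(e * e for e in recent) / len(recent)
        inv_mse[name] = 1.0 / max(mse, 1e-12)   # guard against a zero MSE
    total = sum(inv_mse.values())
    return {name: v / total for name, v in inv_mse.items()}

def combine(forecasts, weights):
    """Weighted combination of the individual model forecasts."""
    return sum(weights[name] * f for name, f in forecasts.items())

# Invented example: two inflation models with different recent track records.
past_errors = {"phillips_curve": [0.4, -0.3, 0.5, 0.2], "var": [0.9, -1.1, 0.8, -0.7]}
weights = rolling_inverse_mse_weights(past_errors, window=4)
print(weights)                                              # the more accurate model gets the larger weight
print(combine({"phillips_curve": 2.1, "var": 2.6}, weights))
```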
APA, Harvard, Vancouver, ISO, and other styles
31

Conlon, Thomas Hugh. "Beyond rules : development and evaluation of knowledge acquisition systems for educational knowledge-based modelling." Thesis, University of Edinburgh, 1997. http://hdl.handle.net/1842/7514.

Full text
Abstract:
The technology of knowledge-based systems undoubtedly offers potential for educational modelling, yet its practical impact on today's school classrooms is very limited. To an extent this is because the tools presently used in schools are EMYCIN-type expert system shells. The main argument of this thesis is that these shells make knowledge-based modelling unnecessarily difficult and that tools which exploit knowledge acquisition technologies empower learners to build better models. We describe how such tools can be designed. To evaluate their usability, a model-building course was conducted in five secondary schools. During the course pupils built hundreds of models in a common range of domains. Some of the models were built with an EMYCIN-type shell whilst others were built with a variety of knowledge acquisition systems. The knowledge acquisition systems emerged as superior in important respects. We offer some explanations for these results and argue that although problems remain, such as in teacher education, design of classroom practice, and assessment of learning outcomes, it is clear that knowledge acquisition systems offer considerable potential to develop improved forms of educational knowledge-based modelling.
APA, Harvard, Vancouver, ISO, and other styles
32

Maragoudaki, Eleni. "Models, rules and behaviours : investigating young children's modelling abilities using an educational computer program." Thesis, University College London (University of London), 2007. http://discovery.ucl.ac.uk/10000051/.

Full text
Abstract:
A model can be built to represent aspects of the world establishing at the same time a world on its own. It might be considered in terms of its relation to the world or as an artefact having an identity related to the nature and kind of the modelling tool used to make it. The present research focuses on models being built by a computer-based modelling tool called WorldMaker (WM), which allows models to be built in terms of objects and the actions they perform. It is intended to be accessible to younger pupils. Therefore, children from the last years of primary and the first years of secondary education (aged 10-14) participated in the research. The research was carried out in three stages. The preliminary study aimed to explore children’s ability to use WM, as well as possibilities for the kinds of tasks that might be used with it. The first main study focused on rules, which define actions in WM, and their meaning for children. It mainly investigated children’s understanding, use and thinking about models in the form of WM rules. The second main study looked into children’s ability to think of situations in terms of structures as well as their understanding about the relation between models and reality. Its primary concern was to find out if children think about situations presented as stories or computer models in the ‘modelling’ way required by WM, that is, in terms of objects and the actions they perform. In the research tasks the children were called on to approach the modelling process by creating or exploring a model, as well as by describing and explaining the formal behaviour of a model or interpreting the meaning of it. It was found that the children were able to use WM as a modelling tool; they could represent actions in the form of a WM rule and they were able to think of situations in terms of objects and actions. Besides, the relation between models and reality is an issue when young children are involved with the modelling process.
APA, Harvard, Vancouver, ISO, and other styles
33

Witt, Johannes [Verfasser]. "Modelling and Analysis of the NF-kappaB Signalling Pathway and Development of a Thermodynamically Consistent Modelling Approach for Reaction Rules / Johannes Witt." Aachen : Shaker, 2012. http://d-nb.info/105240832X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Hledík, Tibor. "Comparison of Alternative Policy Rules in a Structural Model of the Czech Republic." Doctoral thesis, Vysoká škola ekonomická v Praze, 2003. http://www.nusl.cz/ntk/nusl-76848.

Full text
Abstract:
The main goal of this thesis has been to study alternative policy rules in a small structural model calibrated to capture the Czech economy. After an overview of the historical development of economic theory and structural modelling, we specify a small open economy model that serves as the main technical tool for the analysis. The model represents a framework in which forward-looking, model-consistent expectations are formed with respect to the development of the exchange rate and interest rates. Inflation expectations are forward-looking too, with some nominal rigidities in inflation dynamics. The model's structure is relatively simple. The IS curve captures the dynamics of real GDP, which exhibits real rigidity motivated by habit formation or investment adjustment costs. In our specification real GDP is a function of the deviations of the real exchange rate, the real interest rate and foreign demand from their corresponding equilibrium levels. The Phillips curve is based on F-M-type wage-setting behaviour, which makes it possible to model domestic prices as mark-ups over wages. CPI inflation then consists of domestic, imported and administered inflation, including the effect of any indirect tax changes. The exchange rate is modelled by the UIP arbitrage condition. Exchange rate expectations are forward-looking, but with some inertia in expectation formation. Interest rates with one-year maturity are also modelled through an arbitrage condition on the money market; they are fully model-consistently forward-looking. The model is closed by a Taylor-type forward-looking policy rule. The interest rate exhibits some inertia and feeds back from the deviation of inflation from target and of output from its equilibrium. The specification (parameterization) of the rule is general enough to examine both CPI and domestic inflation targeting. The model specification is followed by empirical work leading towards the implementation of the model on Czech data. Based on the sources of the Czech Statistical Office, the Czech National Bank and Consensus Economics Inc., we first processed the data by executing seasonal adjustment and other transformations necessary to be consistent with the definition of model variables. The database was created by an automatic MATLAB-based routine, so the calculations were relatively easy to update. With the database completed, we set up a Kalman filter for determining equilibrium values for the real interest rate, the exchange rate and output, and at the same time identified all model residuals through Kalman filtering. We paid special attention to the decomposition of the output gap. In order to assess the overall dynamic properties of the model and judge how well the model fits the data, we conducted several exercises. First we decomposed some of the important endogenous variables of the model into shocks to see whether the identified shocks are in line with our intuition and with episodes of recent Czech economic history. We found that the shocks are not in contrast with some of the clearly distinguishable episodes. After the shock decomposition we ran in-sample simulations to see how well the model is able to fit reality two years ahead. We found the overall results quite encouraging. We were able to fit the output gap as well as MP inflation quite well. Domestic inflation was slightly more inertial in model simulations than in reality, but even in this case the results were acceptable.
The model was not able to fit the 2001-2 appreciation of the nominal exchange rate (understandably, it also did not forecast well the fast fall in inflation after the appreciation period), which is not a big surprise. The model calibration part of the thesis concludes that the model fits the data and the economic story reasonably well.
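The building blocks listed in the abstract (an IS curve, a Phillips curve, UIP and a Taylor-type rule) have a familiar generic gap form. The equations below are a stylised reminder of that structure with illustrative coefficient names, not the calibrated equations of the thesis.

```latex
% Stylised gap-form structure of a small open-economy model (illustrative only)
\hat{y}_t = a_1 \hat{y}_{t-1} - a_2 \hat{r}_t + a_3 \hat{z}_t + a_4 \hat{y}^{*}_t + \varepsilon^{y}_t                 % IS curve
\pi_t = b_1 \mathbb{E}_t \pi_{t+1} + (1 - b_1)\,\pi_{t-1} + b_2 \hat{y}_t + b_3 \Delta \hat{z}_t + \varepsilon^{\pi}_t % Phillips curve
s_t = \mathbb{E}_t s_{t+1} - \tfrac{1}{4}\left(i_t - i^{*}_t - \rho_t\right) + \varepsilon^{s}_t                       % UIP condition
i_t = c_1 i_{t-1} + (1 - c_1)\left[r^{eq}_t + \mathbb{E}_t\pi_{t+4} + c_2\left(\mathbb{E}_t\pi_{t+4} - \pi^{T}\right) + c_3 \hat{y}_t\right] + \varepsilon^{i}_t  % policy rule
% \hat{y}: output gap, \hat{r}, \hat{z}: real interest rate and real exchange rate gaps,
% s: nominal exchange rate, \rho: risk premium, \pi^{T}: inflation target.
```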
APA, Harvard, Vancouver, ISO, and other styles
35

McKellar, Dougan Kelk. "A dislocation model of plasticity with particular application to fatigue crack closure." Thesis, University of Oxford, 2001. http://ora.ox.ac.uk/objects/uuid:45183b90-017f-4ac1-9550-94772a0ca88b.

Full text
Abstract:
The ability to predict fatigue crack growth rates is essential in safety critical systems. The discovery of fatigue crack closure in 1970 caused a flourish of research in attempts to simulate this behaviour, which crucially affects crack growth rates. Historically, crack tip plasticity models have been based on one-dimensional rays of plasticity emanating from the crack tip, either co-linear with the crack (for the case of plane stress), or at a chosen angle in the plane of analysis (for plane strain). In this thesis, one such model for plane stress, developed to predict fatigue crack closure, has been refined. It is applied to a study of the relationship between the apparent stress intensity range (easily calculated using linear elastic fracture mechanics), and the true stress intensity range, which includes the effects of plasticity induced fatigue crack closure. Results are presented for all load cases for a finite crack in an infinite plane, and a method is demonstrated which allows the calculation of the true stress intensity range for a growing crack, based only on the apparent stress intensity range for a static crack. Although the yield criterion is satisfied along the plastic ray, these one-dimensional plasticity models violate the yield criterion in the area immediately surrounding the plasticity ray. An area plasticity model is therefore required in order to model the plasticity more accurately. This thesis develops such a model by distributing dislocations over an area. Use of the model reveals that current methods for incremental plasticity algorithms using distributed dislocations produce an over-constrained system, due to misleading assumptions concerning the normality condition. A method is presented which allows the system an extra degree of freedom; this requires the introduction of a parameter, derived using the Prandtl-Reuss flow rule, which relates the magnitude of slip on complementary shear planes. The method is applied to two problems, confirming its validity.
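For reference, the Prandtl-Reuss flow rule invoked in the final paragraph states, in its standard form, that the plastic strain increment is proportional to the deviatoric stress:

```latex
d\varepsilon^{p}_{ij} = d\lambda \, s_{ij},
\qquad
s_{ij} = \sigma_{ij} - \tfrac{1}{3}\,\sigma_{kk}\,\delta_{ij},
```

where dλ ≥ 0 is a plastic multiplier determined by the consistency condition on the yield surface. This is the textbook statement of the rule, included for orientation rather than as the thesis's derivation of the slip-magnitude parameter.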
APA, Harvard, Vancouver, ISO, and other styles
36

Cura, Rémi. "Inverse procedural Street Modelling : from interactive to automatic reconstruction." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1034/document.

Full text
Abstract:
La population mondiale augmente rapidement, et avec elle, le nombre de citadins, ce qui rend d'autant plus importantes la planification et la gestion des villes.La gestion "intelligente" de ces villes et les nombreuses applications (gestion, tourisme virtuel, simulation de trafic, etc.) nécessitent plus de données réunies dans des modèles virtuels de villes.En milieu urbain, les rues et routes sont essentielles par leur rôle d'interface entre les espaces publics et privés, et entre ces différents usages.Il est difficile de modéliser les rues (ou de les reconstruire virtuellement) car celles-ci sont très diverses (de par leur forme, fonction, morphologie), et contiennent des objets très divers (mobilier, marquages, panneaux).Ce travail de thèse propose une méthode (semi-) automatique pour reconstruire des rues en utilisant le paradigme de la modélisation procédurale inverse dont le principe est de générer un modèle procéduralement, puis de l'adapter à des observations de la réalité.Notre méthode génère un premier modèle approximatif - à partir de très peu d'informations (un réseau d'axes routiers + attributs associés) - assez largement disponible.Ce modèle est ensuite adapté à des observations de façon interactive (interaction en base compatible avec les logiciels SIG communs) et (semi-) automatique (optimisation).L'adaptation (semi-) automatique déforme le modèle de route de façon à ce qu'il corresponde à des observations (bords de trottoir, objets urbains) extraites d'images et de nuages de points.La génération (StreetGen) et l'édition interactive se font dans un serveur de base de données ; de même que la gestion des milliards de points Lidar (Point Cloud Server).La génération de toutes les rues de la ville de Paris prends quelques minutes, l'édition multi-utilisateurs est interactive (<0.3 s). Les premiers résultats de l'adaptation (semi-) automatique (qq minute) sont prometteurs (la distance moyenne à la vérité terrain passe de 2.0 m à 0.5 m).Cette méthode, combinée avec d'autres telles que la reconstruction de bâtiment, de végétation, etc., pourrait permettre rapidement et semi automatiquement la création de modèles précis et à jour de ville
World urban population is growing fast, and so are cities, inducing an urgent need for city planning and management. Increasing amounts of data are required as cities become larger and "smarter", and as more related applications require those data (planning, virtual tourism, traffic simulation, etc.). Data related to cities thus become larger and are integrated into more complex city models. Roads and streets are an essential part of the city, being the interface between public and private space, and between urban usages. Modelling streets (or street reconstruction) is difficult because streets can be very different from each other (in layout, functions, morphology) and contain widely varying urban features (furniture, markings, traffic signs), at different scales. In this thesis, we propose an automatic and semi-automatic framework to model and reconstruct streets using the inverse procedural modelling paradigm. The main guiding principle is to generate a generic procedural model and then to adapt it to reality using observations. In our framework, a "best guess" road model is first generated from very little information (the road axis network and associated attributes), which is available in most national databases. This road model is then fitted to observations by combining in-base interactive user editing (using common GIS software as a graphical interface) with semi-automated optimisation. The optimisation approach adapts the road model so that it fits observations of urban features extracted from diverse sensing data. Both street generation (StreetGen) and the interactions happen in a database server, as does the management of large amounts of street Lidar data (the sensing observations) using a Point Cloud Server. We test our methods on the entire city of Paris, whose streets are generated in a few minutes and can be edited interactively (<0.3 s) by several concurrent users. Automatic fitting (a few minutes) shows promising results (average distance to ground truth reduced from 2.0 m to 0.5 m). In the future, this method could be combined with others dedicated to the reconstruction of buildings, vegetation, etc., so that an affordable, precise, and up-to-date city model can be obtained quickly and semi-automatically. This will also allow such models to be used in other application areas. Indeed, the possibility of having common, more generic city models is an important challenge given the cost and complexity of their construction.
APA, Harvard, Vancouver, ISO, and other styles
37

Thuresson, Frida. "Extremväders påverkan på jorderosion : En GIS-modellering över Byälvens avrinningsområde." Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446019.

Full text
Abstract:
As climate change contributes to a greater amount of extreme weather, it also affects soil erosion and the number of areas affected by it. Areas affected by soil erosion often suffer economic losses because of the loss of agricultural land: the soil loses nutrients, which leads to smaller harvests and poorer water quality. This bachelor thesis aims to study how soil erosion is affected by climate change, and more specifically by extreme weather, using a RUSLE analysis. By knowing to what extent different areas are likely to be affected by soil erosion, there are good possibilities to protect those areas with erosion protection. In this thesis, three analyses were performed over areas located in the Byälven catchment area to extend the knowledge of how extreme weather affects these areas and, from that, to obtain a guideline for how similar areas might be affected. Lantmäteriet, SGU and SMHI were used as data sources for the analyses. The results show that the three analysed areas in the Byälven catchment area will most likely lose 93-97 percent more sediment if precipitation increases by 50 percent relative to the 2020 annual value, compared with the precipitation between 1961 and 1991. This can be attributed to the climatological factors that affect the result and the increase in land loss due to soil erosion. An increase in soil erosion at a site can depend on human activities that change the soil structure and on climatological factors, but it can also be due to the organically bound carbon in the soil.
När klimatförändringarna bidrar till ett ökat antal extremväder så påverkar det jorderosionen och ökar andelen områden som påverkas av jorderosion. Områden som drabbas av jorderosion påverkas ofta negativt ekonomiskt eftersom resultatet av jorderosion ofta är i form av förluster av odlade områden, minskade näringsämnen vilket påverkar framtida skördar samt ger sämre vattenkvalitet. Detta kandidatarbete syftar till att studera hur jorderosion påverkas av klimatförändringarna, med fokus på extremväder, främst med hjälp av en RUSLE-analys. Genom att veta i vilken omfattning olika områden sannolikt kan bli drabbade av jorderosion så finns det goda möjligheter att förhindra en ökad jorderosion genom erosionsskydd. I det här arbetet gjordes det analyser över tre områden som är lokaliserade i Byälvens avrinningsområde för att förstå hur dessa kan bli påverkade vid en ökad andel extremväder eftersom det kan ge en riktlinje för hur andra liknande områden påverkas. Som datakälla till analysen användes Lantmäteriet, SGU och SMHI. Resultatet visar på att de tre analyserade områdena i Byälvens avrinningsområde sannolikt kommer att förlora mellan 93–97% mer sediment om nederbördsmängderna i området ökar med 50% jämfört med 2020-årsvärde i jämförelse med genomsnittsnederbörden mellan åren 1961 och 1991. Detta kan bero på de ökade klimatologiska faktorerna som påverkar och ökar jorderosionen i de undersökta områdena. Ökade jorderosioner på en plats som tidigare inte haft några omfattande jorderosions händelser kan bero på mänskliga aktiviteter som förändrar jordens struktur, klimatologiska faktorer men, även på det organiskt bundna kolet i jordarten.
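The RUSLE analysis referred to in both language versions of the abstract estimates mean annual soil loss as a product of empirically derived factors; its standard form is:

```latex
A = R \cdot K \cdot LS \cdot C \cdot P
% A  : mean annual soil loss (e.g. t ha^{-1} yr^{-1})
% R  : rainfall erosivity factor
% K  : soil erodibility factor
% LS : slope length and steepness factor
% C  : cover-management factor
% P  : support practice factor
```

In a GIS setting each factor is typically computed as a raster layer and the layers are multiplied cell by cell; an increase in precipitation enters mainly through the erosivity factor R.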
APA, Harvard, Vancouver, ISO, and other styles
38

Harrison, Gillian. "New fuels, new rules? : modelling policies for the uptake of low carbon vehicles within an ethical framework." Thesis, University of Leeds, 2013. http://etheses.whiterose.ac.uk/6294/.

Full text
Abstract:
This research recognises a conflict between climate change mitigation and a lock-in of carbon-intense lifestyles. The concern is that, in the short term, policies to aid the transition to Low Carbon Vehicles (LCVs) may bring about unacceptable impacts on those amongst the worst-off. The approach that is taken is interdisciplinary, and suggests a novel approach to policy appraisal through combining ethics with a system dynamic model. An ethical framework is established that claims coercive LCV policies are permitted due to the harms of climate change, but certain groups require protection from impacts on car ownership. This protection could be similar to policy in other sectors, such as tax exemptions for the worst-off. The framework also improves on current practise by offering a new perspective on the limitations of models in policy-making. Two model case studies examine LCV policies, focused on subsidies and market regulation of electric vehicles. The first is relatively simple and develops basic skills and understanding, but still gives policy insight and explores the sensitivity of results. A more complex second model is used to understand the policy impacts in more detail, and in relation to ethical concerns. Combining the findings of both models suggests that subsidies are only successful in reducing emissions under a failing market and although regulation is more successful it raises the cost of all vehicles, disproportionately impacting the poorest in society. Combining these policies will allow a more even distribution of burdens. From this, recommendations are made that suggest the policy-maker needs to ensure affordability, protect the vulnerable and distribute burdens. Finally, a framework for an improved approach to modelling and policy appraisal, which incorporates ethics, is proposed. Although the focus of this work is on LCVs, the fundamental approach is transferable to other areas of transport and energy use.
APA, Harvard, Vancouver, ISO, and other styles
39

Bright, John Charles. "Optimal control of irrigation systems : an analysis of water allocation rules." Lincoln College, University of Canterbury, 1986. http://hdl.handle.net/10182/2089.

Full text
Abstract:
A feasibility study of an irrigation development proposal should include an analysis of the effects of water supply conditions on the degree to which development objectives are expected to be realised. A method of making this analysis was developed based on procedures for solving two problems. These were; (a) optimally allocating a property's available supply of water among competing crops, and, (b) optimally controlling an open channel distribution system to meet temporally and spatially varying water demand. The procedure developed for solving (a) was applied. A stochastic dynamic programming procedure was developed to optimally schedule the irrigation of a single crop, subject to constraints on the timing of water availability and total application depth. A second procedure was developed, employing a constrained differential dynamic programming algorithm, for determining optimal irrigation schedules for use with variable application depth systems, and when several crops compete for an intra-seasonally limited supply of water. This procedure was called, as frequently as water supply conditions allowed, to provide short-term irrigation schedules in a computer simulation of the optimal irrigation of several crops. An application system model was included in these procedures to transform a crop water-use production function into the required irrigation water-use production function. This transformation was a function of the application device type and the mean application depth. From an analysis of the on-property effects of water supply conditions, it was concluded that in order to achieve high economic and irrigation efficiencies, water supply conditions must be sufficiently flexible to allow the application system operator to vary the mean application depth but not necessarily the time periods of water availability. Additionally, irrigation scheduling procedures which seek economically optimum strategies offer the potential to achieve a maximum level of net benefit at levels of water availability significantly lower than has previously been used for design purposes.
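The stochastic dynamic programming procedure described above for scheduling the irrigation of a single crop can be summarised by a generic backward recursion over a state such as available soil water; the statement below is a textbook form of that recursion, not the thesis's specific formulation:

```latex
V_T(s) = r_T(s), \qquad
V_t(s) = \max_{a \in A_t(s)} \Big\{ r_t(s, a) + \mathbb{E}\big[\, V_{t+1}(s') \mid s, a \,\big] \Big\},
```

where s is the state (for example available soil water), a the irrigation decision constrained by the timing and depth of water availability, r_t the net benefit in period t, and the expectation is taken over stochastic weather between decision points.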
APA, Harvard, Vancouver, ISO, and other styles
40

Su, Yixiang. "Analyses of Two Ice Class Rules : for The Design Process of a Container Ship." Thesis, KTH, Marina system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214992.

Full text
Abstract:
During ice voyages, level ice and icebergs with huge inertia forces can cause large deformations and even damage to the ship hull structure. Hence the hull structure for ice voyages requires higher strength than that for open-water voyages. In this thesis, a container ship is re-designed for ice voyages. Generally, ice strength is evaluated using ice class rules; the IACS Polar Class rules and the FSICR are adopted here. Ice class rules are based on experience and experimental data, but so far there has been no exact formula or set of parameters to describe ice properties. In other words, the results from ice class rules include uncertainties. In order to improve physical understanding, non-linear FE simulations are carried out after the re-design, in which the ship collides with ice under different scenarios. The simulations are run in ANSYS Workbench Explicit Dynamics using the Autodyn solver. Afterwards, the results from the two design schemes are compared and analysed.
APA, Harvard, Vancouver, ISO, and other styles
41

Oshurko, Ievgeniia. "Knowledge representation and curation in hierarchies of graphs." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN024.

Full text
Abstract:
L'extraction automatique des intuitions et la construction de modèles computationnels à partir de connaissances sur des systèmes complexes repose largement sur le choix d'une représentation appropriée. Ce travail s'efforce de construire un cadre adapté pour la représentation de connaissances fragmentées sur des systèmes complexes et sa curation semi-automatisé.Un système de représentation des connaissances basé sur des hiérarchies de graphes liés à l'aide d'homomorphismes est proposé. Les graphes individuels représentent des fragments de connaissances distincts et les homomorphismes permettent de relier ces fragments. Nous nous concentrons sur la conception de mécanismes mathématiques,basés sur des approches algébriques de la réécriture de graphes, pour la transformation de graphes individuels dans des hiérarchies qui maintient des relations cohérentes entre eux.De tels mécanismes fournissent une piste d'audit transparente, ainsi qu'une infrastructure pour maintenir plusieurs versions des connaissances.La théorie développée est appliquée à la conception des schémas pour les bases de données orientée graphe qui fournissent des capacités de co-évolution schémas-données.Ensuite, cette théorie est utilisée dans la construction du cadre KAMI, qui permet la curation des connaissances sur la signalisation dans les cellules. KAMI propose des mécanismes pour une agrégation semi-automatisée de faits individuels sur les interactions protéine-protéine en corpus de connaissances, la réutilisation de ces connaissances pour l'instanciation de modèles de signalisation dans différents contextes cellulaires et la génération de modèles exécutables basés sur des règles
The task of automatically extracting insights or building computational models from knowledge on complex systems greatly relies on the choice of appropriate representation. This work makes an effort towards building a framework suitable for representation of fragmented knowledge on complex systems and its semi-automated curation: continuous collation, integration, annotation and revision. We propose a knowledge representation system based on hierarchies of graphs related with graph homomorphisms. Individual graphs situated in such hierarchies represent distinct fragments of knowledge and the homomorphisms allow relating these fragments. Their graphical structure can be used efficiently to express entities and their relations. We focus on the design of mathematical mechanisms, based on algebraic approaches to graph rewriting, for transformation of individual graphs in hierarchies that maintain consistent relations between them. Such mechanisms provide a transparent audit trail, as well as an infrastructure for maintaining multiple versions of knowledge. We describe how the developed theory can be used for building schema-aware graph databases that provide schema-data co-evolution capabilities. The proposed knowledge representation framework is used to build the KAMI (Knowledge Aggregation and Model Instantiation) framework for curation of cellular signalling knowledge. The framework allows for semi-automated aggregation of individual facts on protein-protein interactions into knowledge corpora, reuse of this knowledge for instantiation of signalling models in different cellular contexts and generation of executable rule-based models.
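A central ingredient of the framework described above is the graph homomorphism relating one fragment of knowledge to the graph above it in the hierarchy. The sketch below checks the defining property (every edge is mapped to an edge) for simple directed graphs; it is an illustration of the concept, not the thesis's implementation, and the toy "typing" example is invented.

```python
# A directed graph is given by (nodes, edges); a homomorphism h maps nodes of G
# to nodes of H such that every edge (u, v) of G is sent to an edge (h[u], h[v]) of H.

def is_homomorphism(g_nodes, g_edges, h_nodes, h_edges, mapping):
    """True if `mapping` is a graph homomorphism from G to H."""
    if set(mapping) != set(g_nodes) or not set(mapping.values()) <= set(h_nodes):
        return False
    return all((mapping[u], mapping[v]) in h_edges for (u, v) in g_edges)

# Toy example: a "typing" of a small fact graph by a schema graph.
schema_nodes = {"agent", "region"}
schema_edges = {("agent", "region")}           # agents may have regions
fact_nodes = {"protein_A", "site_x"}
fact_edges = {("protein_A", "site_x")}
typing = {"protein_A": "agent", "site_x": "region"}

print(is_homomorphism(fact_nodes, fact_edges, schema_nodes, schema_edges, typing))  # True
```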
APA, Harvard, Vancouver, ISO, and other styles
42

Bloomberg, Mark. "Modelling germination and early seedling growth of radiata pine." Lincoln University, 2008. http://hdl.handle.net/10182/681.

Full text
Abstract:
Background: This study seeks to model aspects of the regeneration of radiata pine (Pinus radiata D.Don) seedlings under a range of environmental conditions. This study investigated whether “hybrid” mechanistic models, which predict plant growth and development using empirical representations of plant physiological responses to the environment, could provide a realistic alternative to conventional empirical regeneration models. Objectives: The objectives of this study were to 1) identify the functional relationships between the environmental conditions controlling germination, establishment and growth of radiata pine seedlings, under a range of those environmental conditions as specified by temperature and available light and soil water; and 2) specify those functional relationships in hybrid mechanistic (“hybrid”) models. Methods: Radiata pine seedling germination and growth were measured under controlled environmental conditions (incubators for seed germination, growth cabinets for seedlings), and the results used to adapt, parameterise and test two published hybrid models: one for germination (the hydrothermal time model), and one for seedling growth in the first six months after germination, based on plant radiation use efficiency (RUE). The hydrothermal model was tested by incubating commercial radiata pine seeds under factorial combinations of temperature and water potentials where germination was likely to occur (12.5 ºC to 32.5 ºC and 0 MPa to –1.2 MPa). One hundred seeds were germinated for each factorial combination. The hydrothermal germination model was fitted to the germination data using non-linear regression models, which allowed simultaneous estimation of all model parameters. Seedlings were grown in controlled growth cabinets, and their RUE was calculated as the ratio of net primary production (NPP, specified in terms of an increase in oven-dry biomass) to PAR intercepted or absorbed by a seedling. Estimation of seedling RUE required the development of novel techniques for non-destructive estimation of seedling oven-dry weight, and for measurement of PAR interception by seedlings. The effect of varying PAR flux density on RUE was tested by measuring the RUE of seedlings grown at 125, 250 and 500 µmol m⁻² s⁻¹. In a second experiment, the effect of deficits in available soil water on RUE was tested by measuring the RUE of seedlings grown under 250 µmol m⁻² s⁻¹ PAR flux and at different levels of available soil water. Available soil water was specified by a soil moisture modifier factor (ƒθ) which ranges between 1 for moist soils and 0 for soils where there is insufficient water for seedling growth. This soil moisture modifier had not previously been applied in studies of tree seedling growth. Temperatures for both seedling experiments were a constant 17.5 ºC (day) and 12.5 ºC (night). Results: Hydrothermal time models accurately described radiata pine seed germination. Model predictions were closely correlated with actual seed germination over the full range of temperatures and water potentials where germination was likely to occur (12.5 ºC to 32.5 ºC and 0 MPa to –1.2 MPa). The minimum temperature for germination (base temperature) was 9.0 ºC. Optimum temperatures for germination ranged from ~20 ºC for slow-germinating seeds to ~27 ºC for the fastest-germinating seeds. The minimum water potential for seed germination varied within the seed population, with an approximately normal distribution (base water potential = –1.38 MPa, standard deviation of 0.48 MPa).
In the process of developing the model, a novel explanation for the decline in germination rates at supra-optimal temperatures was developed (Section 3.4.6), based on earlier models proposed by Alvarado & Bradford (2002) and Rowse & Finch-Savage (2003). This explanation was that the decline in germination rate was not driven just by temperature, but by accumulated hydrothermal time above the base temperature for germination (T₀). This in turn raised the base soil water potential (Ψb) towards 0, so that the reduction in germination rate arose from a reduced accumulation of hydro-time, rather than from thermal denaturation of enzymes facilitating germination – the conventional explanation for non-linear accumulation of thermal time at supra-optimal temperatures for plant development. Upwards adjustment (towards 0 MPa) of base water potentials of germinating seeds occurred also at very cold temperatures in combination with high water potentials. In both cases (very cold or else supra-optimal temperatures) this upwards adjustment in base water potentials prevented germination of part of the seed population, and is proposed as a mechanism which enables seed populations to “hedge their bets” when germinating under less than ideal germination conditions. RUE of young germinated radiata pine seedlings growing in a controlled growth cabinet was not significantly different over a range of constant PAR flux densities. Mean RUE’s were 3.22, 2.82 and 2.58 g MJ⁻¹ at 125, 250 and 500 µmol m⁻² s⁻¹ respectively. In the second experiment, the novel use of a soil moisture modifier (ƒθ) to predict RUE of seedlings subjected to water stress proved successful within a limited range of soil water stress conditions. Measured seedling transpiration and stomatal conductance were closely correlated but seedling photosynthesis was less correlated with available soil water. This result suggests that photosynthesis was not coupled with stomatal conductance when PAR flux was 250 µmol m⁻² s⁻¹, which is well below saturating irradiance for C₃ plants. Conclusions: The use of hybrid, quasi-mechanistic models to describe tree seedling growth has been seldom explored, which necessitated the development of novel experimental and analytical techniques for this study. These included a predictive model of germination decline at sub- and supra-optimal temperatures; a method for accurately estimating seedling dry weights under a range of PAR flux densities; and a novel method for estimating light interception by small seedlings. The work reported in this thesis showed that existing hybrid models (the hydrothermal time germination model and the RUE model) can be adapted to model germination and growth of radiata pine seedlings under controlled environmental conditions. Nonetheless, further research is needed before the models can be confidently used as an alternative to conventional empirical models to model regeneration in “real-world” forests. Research priorities are the performance of hydrothermal germination models under variable field conditions, and the use of the soil moisture modifier for seedlings growing on a range of soil textures and under a range of PAR fluxes.
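The hydrothermal time model at the heart of the germination study predicts the germinated fraction of a seed lot from accumulated hydrothermal time above the base temperature, with base water potentials normally distributed across the lot. The sketch below evaluates that standard prediction using the parameter values reported in the abstract (base temperature 9.0 ºC, mean base water potential −1.38 MPa, s.d. 0.48 MPa); the hydrothermal time constant is an invented placeholder, and the adjustments for supra-optimal and very cold temperatures discussed above are not included.

```python
from math import erf, sqrt

# Hydrothermal time model: a seed with base water potential psi_b germinates by
# time t (days) at temperature T and water potential psi once
#   (psi - psi_b) * (T - T_b) * t >= theta_HT.
# Base water potentials vary across the seed lot ~ Normal(MU_B, SD_B).
T_B = 9.0          # base temperature (deg C), from the abstract
MU_B = -1.38       # mean base water potential (MPa), from the abstract
SD_B = 0.48        # s.d. of base water potential (MPa), from the abstract
THETA_HT = 40.0    # hydrothermal time constant (MPa deg C days); illustrative only

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def germinated_fraction(temp_c, psi_mpa, t_days):
    """Predicted fraction of the seed lot germinated by t_days."""
    if temp_c <= T_B or t_days <= 0:
        return 0.0
    # Seeds germinate by t_days if their psi_b lies at or below this critical value.
    psi_b_crit = psi_mpa - THETA_HT / ((temp_c - T_B) * t_days)
    return normal_cdf((psi_b_crit - MU_B) / SD_B)

# Germination is faster and more complete at 0 MPa than under water stress.
for psi in (0.0, -0.6, -1.2):
    print(psi, [round(germinated_fraction(20.0, psi, d), 2) for d in (3, 7, 14, 28)])
```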
APA, Harvard, Vancouver, ISO, and other styles
43

Dzierzon, Helge. "Development of methods for characterizing plant and stand architectures and for model comparisons." Doctoral thesis, [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=970833229.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Peng, Yong. "Modelling and designing IT-enabled service systems driven by requirements and collaboration." Phd thesis, INSA de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00737773.

Full text
Abstract:
Compared to traditional business services, IT-enabled services provide more value to customers and providers by enabling traditional business services with Information and Communication Technologies (ICT) and delivering them via e-channels (i.e., Internet, Mobile networks). Although IT-enabled service systems help in co-creating value through collaboration with customers during service design and delivery, they raise challenges when we attempt to understand, design and produce innovative and intelligent IT-enabled services from a multi-disciplinary perspective by including businesses, technology and people for value addition and increasing benefits. Due to their social-technical nature and characteristics (i.e., Intangibility, Inseparability, Perishability, Simultaneity), IT-enabled services also lack common methods to systemize services driven by customer requirements and their satisfactions and co-produce them through ad-hoc collaboration. In this thesis, we propose a middle-out methodology to model, design and systemize advanced IT-enabled service driven by customer requirements and collaboration among all actors to jointly co-create service systems. From a multi-disciplinary perspective, the methodology relies on a multi-view models including a service system reference model, a requirement model and a collaboration model to ensure system flexibility and adaptability to requirement changes and take into account joint efforts and collaboration of all service actors. The reference model aims at a multi-disciplinary description of services (ontological, systematical and characteristic-based descriptions), and formalizing business knowledge related to different domains. As for the requirement model, customer needs are specified in common expressiveness language understandable by all service actors and made possible its top-down propagation throughout service lifecycle and among actors. The collaboration model advocates a data-driven approach, which increases business, technical and semantic interoperability and exhibits stability in comparison to business processes centric approaches. Finally, the collaboration hinges on delivery channels expressed as data flows and encapsulating business artifacts as per which business rules are generated to invoke underlying software components.
APA, Harvard, Vancouver, ISO, and other styles
45

Yildirim, Nurseda. "Time series modelling for wind power prediction and control: Clustering and association rules of data mining for CFD and time series data of power ramps." Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-245304.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Ruge, Marcus [Verfasser], and Hans Gerhard [Akademischer Betreuer] Strohe. "Stimmungen und Erwartungen im System der Märkte : eine Analyse mit DPLS-Modellen [Elektronische Ressource] / Marcus Ruge. Betreuer: Hans Gerhard Strohe." Potsdam : Universitätsbibliothek der Universität Potsdam, 2011. http://d-nb.info/1017643180/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Nguyen, Thanh Thi [Verfasser], and Georg [Akademischer Betreuer] Cadisch. "Land use change and its impact on soil properties using remote sensing, farmer decision rules and modelling in rural regions of Northern Vietnam / Thanh Thi Nguyen ; Betreuer: Georg Cadisch." Hohenheim : Kommunikations-, Informations- und Medienzentrum der Universität Hohenheim, 2019. http://d-nb.info/1180492188/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Wynn, Moe Thandar. "Semantics, verification, and implementation of workflows with cancellation regions and OR-joins." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16324/.

Full text
Abstract:
Workflow systems aim to provide automated support for the conduct of certain business processes. Workflow systems are driven by workflow specifications which, among other things, capture the execution interdependencies between various activities. These interdependencies are modelled by means of different control flow constructs, e.g., sequence, choice, parallelism and synchronisation. It has been shown in the research on workflow patterns that the support for and the interpretation of various control flow constructs varies substantially across workflow systems. Two of the most problematic patterns relate to the OR-join and to cancellation. An OR-join is used in situations where we need to model "wait and see" behaviour for synchronisation. Different approaches assign a different (often only intuitive) semantics to this type of join, though they share the common theme that synchronisation is only to be performed for active paths. Depending on context assumptions, this behaviour may be relatively easy to deal with, though in general its semantics is complicated, both from a definition point of view (in terms of formally capturing a desired intuitive semantics) and from a computational point of view (how does one determine whether an OR-join is enabled?). Many systems and languages struggle with the semantics and implementation of the OR-join because its non-local semantics requires synchronisation that depends on an analysis of future execution paths, which may require non-trivial reasoning. The presence of cancellation features and other OR-joins in a workflow further complicates the formal semantics of the OR-join. The cancellation feature is commonly used to model external events that can change the behaviour of a running workflow. It can be used either to disable activities in certain parts of a workflow or to stop currently running activities. Even though it is possible to cancel activities in workflow systems using some sort of abort function, many workflow systems do not provide direct support for this feature in the workflow language. Sometimes cancellation affects only a selected part of a workflow and other activities can continue after the cancellation action. As cancellation occurs naturally in business scenarios, comprehensive support in a workflow language is desirable.

We take on the challenge of providing formal semantics, verification techniques and an implementation for workflows with these features. This thesis addresses three interrelated issues for workflows with cancellation regions and OR-joins. The concept of the OR-join is examined in detail in the context of the workflow language YAWL, a powerful workflow language designed to support a collection of workflow patterns and inspired by Petri nets. The OR-join semantics has been redesigned to represent a general, formal, and decidable approach for workflows in the presence of cancellation regions and other OR-joins. This approach exploits a proposed link between YAWL and reset nets, a variant of Petri nets with a special type of arc that can remove all tokens from a place. Next, we explore verification techniques for workflows with cancellation regions and OR-joins. Four structural properties have been identified and a verification approach that exploits coverability and reachability notions from reset nets has been proposed. The work on verification techniques has highlighted potential problems with calculating state spaces for large workflows. Applying reduction rules before carrying out verification can decrease the size of the problem by cutting down the size of the workflow that needs to be examined while preserving the essential properties. Therefore, we have extended the work on verification by proposing reduction rules for reset nets and for YAWL nets with and without OR-joins. The proposed OR-join semantics and the proposed verification approach have been implemented in the YAWL environment.
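To make the reset-net idea concrete, here is a minimal Python sketch of the firing rule for a Petri net extended with reset arcs (an arc that empties its place when the transition fires). It illustrates the formalism that the OR-join and cancellation semantics are mapped onto, not the YAWL implementation itself; the example net and its place names are invented.

```python
from collections import Counter

def fire(marking, transition):
    """Fire a reset-net transition if enabled, returning the new marking (or None).

    marking    -- Counter mapping place -> token count
    transition -- dict with 'inputs', 'outputs' (lists of places) and
                  'resets' (places emptied of all tokens when the transition fires)
    """
    needed = Counter(transition["inputs"])
    if any(marking[p] < n for p, n in needed.items()):
        return None                      # not enabled
    new = Counter(marking)
    new.subtract(needed)                 # consume input tokens
    for p in transition["resets"]:
        new[p] = 0                       # reset arc: remove ALL tokens from p
    for p in transition["outputs"]:
        new[p] += 1                      # produce output tokens
    return +new                          # keep only positive counts

# Invented example: a cancellation transition empties the place 'task_B_running'.
m = Counter({"cancel_signal": 1, "task_B_running": 3})
t_cancel = {"inputs": ["cancel_signal"], "outputs": ["cancelled"],
            "resets": ["task_B_running"]}
print(fire(m, t_cancel))   # Counter({'cancelled': 1})
```

Because a reset arc can remove an unbounded number of tokens, questions such as "is this OR-join ever enabled?" are answered with coverability rather than plain reachability analysis, which is the route the abstract describes.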
APA, Harvard, Vancouver, ISO, and other styles
49

DELPIAZZO, ELISA. "La partecipazione del Mozambico al SADC. Un processo di liberalizzazione attraverso diversi modelli e diverse chiusure." Doctoral thesis, Università Cattolica del Sacro Cuore, 2011. http://hdl.handle.net/10280/1109.

Full text
Abstract:
The modeller's choice of closure rule affects a CGE model's results and, consequently, its policy prescriptions. The aim of this thesis is to detect and assess this issue, both through a theoretical discussion and through an empirical application. Starting from Amartya Sen's 1963 paper, the literature offers many contributions on this topic, although the closure-rule problem is no longer central in the CGE debate. After a brief introduction to CGE models, their development and their structure, a series of simple maquettes is presented. Their role is to introduce the concept of closure, to explain how closures affect final outcomes, and to show how this choice by the modeller is strictly connected to the macroeconomic foundations of the economic system. After the theory, we move to the real world, analysing the impact of the SADC regional trade agreement on the Mozambican economy through different models (Neoclassical, "Bastard Keynesian", and Structuralist/Post-Keynesian) and through different closure rules for the macro-aggregates (private, public and foreign savings). The Mozambican CGE models are calibrated on a 2003 Social Accounting Matrix (SAM) and are solved using GAMS/MPSGE. The outcomes show that closure rules affect the results, and each model presents its own set of policy prescriptions for implementing the SADC agreement.
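The closure-rule issue referred to above can be stated compactly: every CGE model must satisfy the economy-wide savings-investment balance, and the closure decides which variable adjusts to make it hold. The notation below is a generic textbook statement of that balance, not the thesis's own model equations.

```latex
% Savings--investment balance that any macro closure must satisfy
% S_p: private savings, S_g: government savings, S_f: foreign savings, I: investment
\[
  S_p + S_g + S_f = I
\]
```

Roughly speaking, a Neoclassical closure lets investment adjust to savings determined at full employment, a Keynesian-type closure fixes investment and lets output or employment adjust, and Structuralist closures let the income distribution (and hence the savings rate) adjust; the same trade shock can therefore yield different simulated outcomes under each closure.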
APA, Harvard, Vancouver, ISO, and other styles
50

Elias, Rodrigues Yuri. "Unification de l’hétérogénéité expérimentale par un modèle géométrique de la plasticité synaptique." Electronic Thesis or Diss., Université Côte d'Azur, 2021. http://www.theses.fr/2021COAZ6013.

Full text
Abstract:
How learning occurs has been a long-standing question in neuroscience. Since the first demonstration that the strength of the connections wiring up neurons can be persistent, the study of neuronal connections, the synapses, has been a path towards understanding memory formation. In the 1970s, the first electrophysiological methods for modifying synaptic strength were discovered, providing evidence that stimulated synapses are plastic. Such a form of synaptic plasticity had been predicted two decades earlier by a theoretical synaptic rule coined by the neuropsychologist Donald Hebb. The possibility that a devised rule could explain memory motivated the birth of theories providing new questions and mechanistic representations of the brain's functioning. The diversification of experimental techniques allowed researchers to investigate the nature of synaptic rules in depth. However, the heterogeneity of the experimental conditions adopted by different laboratories meant that the same stimulation pattern could produce different synaptic modifications. The observed heterogeneity in methods and outcomes has hindered the formalization of a coherent view of how synaptic plasticity works. To fill this gap, this thesis develops a stochastic computational model of the rat CA3-CA1 glutamatergic synapse to explain, and gain insight into, how experimental conditions affect plasticity outcomes. I uncovered a new plasticity rule that accounts for methodological differences such as developmental stage and the influence of the extracellular medium and temperature on the synaptic plasticity outcome. The model relies on an expanded version of previous methods for predicting synaptic plasticity, modified to handle the combined dynamics. This is achieved by introducing a geometrical readout that interprets the dynamics of two calcium-binding enzymes controlling the induction of plasticity. In this way, the model covers classical and recent stimulation paradigms (e.g. STDP, FDP, BTSP) using a single parameter set. Finally, the model's robustness is tested against in vivo-like spike-time irregularity, showing how different protocols converge to the same outcome when regularity is altered. The model yields experimentally testable predictions because it links the simulated variables to the specificity needed to describe a plasticity protocol. Although the model is specific to the CA3-CA1 synapse, the study's insights may be generalized to other synapse types, enabling a deeper understanding of the rules of synaptic plasticity and learning.
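A rough Python sketch of the "geometrical readout" idea: the joint activity of the two calcium-binding enzymes is followed as a trajectory in a 2-D plane, and the plasticity outcome depends on how long the trajectory dwells inside predefined regions of that plane. The region boundaries, thresholds and example trajectory below are invented for illustration and are not the thesis's fitted values.

```python
def geometrical_readout(trajectory, dt, ltp_region, ltd_region, threshold):
    """Classify a plasticity outcome from a 2-D enzyme-activity trajectory.

    trajectory -- sequence of (enzyme_1, enzyme_2) activity pairs sampled every dt seconds
    ltp_region / ltd_region -- predicates over a point, defining regions of the plane
    threshold  -- minimum dwell time (s) in a region needed to trigger that outcome
    """
    t_ltp = sum(dt for p in trajectory if ltp_region(*p))
    t_ltd = sum(dt for p in trajectory if ltd_region(*p))
    if t_ltp >= threshold and t_ltp >= t_ltd:
        return "LTP"
    if t_ltd >= threshold:
        return "LTD"
    return "no change"

# Invented regions: high joint activity -> LTP; moderate enzyme-2 activity alone -> LTD.
ltp = lambda a, b: a > 0.7 and b > 0.5
ltd = lambda a, b: b > 0.5 and a <= 0.7
trace = [(0.8, 0.6)] * 120 + [(0.2, 0.1)] * 80   # 200 samples at dt = 0.1 s
print(geometrical_readout(trace, dt=0.1, ltp_region=ltp, ltd_region=ltd, threshold=10.0))
```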
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography