Dissertations / Theses on the topic 'Processes and schemes'

Consult the top 50 dissertations / theses for your research on the topic 'Processes and schemes.'

1

Mina, Francesco. "On Markovian approximation schemes of jump processes." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/48049.

Full text
Abstract:
The topic of this thesis is the study of approximation schemes for jump processes whose driving noise is a Lévy process. In the first part of our work we study properties of the driving noise. We present a novel approximation method for the density of a Lévy process. The scheme makes use of a continuous-time Markov chain defined through a careful analysis of the generator. We identify the rate of convergence and carry out a detailed analysis of the error. We also analyse the case of multidimensional Lévy processes in the form of subordinate Brownian motion. We provide a weak scheme to approximate the density that does not rely on discretising the Lévy measure and results in better convergence rates. The second part of the thesis concerns the analysis of schemes for BSDEs driven by Brownian motion and a Poisson random measure. Such equations appear naturally in hedging problems and stochastic control, and they provide a natural probabilistic approach to the solution of certain semilinear PIDEs. While the numerical approximation of the continuous case has been studied in the literature, there has been relatively little progress in the study of such equations with a discontinuous driver. We present a weak Monte Carlo scheme in this setting based on Picard iterations. We discuss its convergence and provide a numerical illustration.
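The thesis's scheme and its error analysis are far more involved; as a rough sketch of the underlying idea (a pure-diffusion simplification with hypothetical grid parameters), one can discretise the generator on a lattice and exponentiate it to approximate the transition density:

```python
import numpy as np
from scipy.linalg import expm

def ctmc_density(sigma=1.0, t=0.5, h=0.1, N=60):
    """Approximate the density of sigma*B_t at the lattice points
    {i*h : i = -N..N} by a continuous-time Markov chain.

    Nearest-neighbour jump rates sigma^2/(2h^2) reproduce the diffusion
    generator (sigma^2/2) d^2/dx^2 up to O(h^2)."""
    n = 2 * N + 1
    Q = np.zeros((n, n))
    rate = sigma**2 / (2 * h**2)
    for i in range(n):
        if i > 0:
            Q[i, i - 1] = rate
        if i < n - 1:
            Q[i, i + 1] = rate
        Q[i, i] = -Q[i].sum()          # conservative generator row
    P = expm(t * Q)                    # transition semigroup at time t
    return P[N] / h                    # row N: law started from 0, as a density

dens = ctmc_density()
# exact Gaussian density of B_{0.5} at the origin
exact = 1.0 / np.sqrt(2 * np.pi * 0.5)
```

Adding discretised jump rates to the off-diagonal entries of the generator extends the same construction to processes with jumps, which is the regime the thesis actually analyses.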
APA, Harvard, Vancouver, ISO, and other styles
2

Nüsken, Nikolas. "Topics in sampling schemes based on Markov processes." Thesis, Imperial College London, 2018. http://hdl.handle.net/10044/1/63868.

Full text
Abstract:
In this thesis we consider several topics related to the construction of optimal Markovian dynamics in the context of sampling from high-dimensional probability distributions. Firstly, we introduce and analyse Langevin samplers that consist of perturbations of the standard overdamped and underdamped Langevin dynamics. The perturbed dynamics is such that its invariant measure is the same as that of the unperturbed dynamics. We show that appropriate choices of the perturbations can lead to samplers that have improved properties, at least in terms of reducing the asymptotic variance. We present a detailed analysis of the new Langevin samplers for Gaussian target distributions. Our theoretical results are supported by numerical experiments with non-Gaussian target measures. Secondly, we present a general framework for the analysis and development of ensemble based methods, encompassing both diffusion and piecewise deterministic Markov processes. For many performance criteria of interest, including the asymptotic variance, the task of finding efficient couplings can be phrased in terms of problems related to the theory of optimal transportation. We investigate general structural properties, proving a singularity theorem that has both geometric and probabilistic interpretations. Moreover, we show that those problems can often be solved approximately and support our findings with numerical experiments. Addressing the convergence to equilibrium of coupled processes we furthermore derive a modified Poincaré inequality. Finally, under some conditions, we prove exponential ergodicity for the zigzag process using hypocoercivity techniques.
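For the overdamped case, a standard example of such a perturbation adds a skew-symmetric matrix J to the drift, giving dX = -(I + γJ)∇V(X) dt + √2 dW; this leaves e^{-V} invariant because tr(J ∇²V) = 0. A minimal Euler-Maruyama sketch for a Gaussian target (illustrative parameters, not the thesis's experiments):

```python
import numpy as np

def perturbed_langevin(gamma=1.0, dt=0.01, n_steps=200_000, seed=0):
    """Euler-Maruyama for dX = -(I + gamma*J) grad V(X) dt + sqrt(2) dW,
    with V(x) = x^T Sigma^{-1} x / 2 and J skew-symmetric, so that
    N(0, Sigma) remains the invariant measure for every gamma."""
    rng = np.random.default_rng(seed)
    Sigma_inv = np.diag([1.0, 0.5])          # target N(0, diag(1, 2))
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew-symmetric perturbation
    A = (np.eye(2) + gamma * J) @ Sigma_inv
    x = np.zeros(2)
    samples = np.empty((n_steps, 2))
    for k in range(n_steps):
        x = x - dt * (A @ x) + np.sqrt(2 * dt) * rng.standard_normal(2)
        samples[k] = x
    return samples[n_steps // 10:]           # discard burn-in

samples = perturbed_langevin()
emp_var = samples.var(axis=0)                # should approach diag(Sigma)
```

Up to the O(dt) Euler bias, the empirical variances recover the target covariance regardless of γ; the thesis's point is that a good choice of γ can reduce the asymptotic variance of ergodic averages.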
3

Beaumont, David. "St. Peter's Cathedral, Adelaide : processes provenances and architectural schemes /." Title page, contents and introduction only, 1997. http://web4.library.adelaide.edu.au/theses/09ARCHSB/09archsbb379.pdf.

Full text
4

Karimi, Pour Fatemeh. "Health-aware predictive control schemes based on industrial processes." Doctoral thesis, TDX (Tesis Doctorals en Xarxa), 2020. http://hdl.handle.net/10803/673045.

Full text
Abstract:
This thesis aims to provide theoretical and practical contributions to the safety and control of industrial systems, especially in the mathematical form of uncertain systems. The research is motivated by real applications, such as a pasteurization plant, water networks and autonomous systems, each of which requires a specific control system able to take into account its particular features and operating limits in the presence of uncertainties related to operation and component breakdowns. Since most real systems have nonlinear behaviour, they can be approximated by polytopic linear uncertain models such as Linear Parameter Varying (LPV) and Takagi-Sugeno (TS) models. Therefore, a new economic Model Predictive Control (MPC) approach based on LPV/TS models is proposed, and the stability of the proposed approach is certified by using a region constraint on the terminal state. Besides, the MPC-LPV strategy is extended to systems with varying delays affecting states and inputs. The control approach allows the controller to accommodate changes in the scheduling parameters and the delay. By computing the prediction of the state variables and delay along a prediction horizon, the system model can be modified according to the estimated state and delay at each time instant. To increase system reliability, anticipate the appearance of faults and reduce operational costs, actuator health monitoring should be considered. Regarding the several types of system failure, different strategies are studied for modelling them. First, damage is assessed with the rainflow-counting algorithm, which allows estimating a component's fatigue, and the control objective is modified by adding an extra criterion that takes the accumulated damage into account. Besides, two different health-aware economic predictive control strategies that aim to minimize component damage are presented.
Then, an economic health-aware MPC controller is developed that computes component and system reliability in the MPC model using an LPV modelling approach and maximizes system availability by estimating system reliability. Additionally, another improvement considers chance-constraint programming to compute an optimal list replenishment policy based on a desired risk-acceptability level, managing to dynamically designate safety stocks in flow-based networks to satisfy non-stationary flow demands. Finally, an innovative health-aware control approach for autonomous racing vehicles is proposed that simultaneously drives the vehicle at its limits and follows the desired path while maximizing the battery's remaining useful life (RUL). The control design is divided into two layers with different time scales: a path planner and a controller. The proposed approach is formulated as an optimal online robust LMI-based MPC derived from Lyapunov stability, with the controller gain synthesis solved as an LPV-LQR problem in LMI formulation with integral action for tracking the trajectory.
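As a toy illustration of the health-aware idea (not the thesis's LPV/TS economic MPC; scalar dynamics, and all numbers are hypothetical), one can add an extra actuator-wear penalty to a finite-horizon quadratic controller and observe that accumulated control effort, a crude proxy for damage, drops:

```python
def lqr_gain(a, b, q, r, iters=50):
    """Riccati recursion for scalar dynamics x_{k+1} = a x_k + b u_k
    with stage cost q x^2 + r u^2; returns the (near-stationary) gain."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)
        p = q + a * p * a - a * p * b * k
    return k

def closed_loop(r_damage, x0=1.0, steps=40):
    """Simulate the closed loop with an extra 'damage' weight r_damage
    on the input, returning total squared effort and the final state."""
    a, b, q, r = 1.1, 1.0, 1.0, 0.1
    k = lqr_gain(a, b, q, r + r_damage)
    x, wear = x0, 0.0
    for _ in range(steps):
        u = -k * x
        wear += u * u          # quadratic proxy for actuator wear
        x = a * x + b * u
    return wear, abs(x)

wear_plain, x_plain = closed_loop(r_damage=0.0)
wear_aware, x_aware = closed_loop(r_damage=1.0)
```

Both controllers still stabilise the unstable plant (a = 1.1), but the damage-aware one trades slower convergence for gentler actuator use, which is the design trade-off the thesis formalises with rainflow-based fatigue estimates.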
5

Chang, Peter W. "Analysis of contracting processes, internal controls, and procurement fraud schemes." Monterey, California: Naval Postgraduate School, 2013. http://hdl.handle.net/10945/34642.

Full text
Abstract:
Approved for public release; distribution is unlimited
Contracting continues to play an important role in the Department of Defense (DoD) as a means to acquire a wide array of systems, supplies, and services. More than half of the DoD's budget is spent through contracts. With these large sums spent comes the possibility of fraud in contracting that can subvert the process, causing waste and possibly impeding mission accomplishment. The purpose of this research was to analyze the DoD contracting workforce's level of fraud knowledge, according to the six phases of contract management, five internal control components, and six procurement fraud scheme categories. This was done through the deployment of a survey consisting of fraud knowledge and organizational perception questions. The survey was completed by contracting personnel at the U.S. Army Mission and Installation Contracting Command. The results displayed differences in fraud awareness and perception among the different contracting phases, internal control components, and procurement fraud scheme categories. Recommendations for improving fraud awareness were also presented, as well as areas for further research.
6

Krishnamurthy, Shalini B. "Relaxation schemes for multiphase, multicomponent flow in gas injection processes /." May be available electronically:, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
7

Edward, Viktor. "Quantization of stochastic processes with applications on Euler-Maruyama schemes." Thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-264878.

Full text
Abstract:
This thesis investigates so-called quantizations of continuous random variables. A quantization of a continuous random variable is a discrete random variable that approximates the continuous one by having similar properties, often by sharing weak convergence. A measure of how well the quantization approximates the continuous variable is introduced, and methods for generating quantizations are developed. The connection between quantization of the normal distribution and the Hermite polynomials is discussed, and methods for generating optimal quantizations are suggested. An observed connection between finite differences and quantization is examined and identified to be just a special case. Finally, a method of turning an Euler-Maruyama scheme into a deterministic problem by quantization is presented along with a convergence analysis. The method is reminiscent of standard tree methods, which can be used for e.g. option pricing, but has a few advantages, such as there being no requirements on the grid point placement, which can even change at each time step.
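The link to Hermite polynomials can be made concrete: Gauss-Hermite nodes and weights define a discrete random variable that matches the moments of N(0, 1) up to order 2n-1. A small sketch (n = 4 chosen purely for illustration):

```python
import numpy as np

def gaussian_quantization(n_points=4):
    """Discrete approximation (quantization) of N(0,1) built from
    Gauss-Hermite quadrature.  The atoms are rescaled roots of the
    Hermite polynomial H_n; the discrete law reproduces the Gaussian
    moments up to order 2n - 1."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_points)
    atoms = nodes * np.sqrt(2.0)       # change of variables e^{-x^2} -> N(0,1)
    probs = weights / np.sqrt(np.pi)   # normalise weights to probabilities
    return atoms, probs

atoms, probs = gaussian_quantization()
mean = np.dot(probs, atoms)
var = np.dot(probs, atoms**2)
fourth = np.dot(probs, atoms**4)       # Gaussian kurtosis: E X^4 = 3
```

With n = 4 the first seven moments are matched exactly, so the four-point law already reproduces mean 0, variance 1 and fourth moment 3 to machine precision.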
8

Kant, Latha Arun. "Analysis of cell-loss processes and restoration schemes in ATM networks." Diss., The University of Arizona, 1996. http://hdl.handle.net/10150/282222.

Full text
Abstract:
The success of the emerging ATM networks depends both on switch performance at the cell level and on routing strategies at the call level. In this dissertation, we address both issues. At the cell level, we propose a measure that captures cell loss behavior and analyze ATM switch performance by computing the distribution of consecutive cell losses. The extremely low loss probability requirements of an ATM switch preclude the use of simulation, calling for analytic and numerical methods. The latter involve the construction and solution of the stochastic processes underlying the switch and its workload. Since detailed stochastic process representations of these are on the order of tens to hundreds of thousands of states, we use a tool called UltraSAN, which allows the automatic construction and solution of these detailed stochastic processes. We also compute the distribution of the queue length rather than just the average queue length. At the call level, we propose two restoration schemes and a method to analyze their performance in the case of failures in ATM transport networks. The proposed restoration strategies utilize the existing portions of the network after a link or switch failure rather than relying on redundancy while restoring the affected calls. We also propose an efficient routing scheme for multi-class traffic with widely differing call characteristics. We develop an approach based on Markov decision theory and propose an adaptive bandwidth protection strategy to prevent any specific application type from monopolizing the link resources.
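At a vastly smaller scale than the UltraSAN models, the flavour of the cell-level analysis can be sketched with a single M/M/1/K buffer, whose stationary queue-length distribution and loss probability follow directly from the birth-death generator (parameters hypothetical):

```python
import numpy as np

def mm1k_stationary(lam=0.8, mu=1.0, K=10):
    """Stationary queue-length distribution of an M/M/1/K loss system,
    obtained by solving pi Q = 0 for the birth-death generator Q."""
    n = K + 1
    Q = np.zeros((n, n))
    for i in range(K):
        Q[i, i + 1] = lam              # arrival when the buffer is not full
        Q[i + 1, i] = mu               # service completion
    for i in range(n):
        Q[i, i] = -Q[i].sum()
    # solve pi Q = 0 together with the normalisation sum(pi) = 1
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

pi = mm1k_stationary()
loss_prob = pi[-1]                     # an arrival is lost when the buffer is full
rho = 0.8
closed_form = rho**10 * (1 - rho) / (1 - rho**11)
```

The numerical solution reproduces the textbook formula pi_K = rho^K (1 - rho) / (1 - rho^(K+1)); the dissertation's switch and workload models are orders of magnitude larger and track loss runs, not just the loss probability.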
9

Cvetkovic, Nada [Verfasser]. "Convergent discretisation schemes for transition path theory for diffusion processes / Nada Cvetkovic." Berlin : Freie Universität Berlin, 2020. http://d-nb.info/1204924570/34.

Full text
10

Chandramouli, Yegnanarayanan. "Data-analytic and monitoring schemes for a class of discrete point processes." Diss., The University of Arizona, 1991. http://hdl.handle.net/10150/185347.

Full text
Abstract:
A point process model for the packet stream arising in teletraffic processes is the discrete, non-negative integer-valued, stationary process introduced by Neuts and Pearce. In this thesis, we examine an empirical approach to developing a monitoring scheme for that point process. Monitoring is a procedure of tracking a stochastic process to identify quickly the development of anomalous situations in its evolution and detect their assignable causes. Further, we examine a data-analytic scheme to evaluate the order of a Markov chain that quantifies the local dependence embedded in the point process, as well as Walsh spectral techniques.
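One common data-analytic device for evaluating the order of such a chain is an information criterion. The sketch below (not the scheme developed in the thesis; the chain and sample size are invented) compares order-0 and order-1 fits by BIC:

```python
import numpy as np

def bic_markov_order(seq, n_states=2):
    """Compare order-0 (i.i.d.) and order-1 Markov fits of a state
    sequence via BIC = -2 log L + k log N (lower is better)."""
    seq = np.asarray(seq)
    N = len(seq)
    # order 0: independent draws from the empirical marginal
    counts0 = np.bincount(seq, minlength=n_states)
    p0 = counts0 / N
    ll0 = np.sum(counts0 * np.log(np.where(p0 > 0, p0, 1)))
    bic0 = -2 * ll0 + (n_states - 1) * np.log(N)
    # order 1: empirical transition matrix
    counts1 = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        counts1[a, b] += 1
    rows = counts1.sum(axis=1, keepdims=True)
    p1 = counts1 / np.where(rows > 0, rows, 1)
    ll1 = np.sum(counts1 * np.log(np.where(p1 > 0, p1, 1)))
    bic1 = -2 * ll1 + n_states * (n_states - 1) * np.log(N)
    return bic0, bic1

# a strongly dependent chain: it stays in its state with probability 0.9
rng = np.random.default_rng(1)
P = np.array([[0.9, 0.1], [0.1, 0.9]])
seq = [0]
for _ in range(2000):
    seq.append(rng.choice(2, p=P[seq[-1]]))
bic0, bic1 = bic_markov_order(seq)
```

For this strongly dependent sample the order-1 model wins decisively, which is exactly the kind of local dependence the thesis's scheme is designed to quantify.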
11

Zebian, Hussam. "Design, integration schemes, and optimization of conventional and pressurized oxy-coal power generation processes." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/87986.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 215-222).
Efficient and clean electricity generation is a major challenge for today's world. Multivariable optimization is shown to be essential in unveiling the true potential and the high efficiency of pressurized oxy-coal combustion with carbon capture and sequestration for a zero-emissions power plant (Zebian and Mitsos 2011). Besides the increase in efficiency, optimization with realistic operating conditions and specifications also shows a decrease in the capital cost. Elaborating on the concept of increasing the performance of the process and the power generation efficiency, as part of this Ph.D. thesis, new criteria for the optimum operation of regenerative Rankine cycles are presented; these criteria govern the operation of closed and open feedwater heaters, and are proven (partly analytically and partly numerically) to result in a more efficient cycle than the conventional rules of thumb currently practiced in designing and operating Rankine cycles. Simply put, the pressure and mass flow rate of the bleed streams must be selected so as to have equal pinch temperatures in the feedwater heaters. The criteria are readily applicable to existing and new power plants, with no associated costs or retrofitting requirements, contributing a significant efficiency increase and major economic and environmental advantages. A case study shows an efficiency increase of 0.4 percentage points without capital cost increase compared to a standard design; such an efficiency increase corresponds to on the order of $40 billion in annual savings if applied to all Rankine cycles worldwide. The developed criteria allow for more reliable and trustworthy optimization; thus, four additional aspects of clean power generation from coal are investigated. First, design and optimization of pressurized oxy-coal combustion at the systems level is performed while utilizing a direct contact separation column (DCSC) instead of a surface heat exchanger for more reliable and durable thermal recovery.
Despite the lower effectiveness compared to a surface heat exchanger, optimization employing newly developed optimal operating criteria that govern the DCSC allows for efficient operation, 3.8 percentage points higher than the base-case operation; after optimization, the efficiency of the process utilizing a DCSC is smaller than that utilizing a surface heat exchanger by only 0.32 percentage points. Optimization also shows a reduction in capital costs through process intensification and by not requiring the first flue gas compressor in the carbon sequestration unit. Second, in order to eliminate performance and economic risks that arise due to uncertainties in the conditions that a power generation process may be subjected to, the designs and operations that allow maximum overall performance of the process under all possible changes in operating conditions are investigated. Therefore, optimization of the pressurized oxy-coal combustion process is performed under uncertainty in coal type, ranging from Venezuelan and Indonesian coals to the lower-grade South African Douglas Premium and Kleinkopje coals, and in ambient conditions, with up to a 10°C difference in the temperature of the cooling water. Using hierarchic optimization and stochastic programming, the latter shown to be unnecessary, an ideally flexible design is attained, whereby a single design achieves the maximum possible performance of the process for any set of input parameters. While in general a process designed for a specific coal has low performance when the utilized coal is changed, for the pressurized oxy-coal combustion process presented herein it is demonstrated that designing (and optimizing) while taking into consideration the different coal types utilized results, for each coal, in performance equal to the maximum obtained by a design dedicated to that coal. The third aspect considered is flexibility with respect to load variation.
Particularly with the increase of power generation from intermittent renewable energy sources, coal power plants should operate at loads far from nominal, down to 35%. In general this results in efficiency significantly lower than the optimum. Therefore, while keeping the turbine expansion line design fixed to that of the nominal load in order to allow for a full range of thermal load operations, an elaborate study of the variations in thermal load for pressurized oxy-coal combustion is performed. Here too, optimization of design and operation taking into consideration that the load is not fixed results in a process that is flexible to the thermal load; the range of thermal load considered is 30-100%. The fourth aspect considered is a novel design for the heat recovery steam generator (HRSG), which is an essential part of coal power plants, particularly oxy-coal combustion. It is the site of high-temperature thermal energy transfer, and is shown to have potential for significant improvements in its design and operation. A new design and operation of the HRSG that allow for a simultaneous reduction in the area and the flow losses are proposed: the hot combustion gas is split prior to entering the HRSG and prior to dilution with the recycled flue gas that controls its temperature as dictated by the HRSG maximum allowed temperature. The main combustion gas flow proceeds to the HRSG inlet and requires smaller amounts of dilution and recycling power compared to the conventional no-splitting operation. The split fraction is introduced downstream at an intermediate location in the HRSG; its introduction raises the temperature of the flue gas and the temperature difference between the hot and cold streams of the HRSG, in particular avoiding the small temperature differences which require the most heat transfer area.
Results include an area reduction of 37% without change in the compensation power requirements, or a decrease in the compensation power requirements of 18% (corresponding to 0.15 percentage points of cycle efficiency) while simultaneously reducing the area by 12%.
by Hussam Zebian.
Ph. D.
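The equal-pinch criterion for feedwater heaters can be illustrated with a deliberately crude two-heater energy balance (constant cp, fully condensed bleeds, no drain cascading; every temperature and property below is a made-up round number, whereas the thesis proves the criterion for realistic cycles):

```python
def equal_pinch_split(T0=120.0, T2=200.0, Tsat1=170.0, Tsat2=215.0,
                      m_fw=100.0, cp=4.3, h_fg=2000.0):
    """Toy two-feedwater-heater train heating feedwater from T0 to T2.

    Pinch of heater i is Tsat_i - T_out_i.  The last heater's pinch is
    fixed by its target outlet T2; the equal-pinch rule then fixes the
    intermediate feedwater temperature T1, and a simple energy balance
    sizes each bleed flow (m_bleed * h_fg = m_fw * cp * dT)."""
    pinch = Tsat2 - T2                 # pinch of the downstream heater
    T1 = Tsat1 - pinch                 # equal-pinch choice for the first
    m_bleed1 = m_fw * cp * (T1 - T0) / h_fg
    m_bleed2 = m_fw * cp * (T2 - T1) / h_fg
    return T1, pinch, m_bleed1, m_bleed2

T1, pinch, mb1, mb2 = equal_pinch_split()
```

With the round numbers above, both heaters end up with a 15-degree pinch and the intermediate feedwater temperature is 155; the thesis shows that enforcing this equality across the heater train, by choosing bleed pressures and flows jointly, maximizes cycle efficiency.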
12

Carpio, Juan. "Unselfish incentive schemes : a tool to influence peoples' preferences in adoption and diffusion processes." Thesis, University of Warwick, 2017. http://wrap.warwick.ac.uk/91114/.

Full text
Abstract:
It is in the interest of many different types of organisations to encourage the adoption of specific products or desirable behaviours. Such a goal has commonly been pursued by offering economic incentives with the aim of making the desired action more appealing. This type of strategy is based on standard economic theory, which assumes that people behave in ways that maximise only their own economic benefit. However, behavioural scientists have suggested that people frequently make decisions that go against their own benefit and are affected by emotions, biases and social preferences, all of which may lead to the failure of traditional economic incentives. In the present work, prosocial motives are incorporated into the design of incentive schemes by allowing participants to give away part of their rewards to relevant peers. We tested whether such a strategy can outperform the traditional “selfish schemes”. Specifically, four experiments using hypothetical scenarios were performed, in which participants’ preferences were elicited by implementing different methodologies. The main variables considered in this research are the number of recipients and the expectations about their reactions, the possibilities of reciprocity, the certainty of the reward, the size and framing of the reward, and the fear of negative evaluations. The results show a substantial proportion of the participants favouring the “unselfish” incentive schemes. Moreover, the expectations about recipients’ reactions were particularly relevant in determining the effectiveness of programmes that incorporate prosocial motives. Findings also suggest that fulfilling others’ expectations allows people to strengthen their self-concept and maintain a positive self-image.
This research brings a new perspective in the study of adoption and diffusion processes by incorporating insights and methods from behavioural science, and it considers the role of contextual factors in decision-making processes that have been neglected in the literature. These results can also contribute to the understanding of the mechanisms driving prosocial behaviours and inform the design of initiatives that aim to encourage desirable actions.
13

Rmayti, Mohammad. "Misbehaviors detection schemes in mobile ad hoc networks." Thesis, Troyes, 2016. http://www.theses.fr/2016TROY0029/document.

Full text
Abstract:
With the evolution of user requirements, many network technologies have been developed. Among these, we find mobile ad hoc networks (MANETs), which were designed to ensure communication in situations where the deployment of a network infrastructure is expensive or inappropriate. In this type of network, routing is an important function where each mobile entity acts as a router and actively participates in routing services. However, routing protocols are not designed with security in mind and are often very vulnerable to node misbehavior. A malicious node included in a route between communicating nodes may severely disrupt the routing services and block the network traffic. In this thesis, we propose a solution for detecting malicious nodes in MANETs through behavior-based analysis using Bayesian filters and Markov chains. The core idea of our solution is to evaluate the behavior of a node based on its interaction with its neighbors using a completely decentralized scheme. Moreover, a stochastic model is used to predict the nature of a node's behavior and verify its reliability prior to selecting a path. Our solution has been validated through extensive simulations using the NS-2 simulator. The results show that the proposed solution ensures an accurate detection of malicious nodes and improves the quality of routing services in MANETs.
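The behaviour-evaluation step can be sketched with the simplest Bayesian filter, a Beta-Bernoulli update on a neighbour's observed forward/drop counts (the thesis combines such filters with Markov-chain models of behaviour dynamics; the counts and threshold below are purely illustrative):

```python
def update_trust(forwarded, dropped, a=1.0, b=1.0):
    """Beta-Bernoulli belief about a neighbour's forwarding behaviour.
    Starting from a Beta(a, b) prior, each observed forward/drop updates
    the posterior; the posterior mean estimates the forwarding rate."""
    a_post = a + forwarded
    b_post = b + dropped
    return a_post / (a_post + b_post)    # posterior mean

def is_misbehaving(forwarded, dropped, threshold=0.5):
    """Flag a neighbour whose estimated forwarding rate falls below
    the acceptance threshold."""
    return update_trust(forwarded, dropped) < threshold

honest = update_trust(forwarded=48, dropped=2)       # well-behaved node
black_hole = update_trust(forwarded=3, dropped=47)   # traffic-dropping node
```

Each node can run this update locally on overheard traffic, which is what makes the scheme fully decentralized; the prior also gives newly met neighbours the benefit of the doubt until evidence accumulates.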
14

Schneider, Martin [Verfasser], and Rainer [Akademischer Betreuer] Helmig. "Nonlinear finite volume schemes for complex flow processes and challenging grids / Martin Schneider ; Betreuer: Rainer Helmig." Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2019. http://d-nb.info/1188613448/34.

Full text
15

El-Fakharany, Mohamed Mostafa Refaat. "Finite Difference Schemes for Option Pricing under Stochastic Volatility and Lévy Processes: Numerical Analysis and Computing." Doctoral thesis, Universitat Politècnica de València, 2015. http://hdl.handle.net/10251/53917.

Full text
Abstract:
[EN] In the stock markets, the process of estimating a fair price for a stock, option or commodity is considered the cornerstone of the trade. Several attempts have been made to obtain suitable mathematical models that enhance the estimation process for valuing options over short or long periods. The Black-Scholes partial differential equation (PDE) and its analytical solution (1973) are considered a breakthrough in mathematical modeling for the stock markets. Because of the idealized assumptions of Black-Scholes, several alternatives have been developed to adapt the models to real markets. Two strategies have been pursued to capture these behaviors: the first is to add jumps into the asset following Lévy processes, leading to a partial integro-differential equation (PIDE); the second is to allow the volatility to evolve stochastically, leading to a PDE with two spatial variables. In this work, we solve numerically PIDEs for a wide class of Lévy processes using finite difference schemes for European options, and also the associated linear complementarity problem (LCP) for American options. Moreover, models for options under stochastic volatility incorporating jump diffusion are considered. Numerical analysis of the proposed schemes is studied, since it is an efficient and practical way to guarantee the convergence and accuracy of numerical solutions. In fact, without numerical analysis, careless computations may waste good mathematical models. This thesis consists of four chapters; the first chapter is an introduction containing a historical review of stochastic processes, the Black-Scholes equation and preliminaries on numerical analysis. Chapter two is devoted to solving the PIDE for European options under the CGMY process.
The PIDE for this model is solved numerically using two distinct discretization approximations; the first guarantees unconditional consistency, while the second provides unconditional positivity and stability. In the first approximation, the differential part is approximated using an explicit scheme and the integral part using the trapezoidal rule. In the second approximation, the differential part is approximated using a Patankar-type scheme and the integral part using the four-point open-type formula. Chapter three provides a unified treatment of European and American options under a wide class of Lévy processes such as CGMY, Meixner and Generalized Hyperbolic. First, the reaction and convection terms of the differential part of the PIDE are removed using an appropriate mathematical transformation. The differential part for the European case is discretized explicitly, while the integral part is approximated using the Laguerre-Gauss quadrature formula. Numerical properties such as positivity, stability and consistency of this scheme are studied. For the American case, the differential part of the LCP is discretized using a three-time-level approximation with the same integration technique. Next, projected successive over-relaxation (PSOR) and multigrid techniques are implemented to obtain the numerical solution. Several numerical examples are given, including a discussion of the errors and computational cost. Finally, in Chapter four, the PIDE for European options under the Bates model is considered. The Bates model combines the stochastic volatility and jump-diffusion approaches, resulting in a PIDE with a mixed derivative term. Since the presence of cross-derivative terms leads to negative coefficients in the numerical scheme, deteriorating the quality of the numerical solution, the mixed derivative is eliminated using a suitable mathematical transformation. The new PIDE is solved numerically and the numerical analysis is provided. Moreover, the LCP for American options under the Bates model is studied.
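The chapter-two recipe described in the abstract, explicit differencing of the differential part plus a quadrature rule for the integral part, can be sketched in a few lines. This is a toy 1-D illustration with an arbitrary smooth kernel, grid and coefficients, not the CGMY kernel or the thesis's actual scheme:

```python
import math

# Toy 1-D sketch: explicit Euler for the differential part of a PIDE,
# composite trapezoidal rule for the integral part. The kernel, grid and
# coefficients are illustrative placeholders, not the CGMY model.

def trapezoid(values, h):
    """Composite trapezoidal rule on an equally spaced grid."""
    return h * (0.5 * values[0] + sum(values[1:-1]) + 0.5 * values[-1])

def explicit_pide_step(u, dx, dt, sigma2, kernel):
    """One explicit step of u_t = (sigma2/2) u_xx + integral of k(|x-y|) u(y) dy."""
    n = len(u)
    new = u[:]  # boundary values are kept fixed
    for i in range(1, n - 1):
        diffusion = 0.5 * sigma2 * (u[i - 1] - 2.0 * u[i] + u[i + 1]) / dx ** 2
        integrand = [kernel(abs(i - j) * dx) * u[j] for j in range(n)]
        jump = trapezoid(integrand, dx)
        new[i] = u[i] + dt * (diffusion + jump)
    return new

# march a hat-shaped profile one step
dx, dt = 0.1, 0.001
grid = [i * dx for i in range(21)]
u0 = [max(0.0, 1.0 - abs(x - 1.0)) for x in grid]
u1 = explicit_pide_step(u0, dx, dt, sigma2=0.04, kernel=lambda r: math.exp(-5.0 * r))
```

A practical implementation would tie the time step to the stability constraint of the explicit part; the thesis's second scheme avoids that restriction via a Patankar-type discretization.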
El-Fakharany, MMR. (2015). Finite Difference Schemes for Option Pricing under Stochastic Volatility and Lévy Processes: Numerical Analysis and Computing [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/53917
TESIS
APA, Harvard, Vancouver, ISO, and other styles
16

Ozsan, Guney. "Monitoring High Quality Processes: A Study Of Estimation Errors On The Time-between-events Exponentially Weighted Moving Average Schemes." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12610031/index.pdf.

Full text
Abstract:
In some production environments, defect rates are so low that the fraction of nonconforming items reaches the parts-per-million level. In such environments, monitoring the number of conforming items between consecutive nonconforming items, namely the time between events (TBE), is often suggested. However, a common practice in the design of control charts for TBE monitoring is the assumption of known process parameters. Nevertheless, in many applications the true values of the process parameters are not known; their estimates must be determined from a sample obtained from the process at a time when it is expected to operate in a state of statistical control. The additional variability introduced through sampling may significantly affect the performance of a control chart. In this study, the effect of parameter estimation on the performance of time-between-events exponentially weighted moving average (TBE EWMA) schemes is examined. Conditional performance is evaluated to show the effect of estimation, and marginal performance is analyzed in order to make recommendations on sample size requirements. A Markov chain approach is used for evaluating the results.
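The TBE EWMA statistic itself is a short recursion; the sketch below, with an arbitrary smoothing constant, start value and control limit rather than the thesis's calibrated design, shows how a shift to shorter times between events drives the statistic toward a lower limit:

```python
# Sketch of a time-between-events EWMA: smooth successive TBE values with
# Z_t = lam*X_t + (1-lam)*Z_{t-1} and signal when Z drops below a lower
# control limit. The smoothing constant, start value and limit here are
# arbitrary placeholders, not a calibrated ARL-based design.

def tbe_ewma(observations, lam, z0):
    """EWMA path Z_t = lam*X_t + (1-lam)*Z_{t-1}."""
    z, path = z0, []
    for x in observations:
        z = lam * x + (1.0 - lam) * z
        path.append(z)
    return path

def first_signal(path, lcl):
    """Index of the first value at or below the lower limit, else None."""
    for i, z in enumerate(path):
        if z <= lcl:
            return i
    return None

# in-control gaps near 1.0, then a shift to much shorter gaps
data = [1.1, 0.9, 1.0, 1.2, 0.2, 0.1, 0.15, 0.1]
path = tbe_ewma(data, lam=0.2, z0=1.0)
alarm = first_signal(path, lcl=0.6)  # signals at the last observation
```

Parameter estimation enters exactly here: in practice `z0` and the control limit are computed from an in-control sample, and the study above quantifies how that sampling variability degrades chart performance.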
17

Cahyono, M. "Three-dimensional numerical modelling of sediment transport processes in non-stratified estuarine and coastal waters." Thesis, University of Bradford, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.520623.

Full text
Abstract:
Details are given herein of the development, refinement and application of a higher-order accurate 3-D finite difference model for non-cohesive suspended sediment transport processes in non-stratified estuarine and coastal waters. The velocity fields are computed using a 2-D horizontal depth-integrated model, in combination with either an assumed logarithmic velocity profile or a velocity profile obtained from field data. For convenience in handling variable bed topographies and for better vertical resolution, a δ-stretching co-ordinate system has been used. In order to gain insight into the relative merits of various numerical schemes for modelling the convection of high concentration gradients, in terms of both accuracy and efficiency, thirty-six existing finite difference schemes and two splitting techniques have been reviewed and compared by applying them to the following cases: i) 1-D and 2-D pure convection, ii) 1-D and 2-D convection and diffusion, and iii) the 1-D non-linear Burgers' equation. Modifications to some of the considered schemes have been proposed, together with two new higher-order accurate finite difference schemes for modelling the convection of high concentration gradients. The schemes were derived using a piecewise cubic interpolation and a universal limiter (proposed scheme 1) or a modified form of the TVD filter (proposed scheme 2). The schemes have been tested for: i) 1-D and 2-D pure convection, and ii) 2-D convection and diffusion problems. They have produced accurate, oscillation-free and non-clipped solutions, comparable with those of the ULTIMATE fifth- and sixth-order schemes, yet they need only three-cell (proposed scheme 1) or five-cell stencils. Hence, they are very attractive and can easily be implemented to solve convection-dominated problems for complex bathymetries with flooding and drying.
The 3-D sediment transport equation was solved using a splitting technique, with two different techniques being considered. With this technique, the 3-D convective-diffusion equation for suspended sediment fluxes was split into consecutive 1-D convection, diffusion and convective-diffusion equations, which were then solved using the modified and proposed higher-order accurate finite difference schemes mentioned above. The model has been calibrated and verified by applying it to predict the development of suspended sediment concentration profiles under non-equilibrium conditions in three test flumes. The numerical predictions were compared with existing analytical solutions and experimental data; the results were in excellent agreement with the analytical solutions and in reasonable agreement with the experimental data. Finally, the model has also been applied to predict sediment concentration and velocity profiles in the Humber Estuary, UK. Reasonable agreement was obtained between the model predictions and the corresponding field measurements, particularly when considered in the light of usual sediment transport predictions. The model is therefore thought to be a potentially useful tool for hydraulic engineers involved in practical case studies.
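As a generic illustration of the limiter idea behind such oscillation-free convection schemes, here is a minimal 1-D advection step with a minmod-limited slope — a standard TVD construction, not the thesis's piecewise-cubic/universal-limiter or TVD-filter schemes:

```python
# Minimal 1-D TVD advection step with a minmod-limited slope, assuming a
# uniform periodic grid and positive velocity. This is a generic limiter
# illustration, not the thesis's piecewise-cubic/universal-limiter or
# TVD-filter schemes.

def minmod(a, b):
    """Zero at extrema/opposite signs, else the smaller-magnitude slope."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_advection_step(u, c):
    """One step of u_t + a u_x = 0 at Courant number c in (0, 1]."""
    n = len(u)
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    f = [u[i] + 0.5 * (1.0 - c) * s[i] for i in range(n)]  # face i+1/2
    return [u[i] - c * (f[i] - f[i - 1]) for i in range(n)]

# a step profile is advected without over- or undershoot
u0 = [1.0] * 5 + [0.0] * 5
u1 = limited_advection_step(u0, c=0.5)
```

On this discontinuous profile the limiter returns zero slopes, so the update reduces to first-order upwinding — the clipping-free behaviour limiters enforce near sharp gradients; on smooth regions the limited slope restores higher-order accuracy.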
18

Costa, Antonio Fernando Branco. "Gráficos de controle de Shewhart: Duas década de pesquisa /." Guaratinguetá : [s.n.], 2007. http://hdl.handle.net/11449/116107.

Full text
Abstract:
Abstract: In this work, a collection of the author's most important publications is presented, the result of two decades of research in the quality control field. The author studied models that describe the way and the time at which a process changes, and proposed new sampling procedures and new monitoring statistics. More recently, he has studied the performance of control charts when the data are autocorrelated and proposed new statistics for monitoring multivariate processes.
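For reference, the basic Shewhart X-bar chart this research builds on reduces to a center line and k-sigma limits for subgroup means; a minimal sketch with illustrative parameter values:

```python
# Minimal Shewhart X-bar chart: center line and k-sigma limits for
# subgroup means, assuming known process mean and standard deviation.
# Parameter values below are illustrative.

def xbar_limits(mu, sigma, n, k=3.0):
    """(LCL, CL, UCL) for means of subgroups of size n."""
    half = k * sigma / n ** 0.5
    return mu - half, mu, mu + half

def out_of_control(subgroup_means, lcl, ucl):
    """Indices of subgroups whose mean falls outside the limits."""
    return [i for i, m in enumerate(subgroup_means) if m < lcl or m > ucl]

lcl, cl, ucl = xbar_limits(mu=10.0, sigma=2.0, n=4)
means = [10.1, 9.8, 10.4, 13.5, 9.9]
alarms = out_of_control(means, lcl, ucl)  # subgroup 3 exceeds the UCL
```

The variations studied in the thesis (adaptive sampling schemes, new statistics, autocorrelated data) change how `means` is sampled and how the limits are set, not this basic signalling logic.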
19

Sader, Bashar Hafez. "Development and Application of a New Modeling Technique for Production Control Schemes in Manufacturing Systems." Diss., CLICK HERE for online access, 2005. http://contentdm.lib.byu.edu/ETD/image/etd820.pdf.

Full text
20

Schröder, Benjamin. "Theoretical high-resolution spectroscopy for reactive molecules in astrochemistry and combustion processes." Doctoral thesis, Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2019. http://hdl.handle.net/21.11130/00-1735-0000-0005-12DA-1.

Full text
21

Costa, Andrea. "Marine connectivity : exploring the role of currents and turbulent processes in driving it." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0091/document.

Full text
Abstract:
Marine connectivity is the transfer of larvae and/or individuals between distant marine habitats. Thanks to connectivity, distant marine populations can cope with habitat pressure by relying on the transfer from distant populations of the same species. This transfer is made possible by transport due to ocean currents. However, it is still not clear whether the current field totally determines the persistence of marine species or whether local demography plays a role. Crucially, in situ measurements of connectivity are extremely difficult, so our knowledge of connectivity is inferred from numerical dispersal simulations. The aim of this thesis is to clarify whether persistence can be deduced from knowledge of the current field alone, and to investigate the effect of numerical turbulence parameterizations on connectivity estimates. Firstly, I compare graph theory and a metapopulation model to determine whether currents have a predominant role; this identifies which graph-theory measures reliably pick out the reproductive sites important for persistence from knowledge of the currents only. Secondly, I investigate the advantages and shortcomings of different turbulence closure schemes, clarifying which scheme better reproduces turbulence activity in numerical models. Thirdly, I investigate the generating mechanisms of bottom-boundary turbulence, which yields the effective drag coefficient due to flow over rough topography and better estimates of the turbulent fluxes.
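In such graph-based analyses, the dispersal simulation is typically condensed into a connectivity matrix whose entries give the fraction of larvae travelling from one site to another, and graph measures are then read off it. A hypothetical sketch with a toy matrix and two deliberately simple measures (the thesis compares richer graph-theory measures against a full metapopulation model):

```python
# Hypothetical sketch: a larval connectivity matrix C, where C[i][j] is
# the fraction of larvae released at site i that settle at site j, read
# as a weighted directed graph. The two measures below are deliberately
# simple; richer measures (centralities, cycles) follow the same idea.

def out_strength(C, i):
    """Total fraction of site i's larvae that settle anywhere (export)."""
    return sum(C[i])

def self_recruitment(C, i):
    """Fraction of site i's larvae that settle back at site i."""
    return C[i][i]

# toy 3-site matrix
C = [[0.30, 0.10, 0.05],
     [0.00, 0.20, 0.40],
     [0.15, 0.00, 0.10]]

best_exporter = max(range(len(C)), key=lambda i: out_strength(C, i))
```

The question the thesis addresses is precisely whether rankings computed from C alone, as here, agree with the sites a metapopulation model (which adds local demography) identifies as essential for persistence.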
22

Carlan, Eliana. "Sistemas de Organização do Conhecimento: uma reflexão no contexto da Ciência da Informação." Thesis, reponame:Repositório Institucional da UnB, 2010. http://eprints.rclis.org/14519/1/Carlan-Eliana-Dissertacao.pdf.

Full text
Abstract:
This research studies knowledge organization systems (KOS) in relation to the theories used to build thesauri, taxonomies, ontologies and classification systems in the Information Science literature. It combines a literature review with a search of databases in the field to survey the bibliographic production on the theme from 1998 to July 2009. A bibliographic study of knowledge organization and representation is carried out, specifically concerning the development of thesauri, taxonomies, ontologies and classification systems. It identifies a common theoretical basis for building KOS in classification theory, concept theory, the relationships between concepts, and the foundations of Linguistics and Terminology. Extrinsic and intrinsic characteristics were analysed in a representative sample of the bibliographic production on KOS: the extrinsic analysis covers formal aspects, including publication year, authors, title, publication and keywords, while the intrinsic analysis covers content aspects through subject analysis of the documents against the theoretical foundations. The final chapter finds that thesauri and classification systems are the most cited in the KOS literature, providing a theoretical reference for the development of such systems based on international standards and rules. It highlights the importance of consolidating common standards for building different types of KOS in the field of Information Science, and shows the need to bring together multidisciplinary interests linked by the same goals so as to achieve better practices in knowledge organization and representation.
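The term relationships underlying thesauri (broader, narrower and related terms) can be represented with a very small data structure; a didactic sketch only — real KOS encode these relations following standards such as ISO 25964 and SKOS:

```python
# Toy representation of thesaurus relations: broader (BT), narrower (NT)
# and related (RT) terms. Real KOS encode these per ISO 25964 / SKOS;
# the class below is only a didactic sketch.

class Term:
    def __init__(self, label):
        self.label = label
        self.broader = []    # BT links
        self.narrower = []   # NT links (inverse of BT)
        self.related = []    # RT links (symmetric)

def add_broader(narrow, broad):
    """Register a BT relation and maintain its NT inverse."""
    narrow.broader.append(broad)
    broad.narrower.append(narrow)

vehicles = Term("vehicles")
cars = Term("cars")
add_broader(cars, vehicles)  # "cars" BT "vehicles"
```

Keeping the inverse NT link in sync, as `add_broader` does, reflects the reciprocity requirement of the concept-relationship theory the dissertation reviews.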
23

Chen, Yiping. "Numerical modelling of solute transport processes using higher order accurate finite difference schemes : numerical treatment of flooding and drying in tidal flow simulations and higher order accurate finite difference modelling of the advection diffusion equation for solute transport predictions." Thesis, University of Bradford, 1992. http://hdl.handle.net/10454/4344.

Full text
Abstract:
The modelling of the processes of advection and dispersion-diffusion is the most crucial factor in solute transport simulations. It is generally appreciated that the first order upwind difference scheme gives rise to excessive numerical diffusion, whereas the conventional second order central difference scheme exhibits severe oscillations for advection-dominated transport, especially in regions of high solute gradients or discontinuities. Higher order schemes have therefore become increasingly used for improved accuracy and for reducing grid-scale oscillations. Two such schemes are the QUICK (Quadratic Upstream Interpolation for Convective Kinematics) and TOASOD (Third Order Advection Second Order Diffusion) schemes, which are similar in formulation but different in accuracy, the two schemes being second and third order accurate in space respectively for finite difference models. These two schemes can be written in various finite difference forms for transient solute transport models, with the different representations having different numerical properties and computational efficiencies. Although these two schemes are advectively (or convectively) stable, it has been shown that the originally proposed explicit QUICK and TOASOD schemes become numerically unstable for the case of pure advection. The stability constraints have been established for each scheme representation based upon the von Neumann stability analysis. All the derived schemes have been tested for various initial solute distributions and for a number of continuous discharge cases, with both constant and time-varying velocity fields. The 1-D QUICKEST (QUICK with Estimated Streaming Term) scheme is third order accurate both in time and space. It has been shown analytically and numerically that a previously derived quasi-2-D explicit QUICKEST scheme, with a reduced accuracy in time, is unstable for the case of pure advection.
The modified 2-D explicit QUICKEST, ADI-TOASOD and ADI-QUICK schemes have been developed herein and proved to be numerically stable, with the stability region of each derived 2-D scheme having also been established. All these derived 2-D schemes have been tested in a 2-D domain for various initial solute distributions with both uniform and rotational flow fields. They were further tested for a number of 2-D continuous discharge cases, with the corresponding exact solutions having also been derived herein. All the numerical tests in both the 1-D and 2-D cases were compared with the corresponding exact solutions and with results obtained using various other difference schemes, with the higher order schemes generally producing more accurate predictions, except for the characteristic-based schemes, which failed to conserve mass in the 2-D rotational flow tests. The ADI-TOASOD scheme has also been applied to two water quality studies in the U.K., simulating nitrate and faecal coliform distributions respectively, with the results showing a marked improvement in comparison with those obtained by the second order central difference scheme. Details are also given of a refined numerical representation of the flooding and drying of tidal flood plains for hydrodynamic modelling, with the results showing considerable improvements over a number of existing models and good agreement with field data in a natural harbour study.
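For concreteness, the QUICK face interpolation fits a quadratic through two upstream nodes and one downstream node; a sketch of one explicit advection step on a uniform periodic grid (illustrative only — the thesis analyses several finite difference forms of the scheme and their stability constraints):

```python
import math

# QUICK face interpolation for positive velocity: a quadratic through two
# upstream nodes and one downstream node,
#   u_{i+1/2} = (3/8) u_{i+1} + (6/8) u_i - (1/8) u_{i-1}.
# The explicit Euler step below is a toy on a uniform periodic grid.

def quick_face(u_upup, u_up, u_down):
    """Cell-face value from far-upstream, upstream and downstream nodes."""
    return 0.375 * u_down + 0.75 * u_up - 0.125 * u_upup

def quick_advection_step(u, c):
    """One explicit step of u_t + a u_x = 0 at Courant number c > 0."""
    n = len(u)
    f = [quick_face(u[i - 2], u[i - 1], u[i]) for i in range(n)]  # face i-1/2
    return [u[i] - c * (f[(i + 1) % n] - f[i]) for i in range(n)]

# advect a smooth sine profile one step
u0 = [math.sin(2.0 * math.pi * i / 16.0) for i in range(16)]
u1 = quick_advection_step(u0, c=0.2)
```

The flux-difference form keeps the update conservative; the pure-advection instability discussed above concerns the time discretization, which QUICKEST repairs with its estimated streaming term.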
24

Betton, Clélia. "Nouvelle stratégie d'extraction et de purification de l'hydrazine N2H4 de grade spatial via le procédé Raschig : synthèse, modélisations cinétiques, équilibres entre phases et schémas de procédé." Thesis, Lyon 1, 2014. http://www.theses.fr/2014LYO10366/document.

Full text
Abstract:
This work, funded by the CIBLE programme of the Rhône-Alpes Region in partnership with the HERAKLES-SAFRAN group, aims to develop a new process for extracting and purifying hydrazine N2H4 for space applications. This monopropellant must be of very high purity, with a mass composition above 99.5% hydrazine and a carbon content below 30 ppm. The first part of this study identifies the formation and degradation reactions in order to establish a global kinetic model and determine the guiding parameters of the synthesis. Knowledge of the compositions of the reaction liquors as a function of the operating conditions allowed us to position, at the reactor outlet, the overall mixing point in the ternary and quaternary phase diagrams so as to define the optimum extraction conditions. The second part concerns the detailed thermodynamic study of the new extraction route, which does not extract the excess ammonia from the reaction mixture but maintains it in situ in order to extract the hydrazine during liquid-liquid phase separation by a solvent effect. The ultimate aim is to obtain a virtually anhydrous ammonia phase, so as to eliminate the multiple distillations and complex extraction and purification operations encountered in the traditional process. This new strategy relies on the existence of a liquid-state miscibility gap in the H2O-NH3-NaOH ternary system and the H2O-N2H4-NH3-NaOH quaternary system at pressures between 15 and 20 bar. The last part deals with the process engineering aspects. Exploiting the kinetic model and the phase diagrams involved allowed us to determine the optimum conditions for synthesis and isolation, to calculate the composition of the material flows at the outlet of each unit operation, and to compare them with previous industrial processes. The process flow diagrams corresponding to each option were thus established and analysed in terms of cost, safety and the specifications of the useful product obtained.
25

Ana, Firanj. "Modeliranje turbulentnog transporta ugljen-dioksida i azotnih oksida u površinskom sloju atmosfere iznad ruralne oblasti." Phd thesis, Univerzitet u Novom Sadu, Asocijacija centara za interdisciplinarne i multidisciplinarne studije i istraživanja, 2015. http://www.cris.uns.ac.rs/record.jsf?recordId=92472&source=NDLTD&language=en.

Full text
Abstract:
This PhD thesis deals with current and new concepts of modeling the turbulent transport of carbon dioxide and nitrogen oxides in the surface layer of the atmosphere above rural areas. The aim of this research is to improve the modeling of the soil-vegetation-atmosphere interaction based on existing knowledge about the processes describing the interaction and on the results of micrometeorological experiments. Special emphasis is placed on the modeling of turbulent transport of gases above and within the forest canopy. The influence of vertical canopy heterogeneity was introduced in the proposed method for scaling the assimilation of carbon dioxide from the leaf to the canopy level and the dry deposition of nitrogen oxides. The presented concepts are tested within the physical LAPS and chemical MLC-Chem surface schemes. For the purposes of modeling the turbulent transport of carbon dioxide, a module for the parameterization of photosynthesis was developed. A quantitative analysis of the results was made by comparing the observed and simulated values of turbulent fluxes of carbon dioxide and nitrogen oxides in four distinctive forest canopies. Modeling improvement was achieved by coupling the tested surface schemes into the MLC-LAPS scheme. The quality of the MLC-LAPS scheme simulations is verified by comparing the output and observed micrometeorological elements and turbulent fluxes of energy and gases.
APA, Harvard, Vancouver, ISO, and other styles
26

Lair, William. "Modélisation dynamique de systèmes complexes pour le calcul de grandeurs fiabilistes et l’optimisation de la maintenance." Thesis, Pau, 2011. http://www.theses.fr/2011PAUU3013.

Full text
Abstract:
L’objectif de cette thèse est de proposer une méthode permettant d’optimiser la stratégie de maintenance d’un système multi-composants. Cette nouvelle stratégie doit être adaptée aux conditions d’utilisation et aux contraintes budgétaires et sécuritaires. Le vieillissement des composants et la complexité des stratégies de maintenance étudiées nous obligent à avoir recours à de nouveaux modèles probabilistes afin de répondre à la problématique. Nous utilisons un processus stochastique issu de la Fiabilité Dynamique nommé processus markovien déterministe par morceaux (Piecewise Deterministic Markov Process ou PDMP). L’évaluation des quantités d’intérêt (fiabilité, nombre moyen de pannes...) est ici réalisé à l’aide d’un algorithme déterministe de type volumes finis. L’utilisation de ce type d’algorithme, dans ce cadre d’application, présente des difficultés informatiques dues à la place mémoire. Nous proposons plusieurs méthodes pour repousser ces difficultés. L’optimisation d’un plan de maintenance est ensuite effectuée à l’aide d’un algorithme de recuit simulé. Cette méthodologie a été adaptée à deux systèmes ferroviaires utilisés par la SNCF, l’un issu de l’infrastructure, l’autre du matériel roulant
The aim of this work is to propose a methodology to optimize the maintenance of a multi-component system. The new maintenance strategy must be adapted to budget and safety constraints and to operating conditions. The aging of components and the complexity of the maintenance strategies studied require new probabilistic models in order to address the problem. We use a stochastic process from Dynamic Reliability called a Piecewise Deterministic Markov Process (PDMP). The quantities of interest (reliability, mean number of failures, etc.) are evaluated here using a deterministic algorithm based on a finite volume scheme. Using this type of algorithm in this context presents difficulties due to computer memory requirements, and we propose several methods to overcome them. The optimization of a maintenance plan is then performed using a simulated annealing algorithm. This methodology was applied to two rail systems used by the French national railway company (SNCF), one from the infrastructure and one from the rolling stock.
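The abstract names simulated annealing as the optimizer but gives no implementation details; the following is a minimal, self-contained sketch on a toy inspection-interval cost (the cost function, neighbour move and all parameters are illustrative, not the thesis's SNCF model):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=2000, seed=0):
    """Minimise `cost` by simulated annealing (toy sketch)."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy "maintenance" cost: inspection interval x trades off inspection cost
# against expected failure cost (purely illustrative shape, minimum near x ~ 7).
cost = lambda x: 100.0 / x + 2.0 * x
neighbor = lambda x, rng: max(1e-3, x + rng.uniform(-0.5, 0.5))
x_star, f_star = simulated_annealing(cost, neighbor, x0=1.0)
```

In a setting like the thesis's, `cost` would instead be the expected maintenance cost of a candidate plan, itself evaluated by the finite-volume computation on the PDMP.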
APA, Harvard, Vancouver, ISO, and other styles
27

Gok, Ali Can. "Associated Factors Of Psychological Well-being: Early Maladaptive Schemas, Schema Coping Processes, And Parenting Styles." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614645/index.pdf.

Full text
Abstract:
The present study aimed (1) to examine the possible influence of the demographic variables of age, gender, familial monthly income, relationship status, mother's education, and father's education on Parenting Styles, Schema Domains, Schema Coping Styles, and Psychopathology/Life Satisfaction; (2) to examine associated factors of Schema Domains, Schema Coping Styles, and Psychopathology/Life Satisfaction; (3) to examine the mediator role of Schema Domains in the relationship between Parenting Styles and Psychopathology/Life Satisfaction; and (4) to examine the mediator role of Schema Coping Styles in the relationship between Schema Domains and Psychopathology/Life Satisfaction. In order to fulfill these aims, 404 people between the ages of 18 and 42 participated in the study. According to the results, negative parenting practices from both sources (i.e., mother, father) were found to be associated with stronger levels of schema domains. Furthermore, the Impaired Limits/Exaggerated Standards and Impaired Autonomy/Other Directedness schema domains were found to be associated with the Compensation schema coping style, while the Disconnection/Rejection and Impaired Limits/Exaggerated Standards schema domains were found to be related to the Avoidance schema coping style. In addition, the mother's parenting style and the schema domains of Disconnection/Rejection and Impaired Autonomy/Other Directedness were found to be significantly associated with depressive symptomatology. Psychopathological symptoms were found to be associated with both parenting styles, the schema domains of Disconnection/Rejection and Impaired Limits/Exaggerated Standards, and the Avoidance schema coping style. Moreover, both parenting styles and the schema domain of Disconnection/Rejection were negatively, and the Compensation schema coping style positively, associated with satisfaction with life. As for the mediational analyses, schema domains mediated the relationship between parenting styles and psychopathology/life satisfaction; furthermore, schema coping styles mediated the relationship between schema domains and psychopathology/life satisfaction.
APA, Harvard, Vancouver, ISO, and other styles
28

Moreira, Walter. "A construção de informações documentárias: aportes da linguística documentária, da terminologia e das ontologias." Thesis, Universidade de São Paulo, 2010. http://eprints.rclis.org/17437/1/TeseFinalRevisada_05Jul2010.pdf.

Full text
Abstract:
Investigates theoretical and practical interfaces between terminology, philosophical ontology, computational ontology and documentary linguistics, and the subsidies they offer for the construction of documentary information. The specific objectives were: the analysis of the production, development, implementation and use of ontologies based on information science theories; research on the contribution of ontologies to the development of thesauri and vice versa; and a discussion of the philosophical foundation of the application of ontologies, based on the study of ontological categories present in classical philosophy and in contemporary proposals. It argues that understanding ontologies through the communicative theory of terminology contributes to the organization of a less quantitative (syntactic) and more qualitative (semantic) access to information. Notes that, in spite of sharing some common goals, there is little dialogue between information science (and, within it, documentary linguistics) and computer science. It argues that computational and philosophical ontologies are not completely independent phenomena that merely share a name, and notes that the discussion of categories and categorization in computer science does not always have the emphasis it receives in information science in studies on knowledge representation. Deleuze and Guattari's notion of the rhizome was treated as an instigator of reflections on the validity of the hierarchical tree structure and the possibilities of its expansion.
It concludes that the construction of ontologies cannot ignore the terminological and conceptual analysis, as understood by terminology and by the information science knowledge accumulated in the theoretical and methodological basis for the construction of indexing languages; and, on the other hand, that the construction of flexible indexing languages cannot ignore the representational model of ontologies, which is more amenable to formalization and interoperability.
APA, Harvard, Vancouver, ISO, and other styles
29

Gannavarapu, Chandrasekhar. "Economic assessment on the synthesis of optimising control schemes." Phd thesis, Department of Chemical Engineering, 1991. http://hdl.handle.net/2123/5995.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Garge, Swapnil. "Development of an inference based control scheme for reactive extrusion processes." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 236 p, 2007. http://proquest.umi.com/pqdweb?did=1362532031&sid=11&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Motaabbed, Asghar B. 1959. "A knowledge acquisition scheme for fault diagnosis in complex manufacturing processes." Thesis, The University of Arizona, 1992. http://hdl.handle.net/10150/278266.

Full text
Abstract:
This thesis introduces the problem of knowledge acquisition in developing a Trouble Shooting Guide (TSG) for equipment used in integrated circuit manufacturing. The TSG is considered a first step in developing an Expert Diagnostic System (EDS). The research is focused on the acquisition and refinement of actual knowledge from the manufacturing domain, and a Hierarchical Data Collection (HDC) system is introduced to address the knowledge-acquisition bottleneck in developing an EDS. An integrated circuit manufacturing environment is introduced, and issues relating to the collection and assessment of knowledge concerning the performance of the machine park are discussed. Raw data about equipment used in the manufacturing environment are studied and the results are discussed. A systematic classification of symptoms, failures, and repair activities is presented.
APA, Harvard, Vancouver, ISO, and other styles
32

Nirmala, Maria Christine. "A Study Of Organizational Rightsizing : Actors, Processes And Outcome." Thesis, Indian Institute of Science, 2006. http://hdl.handle.net/2005/286.

Full text
Abstract:
The pressure for economic integration has been reinforced by developments in technology, changes in market structures and the emergence of transnational corporations. Rightsizing has emerged as a critical process in this present era of shrinking space, shrinking time and disappearing borders in the context of employee engagement and human capital. It is often adopted by most organizations to help them become more agile and flexible and thereby cater to the competitive demands. The diverse impacts of rightsizing on various actors however question the justice aspect of the entire process. This study addresses rightsizing from the perspective of social justice by taking into consideration the assessments of the processes by the affected actors namely, the implementers who drive the rightsizing processes; the separated who leave the organization as a result of rightsizing; and the stayers who remain in the organization and have observed the process. It also aims at understanding the various rightsizing processes from an empirical perspective and examines the causal relatedness of the rightsizing processes and outcome across some of the Indian organizations and the actors. Review of literature: The gamut of literature in rightsizing has provided a strong foundation for the researcher to gain a critical understanding of the various processes underlying rightsizing. The key challenge in rightsizing concerns the fairness aspect of the entire process considering the fact that in most cases rightsizing results in gains for some people and loss for others. Given that judgments of fairness are highly subjective, the lack of an absolute standard for determining fairness in this situation has been identified as a gap. As many studies highlight the ambivalence in results with regard to the outcome of rightsizing and attribute them to the rightsizing processes, the relationship of the rightsizing processes and the outcome has emerged as an area of interest. 
Though there have been correlation based analysis between various rightsizing variables, causal models that link the rightsizing processes to the outcome have been found missing. The dearth of studies from the Indian set up have also prompted the need to build segregate and aggregate causal models of rightsizing processes and outcome at the organization and actor levels. Aim, objectives and methodology: The aim of this study has been to identify the rightsizing processes that contribute towards positive outcome for both the organization and the individuals concerned from the social justice perspective. The objectives were: 1. To compare and contrast the implementation of rightsizing processes in some of the Indian organizations. 2. To develop a framework for understanding and classifying rightsizing processes in relation to the social justice perspective. 3. To identify the effective rightsizing processes that contribute significantly towards minimizing individual stress and maximizing commitment towards the organization. 4. To outline appropriate guidelines based on the justice perspectives of the actors for better implementation of rightsizing in organizations. The conceptual model links the actors, their assessments of the rightsizing processes and the outcome of the entire process as affecting their individual stress and commitment towards the organization. The just processes of rightsizing have been decided based on the assessment of actors and on the extent of their agreement with one another on implementation of the discrete rightsizing practices. 
Accordingly those practices that all the three groups of actors, namely the implementers, stayers and separated perceive to have been implemented will be classified as the "best practices" or system 4 practices; the practices that have been perceived to have been implemented by the implementers and stayers but not the separated will be classified as the "better practices" or system 3 practices; those practices that the implementers and separated perceive as implemented will be the "ineffective practices" or system 2 practices; and the practices where all the three groups differ with one another with regard to the extent of implementation will be termed the "poor practices" or system 1 practices. The questionnaire was finalized after a preliminary and pilot study. Data was collected from 727 respondents across four organizations, one private manufacturing unit referred to as Org-1, one state public sector unit referred to as Org-2, two central public sector units referred to as Org-3 and Org-4. The total sample consisted of 137 implementers, 320 stayers and 270 separated. Results and discussion: The first part of the analysis focused on validating the rightsizing processes through factor analysis and also testing the reliability using Chronbach alpha. The implementation of the rightsizing processes across the four organizations was compared using Bonferroni post hoc comparisons. Org-1 and Org-4 had implemented most of the rightsizing practices adequately. The perceptions of the employees of Org-2 and Org-3 were found to be significantly inadequate when compared to Org-1 and Org-4 with respect to many of the practices. The second set of analysis compares the assessments of the actors with regard to the implementation of the various rightsizing practices, and classifies them into one of the four systems based on the framework developed. 
The system 4 practices consist of the notification period; the severance package; the amount of money that the organizations wished to save after rightsizing; and avoidance of ineffective cost reduction strategies. The outcome of rightsizing with respect to role clarity and role sufficiency also falls into system 4. The system 3 practices consist of understanding the need for rightsizing; the need for manpower reduction, proactive cost reduction strategies, separation of the sick and criteria for separation of the redundant. System 1 practices comprise internal stakeholders, alternate strategies adopted by the organization before resorting to separation of the employees, preparation and communication, leadership, review and control, and assistance provided to the separated. The outcome with regard to job security and commitment also falls in this category. The final set of analyses aims at identifying those processes that contribute significantly towards the outcome at both the organizational level and from the perceptions of the actors through path analysis. The path analysis was conducted at the segregate and aggregate levels for the organizations and the actors. Initially a full segregate model where all the independent variables are linked to the dependent variables was fit for the 4 organizations and for the 3 categories of actors. Those processes that contributed significantly towards the outcome with respect to the actors and the organizations were structured onto two final aggregate models. The validity of these aggregate models was examined for the organizations and actors respectively. Conclusion: This study provides a deeper understanding of the various processes underlying rightsizing in the three different stages of implementation. These validated measures can be used as a template by the organizations to study and guide further rightsizing initiatives.
Through this research three groups of individuals diversely affected by rightsizing have been brought together under one common framework, which is a methodological innovation. In spite of having different interests, it is possible to obtain a consensus in their assessments of some of the rightsizing practices. This is an important conclusion that can be drawn in support of the social justice perspective with regard to rightsizing. The relationship between the rightsizing processes as affecting the outcome of stress and commitment can also be understood from a causal perspective across organizations and actors through segregate and aggregate models. The best practices with knowledge capital and social capital can also be included in understanding the perspectives of the actors and classification of rightsizing best practices in future work.
APA, Harvard, Vancouver, ISO, and other styles
33

Vasconcelos, Maria Jose. "Modeling spatial dynamic ecological processes with DEVS-Scheme and geographic information systems." Diss., The University of Arizona, 1993. http://hdl.handle.net/10150/186257.

Full text
Abstract:
The objective of this work is to introduce and illustrate the potential of discrete event, hierarchical modular models for simulating spatial dynamic ecological processes in geographic information systems (GIS). The knowledge based discrete-event simulation environment (DEVS-Scheme) associates stand-alone discrete event models with spatial locations represented in a GIS data base, and couples those models in a coherent manner. The dynamic models can then process spatially distributed information available in a GIS data base, and update it through time. The models also can receive external updated information at any moment, due to the continuous time nature of discrete event specifications. The proposed approach facilitates the representation of reality at several levels of resolution, with model components organized in a hierarchical structure and information flow implemented in the form of message passing. These capabilities are illustrated with two applications. The first is a multi-scale spatial succession model of a wet sclerophyllous forest subject to recurrent fires, and the second is a fire growth model.
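The event-driven modeling style described here can be sketched in miniature: a toy fire-growth model on a grid, where each ignited cell schedules ignition events for its neighbours. This is illustrative only; a real DEVS-Scheme model is built from hierarchical coupled components exchanging messages, with state drawn from and written back to GIS layers.

```python
import heapq

def simulate_fire_spread(width, height, ignition, spread_delay=1.0):
    """Toy discrete-event sketch of fire growth on a grid: each cell, once
    ignited, schedules ignition events for its 4 neighbours after a fixed
    delay.  Returns a dict mapping cell -> ignition time."""
    ignited = {}                      # cell -> ignition time
    events = [(0.0, ignition)]        # priority queue ordered by event time
    while events:
        t, cell = heapq.heappop(events)
        if cell in ignited:
            continue                  # cell already burnt; stale event
        ignited[cell] = t
        x, y = cell
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in ignited:
                heapq.heappush(events, (t + spread_delay, (nx, ny)))
    return ignited

times = simulate_fire_spread(5, 5, (2, 2))
```

With a uniform delay the ignition time of each cell equals its Manhattan distance from the ignition point; spatially varying delays (fuel, slope, wind read from a GIS layer) would make the event queue do real work.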
APA, Harvard, Vancouver, ISO, and other styles
34

Sun-Ongerth, Yuelu. "Exploring Novice Teachers' Cognitive Processes Using Digital Video Technology: A Qualitative Case Study." Digital Archive @ GSU, 2012. http://digitalarchive.gsu.edu/msit_diss/108.

Full text
Abstract:
This dissertation describes a qualitative case study that investigated novice teachers’ video-aided reflection on their own teaching. To date, most studies that have investigated novice teachers’ video-aided reflective practice have focused on examining novice teachers’ levels of reflective writing rather than the cognitive processes involved during their reflection. Few studies have probed how novice teachers schematize and theorize their newly acquired and/or existing knowledge during video-aided reflection. The purpose of this study was to explore novice teachers’ cognitive processes, particularly video-aided schematization and theorization (VAST), which is a set of cognitive processes that help novice teachers construct, restructure and reconstruct their professional knowledge and pedagogical thinking while reflecting on videos of their own teaching. The researcher measured novice teachers’ VAST by examining their schema construction and automation in terms of schema accretion, schema tuning and schema restructuring. The study attempted to answer the following questions: a) What is the focus of novice teachers’ video-aided reflection? and b) How do novice teachers connect the focus of their reflections to their prior knowledge and future actions? The findings indicate that video-aided reflection could help novice teachers (1) notice what needed to be improved in their teaching practice, (2) realize how various elements in teaching were interrelated, and (3) construct, restructure, or reconstruct their professional knowledge – in other words, develop their schemata about teaching and learning through VAST. With more developed and mature schemata, novice teachers can better understand the various elements involved in teaching and learning, and handle the situations they encounter in their teaching. This may be because people’s schemata can provide the link between concepts and patterns of what they do (Rumelhart, 1980).
This research has provided a new way to look at novice teachers’ video-aided reflection: how the cognitive processes they experience during their reflection can help them develop the knowledge about teaching and learning, and how their cognitive development can help them grow toward becoming teaching experts. The research findings add to the knowledge base about the use of video technology in teachers’ self-reflection and professional development in teacher education.
APA, Harvard, Vancouver, ISO, and other styles
35

Rohani, Jafri Mohd. "The development and analysis of quality control adjustment schemes for process regulation." Ohio : Ohio University, 1995. http://www.ohiolink.edu/etd/view.cgi?ohiou1179951651.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Menozzi, Stephane. "Discrétisations associées à un processus dans un domaine et schémas numériques probabilistes pour les EDP paraboliques quasilinéaires." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2004. http://tel.archives-ouvertes.fr/tel-00008769.

Full text
Abstract:
The work carried out in my thesis concerns the discretization of processes in a domain and probabilistic numerical methods for quasilinear parabolic PDEs. On the first topic, we first proved two-sided bounds for the weak error associated with a killed hypoelliptic diffusion process approximated by its discrete-time killed Euler scheme (Chapter 1). Then, in the non-Markovian setting of Itô processes, we obtained a bound on the weak error associated with the discretization of the exit time, using original martingale techniques (Chapter 2). Finally, in the particular case of Brownian motion in an orthant, we obtained an error expansion and a convergence-acceleration method based on a suitable correction of the domain (Chapter 3). On the second topic, we proposed a probabilistic algorithm, simple to implement, for approximating the solution of quasilinear parabolic PDEs, and we established its convergence rate. The method consists in discretizing the forward-backward stochastic differential equation (FBSDE) that gives a probabilistic representation of the PDE (Chapter 4).
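The killed Euler scheme studied in the first part can be illustrated on the simplest case, Brownian motion killed at the exit of an interval. The Monte-Carlo sketch below (all parameters illustrative) exhibits the upward bias of discrete-time killing that the weak-error analysis quantifies: checking the barrier only at grid times misses excursions between steps.

```python
import math
import random

def survival_prob_euler(x0=0.0, a=-1.0, b=1.0, T=1.0, n_steps=100,
                        n_paths=20000, seed=1):
    """Monte-Carlo estimate of P(X stays in (a,b) up to T) for X a Brownian
    motion, using a discretely killed Euler scheme.  The estimate is biased
    upward, since the path may leave (a,b) between two grid times without
    being detected."""
    rng = random.Random(seed)
    dt = T / n_steps
    sdt = math.sqrt(dt)
    alive = 0
    for _ in range(n_paths):
        x = x0
        killed = False
        for _ in range(n_steps):
            x += sdt * rng.gauss(0.0, 1.0)   # Euler step for dX = dW
            if not (a < x < b):              # kill at first grid-time exit
                killed = True
                break
        if not killed:
            alive += 1
    return alive / n_paths

# Overestimates the exact survival probability (about 0.371 for these
# parameters) because of the discrete monitoring of the barrier.
p_est = survival_prob_euler()
```

Shrinking the domain by a boundary correction of order sqrt(dt), in the spirit of the domain correction of Chapter 3, reduces this bias.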
APA, Harvard, Vancouver, ISO, and other styles
37

Yildiz, Erenus [Verfasser]. "An Intelligent Visual Analysis Scheme for Automatic Disassembly Processes in the Recycling Industry / Erenus Yildiz." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2021. http://d-nb.info/1233008978/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Quinton, Jean-Charles Buisson Jean-Christophe. "Coordination implicite d'interactions sensorimotrices comme fondement de la cognition." Toulouse : INP Toulouse, 2009. http://ethesis.inp-toulouse.fr/archive/00000697.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Albosaily, Sahar. "Stratégies optimales d'investissement et de consommation pour des marchés financiers de type"spread"." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMR099/document.

Full text
Abstract:
Dans cette thèse, on étudie le problème de la consommation et de l’investissement pour le marché financier de "spread" (différence entre deux actifs) défini par le processus Ornstein-Uhlenbeck (OU). Ce manuscrit se compose de huit chapitres. Le chapitre 1 présente une revue générale de la littérature et un bref résumé des principaux résultats obtenus dans ce travail, où différentes fonctions d’utilité sont considérées. Dans le chapitre 2, on étudie la stratégie optimale de consommation/investissement pour les fonctions puissances d’utilité pour un intervalle de temps réduit à 0 < t < T < T0. Dans ce chapitre, nous étudions l’équation de Hamilton–Jacobi–Bellman (HJB) par la méthode de Feynman–Kac (FK). L’approximation numérique de la solution de l’équation de HJB est étudiée et le taux de convergence est établi. Il s’avère que dans ce cas, le taux de convergence du schéma numérique est super-géométrique, c’est-à-dire plus rapide que tout taux géométrique. Les principaux théorèmes sont énoncés et des preuves de l’existence et de l’unicité de la solution sont données. Un théorème de vérification spécial pour ce cas des fonctions puissances est montré. Le chapitre 3 étend l’approche du chapitre précédent à la stratégie de consommation/investissement optimale pour tout intervalle de temps pour les fonctions puissances d’utilité, où l’exposant γ doit être inférieur à 1/4. Dans le chapitre 4, on résout le problème optimal de consommation/investissement pour les fonctions logarithmiques d’utilité dans le cadre du processus OU multidimensionnel en se basant sur la méthode de programmation dynamique stochastique. En outre, on montre un théorème de vérification spécial pour ce cas. Le théorème d’existence et d’unicité pour la solution classique de l’équation de HJB sous forme explicite est également démontré. En conséquence, les stratégies financières optimales sont construites.
Quelques exemples sont donnés pour les cas scalaires et pour les cas multivariés à volatilité diagonale. Le modèle de volatilité stochastique est considéré dans le chapitre 5 comme une extension du chapitre précédent aux fonctions logarithmiques d’utilité. Le chapitre 6 propose des résultats et des théorèmes auxiliaires nécessaires au travail. Le chapitre 7 fournit des simulations numériques pour les fonctions puissances et logarithmiques d’utilité. La valeur du point fixe h de l’application de FK pour les fonctions puissances d’utilité est présentée. Nous comparons les stratégies optimales pour différents paramètres à travers des simulations numériques. La valeur du portefeuille pour les fonctions logarithmiques d’utilité est également obtenue. Enfin, nous concluons nos travaux et présentons nos perspectives dans le chapitre 8.
This thesis studies the consumption/investment problem for the spread financial market defined by the Ornstein–Uhlenbeck (OU) process. Recently, the OU process has been used as a financial model to reflect underlying asset prices. The thesis consists of 8 chapters. Chapter 1 presents a general literature review and a short overview of the main results obtained in this work, where different utility functions are considered. The optimal consumption/investment strategies are studied in Chapter 2 for power utility functions on a small time interval, 0 < t < T < T0. The main theorems are stated and the existence and uniqueness of the solution is proven. A numerical approximation of the solution of the HJB equation is studied and the convergence rate is established. In this case, the convergence rate of the numerical scheme is super-geometric, i.e., more rapid than any geometric one. A special verification theorem for this case is shown. In this chapter, we study the Hamilton–Jacobi–Bellman (HJB) equation through the Feynman–Kac (FK) method, and the existence and uniqueness theorem for the classical solution of the HJB equation is shown. Chapter 3 extends the approach of the previous chapter to the optimal consumption/investment strategies for power utility functions on any time interval, where the power utility coefficient γ must be less than 1/4. Chapter 4 addresses the optimal consumption/investment problem for logarithmic utility functions for a multivariate OU process, on the basis of the stochastic dynamic programming method. A special verification theorem for this case is also shown, and the existence and uniqueness theorem for the classical solution of the HJB equation is demonstrated in explicit form. As a consequence, the optimal financial strategies are constructed. Some examples are given for a scalar case and for a multivariate case with diagonal volatility.
Stochastic volatility markets are considered in Chapter 5 as an extension of the previous chapter's optimization problem for logarithmic utility functions. Chapter 6 proposes some auxiliary results and theorems that are necessary for the work. Numerical simulations are provided in Chapter 7 for power and logarithmic utility functions. The fixed-point value h for the power utility is presented, and the constructed strategies are studied by numerical simulations for different parameters. The value function for the logarithmic utilities is shown as well. Finally, Chapter 8 concludes the work and presents perspectives.
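As a small illustration of the underlying model (not of the thesis's control problem itself), an OU spread path can be simulated exactly on a time grid using its Gaussian transition law; the parameter values below are illustrative:

```python
import math
import random

def simulate_ou(x0=0.0, kappa=2.0, theta=0.0, sigma=0.5, T=5.0,
                n_steps=500, seed=2):
    """Exact-in-law simulation of the Ornstein-Uhlenbeck spread
    dX = kappa*(theta - X) dt + sigma dW on a uniform grid, using the fact
    that X_{t+dt} | X_t is Gaussian with explicitly known mean and variance."""
    rng = random.Random(seed)
    dt = T / n_steps
    e = math.exp(-kappa * dt)
    # Conditional std of X_{t+dt} given X_t.
    std = sigma * math.sqrt((1.0 - e * e) / (2.0 * kappa))
    path = [x0]
    for _ in range(n_steps):
        path.append(theta + (path[-1] - theta) * e + std * rng.gauss(0.0, 1.0))
    return path

path = simulate_ou()
```

Because the transition law is exact, there is no time-discretization bias here, unlike a plain Euler scheme; the mean reversion of such paths is what makes the spread market attractive for the consumption/investment strategies the thesis constructs.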
APA, Harvard, Vancouver, ISO, and other styles
40

STANBURY, PAMELA COOK. "PROCESSES OF VILLAGE COMMUNITY FORMATION IN AN AGRICULTURAL SETTLEMENT SCHEME: THE INDIRA GANDHI NAHAR PROJECT, INDIA." Diss., The University of Arizona, 1987. http://hdl.handle.net/10150/184165.

Full text
Abstract:
Anthropological research conducted in the Indira Gandhi Nahar Project area of the western Indian state of Rajasthan during 1984-1985 assessed the impact of agricultural land settlement planning on village community formation. The large-scale project, begun in 1957, has brought irrigation water to the extremely arid Thar desert and has dramatically altered the social and physical landscape. Significant efforts have been made by the Government of Rajasthan to select settlers from the poor and landless population, as part of a social welfare policy, allocate agricultural land to them and create new settler communities. A single village, one of the earliest established by the project, was selected for the study of community formation. Historical and contemporary data were collected on five themes: (1) the settler household, (2) kinship, (3) patronage, (4) institution building, and (5) socioeconomic stratification. For each theme area, a series of questions was asked regarding the impact of settlement planning. Although settlement planning has been a major influence on the study village, research revealed that settlers arrived under highly diverse circumstances and played diverse roles in the process of community growth. Research also revealed that the village community has maintained some traditional features of Indian social organization in the face of the great upheaval associated with settlement. Both the indigenous families and some of the earliest unplanned settlers have developed large local kinship networks, assumed positions of wealth in a hierarchical caste system, and have been involved in building political institutions based on a stratified system. They have also been responsible for attracting later settlers, including both landless agriculturalists and, to a limited extent, service workers.
The settlers selected according to settlement policies have not developed extensive kin networks and have been less active in institution building and developing patronage relationships.
APA, Harvard, Vancouver, ISO, and other styles
41

Tan, Xiaolu. "Stochastic control methods for optimal transportation and probabilistic numerical schemes for PDEs." Palaiseau, Ecole polytechnique, 2011. https://theses.hal.science/docs/00/66/10/86/PDF/These_TanXiaolu.pdf.

Full text
Abstract:
Cette thèse porte sur les méthodes numériques pour les équations aux dérivées partielles (EDP) non-linéaires dégénérées, ainsi que pour des problèmes de contrôle d'EDP non-linéaires résultant d'un nouveau problème de transport optimal. Toutes ces questions sont motivées par des applications en mathématiques financières. La thèse est divisée en quatre parties. Dans une première partie, nous nous intéressons à la condition nécessaire et suffisante de la monotonie du thêta-schéma de différences finies pour l'équation de diffusion en dimension un. Nous donnons la formule explicite dans le cas de l'équation de la chaleur, qui est plus faible que la condition classique de Courant-Friedrichs-Lewy (CFL). Dans une seconde partie, nous considérons une EDP parabolique non-linéaire dégénérée et proposons un schéma de type ''splitting'' pour la résoudre. Ce schéma réunit un schéma probabiliste et un schéma semi-lagrangien. Au final, il peut être considéré comme un schéma Monte-Carlo. Nous donnons un résultat de convergence et également un taux de convergence du schéma. Dans une troisième partie, nous étudions un problème de transport optimal, où la masse est transportée par un processus d'état type ''drift-diffusion'' contrôlé. Le coût associé est dépendant des trajectoires du processus d'état, de son drift et de son coefficient de diffusion. Le problème de transport consiste à minimiser le coût parmi toutes les dynamiques vérifiant les contraintes initiales et terminales sur les distributions marginales. Nous prouvons une formule de dualité pour ce problème de transport, étendant ainsi la dualité de Kantorovich à notre contexte. La formulation duale maximise une fonction valeur sur l'espace des fonctions continues bornées, et la fonction valeur correspondant à chaque fonction continue bornée est la solution d'un problème de contrôle stochastique optimal.
Dans le cas markovien, nous prouvons un principe de programmation dynamique pour ces problèmes de contrôle optimal, proposons un algorithme de gradient projeté pour la résolution numérique du problème dual, et en démontrons la convergence. Enfin dans une quatrième partie, nous continuons à développer l'approche duale pour le problème de transport optimal avec une application à la recherche de bornes de prix sans arbitrage des options sur variance étant donnés les prix des options européennes. Après une première approximation analytique, nous proposons un algorithme de gradient projeté pour approcher la borne et la stratégie statique correspondante en options vanilles.
This thesis deals with numerical methods for fully nonlinear degenerate parabolic partial differential equations (PDEs), and for a controlled nonlinear PDE problem which results from a mass transportation problem. The manuscript is divided into four parts. In the first part of the thesis, we are interested in the necessary and sufficient condition for the monotonicity of the finite difference theta-scheme for one-dimensional diffusion equations. An explicit formula is given in the case of the heat equation; it is weaker than the classical Courant-Friedrichs-Lewy (CFL) condition. In the second part, we consider a fully nonlinear degenerate parabolic PDE and propose a splitting scheme for its numerical resolution. The splitting scheme combines a probabilistic scheme with a semi-Lagrangian scheme and, in total, can be viewed as a Monte Carlo scheme for PDEs. We provide a convergence result as well as a rate of convergence. In the third part of the thesis, we study an optimal mass transportation problem. The mass is transported by controlled drift-diffusion dynamics, and the associated cost depends on the trajectories, the drift and the diffusion coefficient of the dynamics. We prove a strong duality result for the transportation problem, thus extending the Kantorovich duality to our context. The dual formulation maximizes a value function on the space of all bounded continuous functions, and the value function corresponding to each bounded continuous function is the solution to a stochastic control problem. In the Markovian case, we prove the dynamic programming principle for the optimal control problems, propose a gradient-projection algorithm for the numerical resolution of the dual problem, and provide a convergence result.
Finally, in the fourth part, we continue to develop the dual approach to the mass transportation problem, with applications to the computation of model-independent no-arbitrage price bounds for variance options in a vanilla-liquid market. After a first analytic approximation, we propose a gradient-projection algorithm to approximate the bound as well as the corresponding static strategy in vanilla options.
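The monotonicity question in the first part can be probed numerically. The sketch below is only an illustration, not the thesis's actual characterization: it forms the one-step update matrix of the theta-scheme for the 1D heat equation (with mu = dt/dx^2 and a Dirichlet discrete Laplacian) and tests whether all of its entries are nonnegative; the grid size n and the (theta, mu) pairs tried are arbitrary choices.

```python
import numpy as np

def update_matrix(theta, mu, n=12):
    """One-step theta-scheme update for u_t = u_xx on n interior points.

    Solves (I - theta*mu*L) u^{k+1} = (I + (1-theta)*mu*L) u^k, where L is
    the standard three-point discrete Laplacian with Dirichlet boundaries
    and mu = dt/dx^2.
    """
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    return np.linalg.solve(np.eye(n) - theta * mu * L,
                           np.eye(n) + (1.0 - theta) * mu * L)

def is_monotone(theta, mu, tol=1e-12):
    # The scheme is monotone iff the update matrix has no negative entries.
    return bool((update_matrix(theta, mu) >= -tol).all())

# Fully implicit: monotone for any mu; fully explicit: needs (1-theta)*mu <= 1/2.
print(is_monotone(1.0, 5.0), is_monotone(0.0, 0.4), is_monotone(0.0, 1.0))
```

For the fully explicit scheme the classical bound (1-theta)*mu <= 1/2 is exactly what this check recovers; the thesis's contribution concerns the sharp condition for intermediate theta, which this sketch does not derive.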
APA, Harvard, Vancouver, ISO, and other styles
42

Kamarudin, Faizal. "The development of an effective and efficient dispute resolution processes for strata scheme disputes in peninsular Malaysia." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/69839/1/Faizal_Kamarudin_Thesis.pdf.

Full text
Abstract:
Dispute resolution in strata schemes in Peninsular Malaysia should focus on more than just "settlement." The quality of the outcome, its sustainability and its relevance in supporting the basic principles of a good neighbourhood and self-governance in a strata scheme are also fundamental. Based on the comprehensive law movement, this thesis develops a theoretical framework for strata scheme disputes within the parameters of therapeutic jurisprudence, preventive law, alternative dispute resolution (ADR) and problem-solving courts. The therapeutic orientation of this model offers approaches that promote positive communication between disputing parties, preserve neighbour relations and optimise people's psychological and emotional well-being.
APA, Harvard, Vancouver, ISO, and other styles
43

Myslicki, Stefan Leopold 1953. "A VARIABLE SAMPLING FREQUENCY CUMULATIVE SUM CONTROL CHART SCHEME." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276503.

Full text
Abstract:
This study uses Monte Carlo simulation to examine the performance of a variable sampling frequency cumulative sum control chart scheme for controlling the mean of a normal process, and compares it with that of a standard fixed-interval sampling cumulative sum control chart scheme. The results indicate that the variable sampling frequency scheme is superior to the standard scheme in detecting small to moderate shifts in the process mean.
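The idea can be sketched in a few lines: a one-sided CUSUM that samples again after a shorter interval whenever its statistic crosses a warning line. Everything below is a hypothetical illustration, not the thesis's chart design; the reference value k, decision limit h, warning line and the two sampling intervals are made-up parameters.

```python
import random

def time_to_signal(shift, k=0.5, h=4.0, long_d=1.0, short_d=1.0,
                   warn=1.0, seed=0):
    """Clock time until a one-sided CUSUM signals an upward mean shift.

    Observations are N(shift, 1). The next sample is taken after short_d
    when the CUSUM statistic exceeds the warning line, else after long_d;
    long_d == short_d recovers the fixed-interval chart.
    """
    rng = random.Random(seed)
    s, t = 0.0, 0.0
    while True:
        t += short_d if s > warn else long_d
        s = max(0.0, s + rng.gauss(shift, 1.0) - k)  # one-sided CUSUM recursion
        if s > h:
            return t

def avg_time(long_d, short_d, shift=1.0, reps=200):
    # Monte Carlo average time to signal under a moderate shift
    return sum(time_to_signal(shift, long_d=long_d, short_d=short_d, seed=i)
               for i in range(reps)) / reps

fixed = avg_time(1.0, 1.0)        # fixed-interval CUSUM
variable = avg_time(1.5, 0.25)    # variable sampling frequency CUSUM
print(fixed, variable)
```

Because the same seeds drive both schemes, each run consumes the same observations under either chart; only the elapsed time differs, which isolates the effect of when samples are taken.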
APA, Harvard, Vancouver, ISO, and other styles
44

Kim, Tae Hee. "The Korean emissions trading scheme : focusing on accounting issues." Thesis, University of Exeter, 2015. http://hdl.handle.net/10871/21690.

Full text
Abstract:
The purpose of this study is to examine the accounting standard-setting process in relation to emissions rights and related liabilities in the Korean context in order to provide a better understanding of accounting issues under an emissions trading scheme (ETS). Using an interpretive inductive approach, this study comprises semi-structured, face-to-face interviews and analysis of relevant documents. Interviews were carried out with a wide range of key players, including accounting standard setters (Korean Accounting Standards Board, International Accounting Standards Board, and Autorité des Normes Comptables), accounting experts, industry and government. This study identifies how problematic accounting issues on emissions rights and related liabilities have been addressed by accounting standard setters. The key accounting issues under ETS are linked mainly with free allowances. It is found that accounting standard setters attempt to establish the most appropriate accounting standard under the given circumstances reflecting a variety of considerations, and that the most common elements affecting the development of accounting standards for ETS are the legal and economic context, the existing accounting framework, and preceding models and practices. Nevertheless, these factors affect the development of accounting standards for ETS in different ways. Accordingly, the primary accounting issues on which each standard setter concentrates vary depending on different circumstances and considerations. This study investigates the accounting standard-setting process for emissions rights by Korean accounting standard setters, from the agenda-setting stage to the final publication of the standard. The findings reinforce the importance of political factors in the standard-setting process, including stakeholders’ participation in the process, prominent stakeholders, and the motivation, methods and timing of lobbying activities. 
In particular, the findings have important implications for the effectiveness of lobbying. Overall, the findings confirm that accounting standards are likely to be the political outcome of interactions between the accounting standard setter and stakeholders. The findings highlight desirable factors for accounting models of emissions rights. The desirability or appropriateness of a standard is judged by the extent to which stakeholders in institutional environments consider the promulgation to be legitimate or authoritative. Therefore, accounting standard setters must make greater efforts to encourage stakeholders to participate in the standard-setting process in order to ensure institutional legitimacy. The originality of this study lies in its empirical research on accounting issues for ETS from a practical point of view. In particular, in its timely and detailed investigation of Korean accounting standard setters, this study provides a broader understanding of the accounting standard-setting process in the Korean context. The study also advances legitimacy theory by offering a framework particularly applicable to the accounting standard-setting process, one that also incorporates stakeholder theory research. The study finds support for the framework and further contributes to the related literature by reviewing legitimacy conflicts. From an accounting policy point of view, the findings have implications for both national and international standard setters and provide guidance on how to achieve high-quality accounting standards with a high degree of compliance.
APA, Harvard, Vancouver, ISO, and other styles
45

Zietze, Stefan. "Evaluation, validation and application of an analytical scheme for N-glycosylation analysis used for mammalian cell production processes." [S.l.] : [s.n.], 2006. http://www.diss.fu-berlin.de/2006/375/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Kouakou, Kouadio Simplice. "Echantillonnage aléatoire et estimation spectrale de processus et de champs stationnaires." Thesis, Rennes 2, 2012. http://www.theses.fr/2012REN20019.

Full text
Abstract:
Dans ce travail nous nous intéressons à l'estimation de la densité spectrale par la méthode du noyau pour des processus à temps continu et des champs aléatoires observés selon des schémas d'échantillonnage (ou plans d'expériences) discrets aléatoires. Deux types d'échantillonnage aléatoire sont ici considérés : schémas aléatoires dilatés, et schémas aléatoires poissonniens. Aucune condition de gaussianité n'est imposée aux processus et champs étudiés, les hypothèses concerneront leurs cumulants. En premier nous examinons un échantillonnage aléatoire dilaté utilisé par Hall et Patil (1994) et plus récemment par Matsuda et Yajima (2009) pour l'estimation de la densité spectrale d'un champ gaussien. Nous établissons la convergence en moyenne quadratique dans un cadre plus large, ainsi que la vitesse de convergence de l'estimateur. Ensuite nous appliquons l'échantillonnage aléatoire poissonnien dans deux situations différentes : estimation spectrale d'un processus soumis à un changement de temps aléatoire (variation d'horloge ou gigue), et estimation spectrale d'un champ aléatoire sur R2. Le problème de l'estimation de la densité spectrale d'un processus soumis à un changement de temps est résolu par projection sur la base des vecteurs propres d'opérateurs intégraux définis à partir de la fonction caractéristique de l'accroissement du changement de temps aléatoire. Nous établissons la convergence en moyenne quadratique et la normalité asymptotique de deux estimateurs construits l'un à partir d'une observation continue, et l'autre à partir d'un échantillonnage poissonnien du processus résultant du changement de temps. La dernière partie de ce travail est consacrée au cas d'un champ aléatoire sur R2 observé selon un schéma basé sur deux processus de Poisson indépendants, un pour chaque axe de R2. Les résultats de convergence sont illustrés par des simulations.
In this work, we deal with kernel estimation of the spectral density of a continuous-time process or random field observed along random discrete sampling schemes. Two kinds of sampling scheme are considered here: random dilated sampling schemes and Poissonian sampling schemes. No Gaussianity condition is imposed on the process or the random field; the hypotheses bear on their cumulants. First, we consider a dilated sampling scheme introduced by Hall and Patil (1994) and used more recently by Matsuda and Yajima (2009) for the estimation of the spectral density of a Gaussian random field. We establish quadratic mean convergence in our more general context, as well as the rate of convergence of the estimator. Next we apply the Poissonian sampling scheme to two different frameworks: the spectral estimation of a process disturbed by a random clock change (or time jitter), and the spectral estimation of a random field on R2. The problem of estimating the spectral density of a process disturbed by a clock change is solved by projection on the basis of eigenvectors of kernel integral operators defined from the characteristic function of the increment of the random clock change. We establish the convergence and the asymptotic normality of two estimators, one constructed from a continuous-time observation and the other from a Poissonian sampling of the clock-changed process. The last part of this work is devoted to random fields on R2 observed along a sampling scheme based on two independent Poisson processes (one for each axis of R2). The convergence results are illustrated by simulations.
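As a toy illustration of the Poissonian-sampling setting (not the estimators analyzed in the thesis), one can sample an Ornstein-Uhlenbeck process at Poisson arrival times and form a kernel-tapered, pair-based spectral estimate; the rate beta, horizon T, taper bandwidth and the OU parameters below are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Poisson sampling times with rate beta on [0, T]
T, beta = 400.0, 2.0
n = rng.poisson(beta * T)
times = np.sort(rng.uniform(0.0, T, n))

# Exact simulation of a stationary OU process dX = -a X dt + sigma dW at those times
a, sigma = 1.0, 1.0
x = np.empty(n)
x[0] = rng.normal(0.0, sigma / np.sqrt(2.0 * a))
for i in range(1, n):
    r = np.exp(-a * (times[i] - times[i - 1]))
    x[i] = r * x[i - 1] + rng.normal(0.0, np.sqrt(sigma**2 / (2 * a) * (1 - r**2)))

def f_hat(lam, bw=5.0):
    """Bartlett-tapered sum over distinct pairs of Poisson-sampled observations."""
    u = times[:, None] - times[None, :]
    w = np.clip(1.0 - np.abs(u) / bw, 0.0, None)   # Bartlett taper on the lags
    np.fill_diagonal(w, 0.0)                       # drop the diagonal (j == k) terms
    return (w * np.outer(x, x) * np.cos(lam * u)).sum() / (2 * np.pi * beta**2 * T)

# The OU spectral density sigma^2 / (2*pi*(a^2 + lam^2)) decays in |lam|,
# and the estimate should reflect that.
print(f_hat(0.0), f_hat(10.0))
```

The normalization divides by beta^2 T because the expected number of distinct sample pairs at lag u per unit time is beta^2; dropping the diagonal terms removes the pure-variance contribution that would otherwise bias the estimate.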
APA, Harvard, Vancouver, ISO, and other styles
47

Martinho, Cláudia Sofia de Sousa. "Representações de acontecimentos: Os efeitos da experiência e da estrutura de acontecimentos na linguagem das crianças." Master's thesis, Instituto Superior de Psicologia Aplicada, 2001. http://hdl.handle.net/10400.12/654.

Full text
Abstract:
Master's dissertation in Educational Psychology
This study sought to understand how children in the 1st and 2nd years of schooling represent events, and whether those representations act as cognitive organizers that facilitate the solving of linguistic problems. Each child was asked to recount two events and to complete grammar exercises about each of them. The proposed events were "School day" (more familiar) and "Trip to the supermarket" (less familiar). The accounts from the two school years were then compared as to the degree of complexity and organization of the scripts produced for the two events, and as to the relative ease of solving the grammar exercises corresponding to the two events. According to authors such as Nelson, older children produce more complex and organized scripts, owing to their level of cognitive development and their greater experience with the events. All children produce more complex and organized scripts when the script is more familiar to them, since greater experience yields greater knowledge of the event. These authors also hold that event representation is the first foundation on which children's cognitive operations rest, so that performing tasks about more familiar events is easier, because it engages a stronger representational schema.
The results point in the following directions: the "School day" script proved richer and more complex for 2nd-year pupils, whereas the "Trip to the supermarket" script proved richer and more complex for the 1st year; there are significant differences between "School day" and "Trip to the supermarket" in the number of pieces of information and of verb tenses (present indicative and simple past); there are significant differences between the 1st and 2nd years in the use of the present indicative, the simple past and the personal infinitive; 2nd-year children produce more complex and organized scripts than 1st-year children; there are differences between events on questions 1, 3, 4 and 5; the "School day" script had a facilitating effect on 1st-year pupils when they solved the exercises; and the 2nd year obtained the best results in solving the exercises for both scripts. We conclude that children in the 1st and 2nd years of schooling already have event representations, and that the scripts produced grow in complexity and organization as the subjects grow up, remaining relatively organized and complex for familiar events. We also conclude that, for these children, event representation acts as a cognitive organizer and facilitates problem solving.
APA, Harvard, Vancouver, ISO, and other styles
48

Reid, Norman. "Interpersonal relationship difficulties in borderline personality disorder." Thesis, University of Southampton, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264651.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Leach, Christopher. "Multi-Actor Multi-Criteria Decision Analysis of Wind Power Community Benefit Schemes." Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-370149.

Full text
Abstract:
Community benefit schemes in the context of wind power are increasingly provisioned by developers as a means of generating local socio-economic and environmental value, fostering social relations and strengthening acceptance. Determining an appropriate and effective benefit scheme can prove challenging, given the variation among exposed stakeholders, the diversity of schemes and the lack of decision-making guidance. A multi-criteria decision aid framework for identifying the most appropriate scheme(s) for a hypothetical wind power project is developed. The framework is based on the AHP and PROMETHEE II decision support tools, in which six (6) alternative schemes are assessed using the preferences of five (5) stakeholders and their relevant criteria. The framework was applied to a fictitious development on the island of Gotland. Results from the applied example indicate that the most locally suited outcomes were the ownership-based models. It is anticipated that the methodological framework can help identify the scheme(s) that respond to the needs and preferences of the locality. Moreover, a decision-making platform of this nature can provide practical support to developers, communities and local authorities, and contribute to a more effective and efficient development and negotiation process surrounding community benefit schemes.
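To make the second stage of such a pipeline concrete, here is a minimal, generic PROMETHEE II net-flow computation with the usual (step) preference function. The 3x3 score matrix, the scheme labels and the weights, which in the thesis's setting would come from AHP pairwise comparisons and stakeholder preferences, are invented for illustration.

```python
import numpy as np

def promethee_ii(scores, weights):
    """Net outranking flows: a higher phi means a more preferred alternative.

    scores[i, j] is the performance of alternative i on criterion j
    (larger is better here); weights are the criterion weights, e.g.
    derived from AHP pairwise-comparison matrices.
    """
    n = scores.shape[0]
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    pi = np.zeros((n, n))                  # aggregated preference indices
    for a in range(n):
        for b in range(n):
            if a != b:
                # usual criterion: full preference whenever a beats b
                pi[a, b] = (w * (scores[a] > scores[b])).sum()
    return (pi.sum(axis=1) - pi.sum(axis=0)) / (n - 1)

# Hypothetical schemes scored on three criteria (all maximized)
scores = np.array([[7.0, 2.0, 5.0],    # e.g. a community fund
                   [6.0, 8.0, 4.0],    # e.g. shared ownership
                   [3.0, 5.0, 6.0]])   # e.g. in-kind benefits
phi = promethee_ii(scores, weights=[0.5, 0.3, 0.2])
ranking = np.argsort(-phi)             # best alternative first
print(phi, ranking)
```

A complete PROMETHEE II treatment would also allow indifference/preference thresholds per criterion; the step function used here is the simplest member of that family.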
APA, Harvard, Vancouver, ISO, and other styles
50

Cienfuegos, Bernardo [Verfasser], Liselotte [Akademischer Betreuer] Schebek, and Andreas [Akademischer Betreuer] Eichhorn. "Analysis and optimization of sustainable transport processes of biomass for power plants / Bernardo Cienfuegos ; Liselotte Schebek, Andreas Eichhorn." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2020. http://d-nb.info/1213908078/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles