Dissertations / Theses on the topic 'Sensitivity Analysis'

To see the other types of publications on this topic, follow the link: Sensitivity Analysis.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Sensitivity Analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Wan, Din Wan Ibrahim. "Sensitivity analysis in tolerance allocation." Thesis, Queen's University Belfast, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.675480.

Full text
Abstract:
In a Computer Aided Design (CAD) model, the shape is usually defined as a boundary representation: a cell complex of faces, edges, and vertices. The boundary representation is generated from a system of geometric constraints, with parameters as degrees of freedom. The dimensions of the boundary representation are determined by the parameters in the CAD system used to model the part, and each parametric perturbation may produce different changes in the part model's shape and dimensions. Thus, one can compute dimensional sensitivity to parameter perturbations. A "Sensitivity Analysis" method is proposed to automatically quantify the dependencies of the Key Characteristic dimensions on each of the feature parameters in a CAD model. Once the sensitivities of the Key Characteristic dimensions to the feature parameters have been determined, the perturbation of each parameter needed to cause a desired change in a critical dimension can be computed. This methodology is then applied to real tolerancing applications in mechanical assembly models to show the efficiency of the newly developed strategy. The approach can identify where specific tolerances need to be applied to a Key Control Characteristic dimension, the range of part tolerances that could be used to achieve the desired Key Product Characteristic dimension tolerances, and whether existing part tolerances make it impossible to achieve those tolerances. This thesis describes a novel automated tolerance allocation process for an assembly model based on the parametric CAD sensitivity method. The objective of this research is to expose the relationship between the parameter sensitivity of a CAD design in a mechanical assembly product and tolerance design. This exposes potential new avenues of research into how to develop a standard process and methodology for geometric dimensioning and tolerancing (GD&T) in a digital design tools environment known as Digital MockUp (DMU).
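The central idea, quantifying how a Key Characteristic dimension responds to each feature parameter, can be sketched with finite differences on a stand-in dimension function. The function, its weights, and the parameter values below are hypothetical illustrations, not taken from the thesis:

```python
import numpy as np

# Hypothetical stand-in for a CAD evaluation: given the feature parameters p,
# return one Key Characteristic dimension. In the thesis, this evaluation
# regenerates the boundary representation and re-measures the model.
def key_dimension(p):
    # e.g. an assembly gap driven by three feature parameters
    return p[0] + 0.5 * p[1] - 0.25 * p[2]

def dimensional_sensitivities(f, p, h=1e-6):
    """Central-difference sensitivity of a dimension to each parameter."""
    p = np.asarray(p, dtype=float)
    grad = np.empty_like(p)
    for i in range(p.size):
        dp = np.zeros_like(p)
        dp[i] = h
        grad[i] = (f(p + dp) - f(p - dp)) / (2 * h)
    return grad

s = dimensional_sensitivities(key_dimension, [10.0, 4.0, 2.0])
# Given a desired change delta_d in the dimension, the perturbation of
# parameter i alone that produces it is approximately delta_d / s[i].
```
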
APA, Harvard, Vancouver, ISO, and other styles
2

Poveda, David. "Sensitivity analysis of capital projects." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/27990.

Full text
Abstract:
This thesis presents a very generalized model useful in the economic evaluation of capital projects. Net Present Value and the Internal Rate of Return are used as the performance measures. A theoretical framework to perform sensitivity analysis including bivariate analysis and sensitivity to functional forms is presented. During the development of the model, emphasis is given to the financial mechanisms available to fund large capital projects. Also, mathematical functions that can be used to represent cash flow profiles generated in each project phase are introduced. These profiles are applicable to a number of project types including oil and gas, mining, real estate and chemical process projects. Finally, a computer program has been developed which treats most of the theoretical concepts explored in this thesis, and an example of its application is presented. This program constitutes a useful teaching tool.
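The two performance measures used in the thesis, Net Present Value and the Internal Rate of Return, together with a one-at-a-time sensitivity to the discount rate, can be sketched in a few lines. The project cashflows below are illustrative, not from the thesis:

```python
def npv(rate, cashflows):
    """Net present value of cashflows c_0..c_n at a constant discount rate."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-9):
    """Bisection for the internal rate of return, assuming NPV decreases in the rate."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative capital project: outlay of 1000, then five equal inflows.
project = [-1000.0] + [300.0] * 5

# One-at-a-time sensitivity of NPV to the discount rate.
npvs = [npv(r, project) for r in (0.05, 0.10, 0.15)]
rate_of_return = irr(project)
```

Sweeping one input at a time like this is the simplest form of the sensitivity analysis the thesis generalizes to bivariate analysis and sensitivity to functional forms.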
Faculty of Applied Science
Department of Civil Engineering
Graduate
APA, Harvard, Vancouver, ISO, and other styles
3

Faria, Jairo Rocha de. "Second order topological sensitivity analysis." Laboratório Nacional de Computação Científica, 2008. http://www.lncc.br/tdmc/tde_busca/arquivo.php?codArquivo=141.

Full text
Abstract:
The topological derivative provides the sensitivity of a given shape functional with respect to an infinitesimal non-smooth domain perturbation (the insertion of a hole or an inclusion, for instance). Classically, this derivative comes from the second term of the topological asymptotic expansion and deals only with infinitesimal perturbations. For practical applications, however, we need to insert perturbations of finite size. Therefore, we consider further terms in the expansion, leading to the concept of higher-order topological derivatives. In particular, we observe that the topological-shape sensitivity method can be naturally extended to calculate these new terms, resulting in a systematic methodology for obtaining higher-order topological derivatives. To present these ideas, we initially apply the technique to some problems with exact solutions, where the topological asymptotic expansion is obtained up to third order. Later, we calculate the first- as well as the second-order topological derivative of the total potential energy associated with the Laplace equation in a two-dimensional domain perturbed by the insertion of a hole, considering homogeneous Neumann or Dirichlet boundary conditions, or of an inclusion whose thermal conductivity coefficient differs from that of the bulk material. With these results, we present numerical experiments showing the influence of the second-order topological derivative in the topological asymptotic expansion, which has two main features: it allows us to deal with perturbations of finite size and it provides a better descent direction in optimization and reconstruction algorithms.
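The expansion discussed above can be written schematically as follows. The notation is assumed from the standard topological-derivative literature, not copied from the thesis: ψ is the shape functional, Ω_ε the domain perturbed by a hole or inclusion of size ε centred at x̂, and f₁, f₂ are the scale functions:

```latex
\psi(\Omega_\varepsilon) \;=\; \psi(\Omega)
  \;+\; f_1(\varepsilon)\, D^{(1)}_T\psi(\hat{x})
  \;+\; f_2(\varepsilon)\, D^{(2)}_T\psi(\hat{x})
  \;+\; o\big(f_2(\varepsilon)\big),
\qquad f_2(\varepsilon) = o\big(f_1(\varepsilon)\big).
```

Truncating after the first-order term gives the classical topological derivative; retaining the second-order term is what permits finite-size perturbations and the improved descent directions mentioned in the abstract.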
APA, Harvard, Vancouver, ISO, and other styles
4

Witzgall, Zachary F. "Parametric sensitivity analysis of microscrews." Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4892.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2006.
Title from document title page. Document formatted into pages; contains xi, 73 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 52-53).
APA, Harvard, Vancouver, ISO, and other styles
5

鄧國良 and Kwok-leong Tang. "Sensitivity analysis of bootstrap methods." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31977479.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Fang, Xinding S. M. Massachusetts Institute of Technology. "Sensitivity analysis of fracture scattering." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/59789.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Earth, Atmospheric, and Planetary Sciences, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 40-42).
We use a 2-D finite difference method to numerically calculate the seismic response of a single finite fracture in a homogeneous medium. In our experiments, we use a point explosive source and ignore the free-surface effect, so the fracture scattering wave field contains two parts: P-to-P scattering and P-to-S scattering. We vary the fracture compliance within a range considered appropriate for field observations, 10⁻¹² m/Pa to 10⁻⁹ m/Pa, and investigate the variation of the scattering pattern of a single fracture as a function of normal and tangential fracture compliance. We show that P-to-P and P-to-S fracture scattering patterns are sensitive to the ratio of normal to tangential fracture compliance and to the incident angle, while radiation pattern amplitudes scale as the square of the compliance. We find that, for a vertical fracture system with the source located at the surface, most of the energy scattered by a fracture propagates downwards; specifically, the P-to-P scattered energy propagates down and forward while the P-to-S scattered energy propagates down and backward. Therefore, most of the fracture-scattered waves observed at the surface are first scattered by fractures and then reflected back to the surface by reflectors below the fracture zone, so the fracture-scattered waves have complex ray paths and are contaminated by the reflectivity of matrix reflectors.
by Xinding Fang.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
7

Masinde, Brian. "Birds' Flight Range: Sensitivity Analysis." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166248.

Full text
Abstract:
'Flight' is a program that uses flight mechanics to estimate the flight range of birds. This program, used by ornithologists, is only available for Windows OS. It requires manual input of body measurements and constants (one observation at a time), which is time-consuming. Therefore, the first task is to implement the methods in R, a programming language that runs on various platforms. The resulting package, named flying, has three advantages: first, it can estimate the flight range of multiple bird observations; second, it makes it easier to experiment with different settings (e.g. constants) in comparison to Flight; and third, it is open-source, making contribution relatively easy. Uncertainty and global sensitivity analyses are carried out on body measurements separately and with various constants. In doing so, the most influential body variables and constants are discovered. This task would have been near impossible to undertake using 'Flight'. A comparison is made among the results from a crude partitioning method, a generalized additive model, gradient boosting machines and a quasi-Monte Carlo method. All of these are based on Sobol's method for variance decomposition. The results show that fat mass drives the simulations, with other inputs playing a secondary role (for example, mechanical conversion efficiency and body drag coefficient).
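The variance-based Sobol indices underlying the methods in the abstract can be illustrated with a pick-freeze estimator on a toy model. The model, its weights, and the sample sizes are invented for illustration; the thesis analyzes the actual flight-range model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the flight-range model: one dominant input (think fat
# mass) and one secondary input. Weights are made up for illustration.
def model(x):
    return 4.0 * x[:, 0] + 1.0 * x[:, 1]

n = 20000
A = rng.uniform(size=(n, 2))
B = rng.uniform(size=(n, 2))

def first_order_sobol(i):
    """Pick-freeze (Saltelli-style) estimate of the first-order index S_i."""
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # freeze input i from the B sample
    fA, fB, fABi = model(A), model(B), model(ABi)
    total_var = np.var(np.concatenate([fA, fB]))
    return float(np.mean(fB * (fABi - fA)) / total_var)

S = [first_order_sobol(i) for i in range(2)]
# Analytically S = [16/17, 1/17] for this linear model: the dominant input
# accounts for almost all of the output variance.
```
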
APA, Harvard, Vancouver, ISO, and other styles
8

Tang, Kwok-leong. "Sensitivity analysis of bootstrap methods." [Hong Kong] : University of Hong Kong, 1993. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13793792.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Munster, Drayton William. "Sensitivity Enhanced Model Reduction." Thesis, Virginia Tech, 2013. http://hdl.handle.net/10919/23169.

Full text
Abstract:
In this study, we numerically explore methods of coupling sensitivity analysis to the reduced model in order to increase the accuracy of a proper orthogonal decomposition (POD) basis across a wider range of parameters. Various techniques based on polynomial interpolation and basis alteration are compared. These techniques are performed on a 1-dimensional reaction-diffusion equation and 2-dimensional incompressible Navier-Stokes equations solved using the finite element method (FEM) as the full scale model. The expanded model formed by expanding the POD basis with the orthonormalized basis sensitivity vectors achieves the best mixture of accuracy and computational efficiency among the methods compared.
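The basic construction, a POD basis from snapshots with orthonormalized sensitivity vectors appended to it, can be sketched as follows. The snapshot data and the sensitivity vector are synthetic stand-ins; the thesis uses FEM solutions and true parameter sensitivities:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic snapshot matrix (rows: spatial dof, columns: snapshots) that
# lies mostly in a 2-dimensional subspace plus tiny noise.
t = np.linspace(0, 1, 50)
snapshots = (np.outer(np.sin(np.pi * t), np.ones(30))
             + np.outer(np.cos(np.pi * t), np.linspace(0, 1, 30))
             + 1e-6 * rng.standard_normal((50, 30)))

# POD basis: left singular vectors, truncated by an energy criterion.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1     # number of modes kept
basis = U[:, :r]

# Sensitivity-enhanced variant (sketch): orthonormalize a sensitivity
# snapshot against the POD basis and append it. The vector here is
# hypothetical; the thesis computes actual solution sensitivities.
sens = (t**2).reshape(-1, 1)
q = sens - basis @ (basis.T @ sens)              # Gram-Schmidt step
q /= np.linalg.norm(q)
expanded = np.hstack([basis, q])
```
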
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
10

Konarski, Roman. "Sensitivity analysis for structural equation models." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq22893.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Sulieman, Hana. "Parametric sensitivity analysis in nonlinear regression." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0004/NQ27858.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Yu, Jianbin. "Flexible reinforced pavement structure-sensitivity analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0015/MQ52682.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Kolen, A. W. J., A. H. G. Rinnooy Kan, C. P. M. van Hoesel, and Albert Wagelmans. "Sensitivity Analysis of List Scheduling Heuristics." Massachusetts Institute of Technology, Operations Research Center, 1990. http://hdl.handle.net/1721.1/5268.

Full text
Abstract:
When jobs have to be processed on a set of identical parallel machines so as to minimize the makespan of the schedule, list scheduling rules form a popular class of heuristics. The order in which jobs appear on the list is assumed here to be determined by the relative size of their processing times; well known special cases are the LPT rule and the SPT rule, in which the jobs are ordered according to non-increasing and non-decreasing processing time respectively. When one of the job processing times is gradually increased, the schedule produced by a list scheduling rule will be affected in a manner reflecting its sensitivity to data perturbations. We analyze this phenomenon and obtain analytical support for the intuitively plausible notion that the sensitivity of a list scheduling rule increases with the quality of the schedule produced.
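A minimal LPT list scheduler makes the perturbation experiment concrete. The job data are invented for illustration:

```python
import heapq

# LPT list scheduling: order jobs by non-increasing processing time, then
# always assign the next job to the currently least-loaded machine.
def lpt_makespan(jobs, m):
    loads = [0.0] * m
    heapq.heapify(loads)
    for p in sorted(jobs, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return max(loads)

jobs = [7, 7, 6, 6, 5, 4, 4, 2]
base = lpt_makespan(jobs, 3)                     # makespan 15 on 3 machines

# Gradually increase one processing time and observe the response, the kind
# of data perturbation whose effect the paper analyzes.
perturbed = lpt_makespan([7.5, 7, 6, 6, 5, 4, 4, 2], 3)
```

Here increasing the largest processing time by 0.5 raises the LPT makespan from 15 to 15.5; tracking such responses across perturbations is exactly the sensitivity the paper studies.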
APA, Harvard, Vancouver, ISO, and other styles
14

Maginot, Jeremy. "Sensitivity analysis for multidisciplinary design optimization." Thesis, Cranfield University, 2007. http://dspace.lib.cranfield.ac.uk/handle/1826/5667.

Full text
Abstract:
When designing a complex industrial product, the designer often has to optimise simultaneously multiple conflicting criteria. Such a problem does not usually have a unique solution, but a set of non-dominated solutions known as Pareto solutions. In this context, the progress made in the development of more powerful but more computationally demanding numerical methods has led to the emergence of multi-disciplinary optimisation (MDO). However, running computationally expensive multi-objective optimisation procedures to obtain a comprehensive description of the set of Pareto solutions might not always be possible. The aim of this research is to develop a methodology to assist the designer in the multi-objective optimisation process. As a result, an approach to enhance the understanding of the optimisation problem and to gain some insight into the set of Pareto solutions is proposed. This approach includes two main components. First, global sensitivity analysis is used prior to the optimisation procedure to identify non-significant inputs, aiming to reduce the dimensionality of the problem. Second, once a candidate Pareto solution is obtained, the local sensitivity is computed to understand the trade-offs between objectives. Exact linear and quadratic approximations of the Pareto surface have been derived in the general case and are shown to be more accurate than those found in the literature. In addition, sufficient conditions to identify non-differentiable Pareto points have been proposed. Ultimately, this approach enables the designer to gain more knowledge about the multi-objective optimisation problem with the main concern of minimising the computational cost. A number of test cases have been considered to evaluate the approach. These include algebraic examples, for direct analytical validation, and more representative test cases to evaluate its usefulness.
In particular, an airfoil design problem has been developed and implemented to assess the approach on a typical engineering problem. The results demonstrate the potential of the methodology to achieve a reduction of computational time by concentrating the optimisation effort on the most significant variables. The results also show that the Pareto approximations provide the designer with essential information about trade-offs at reduced computational cost.
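The notion of non-dominated (Pareto) solutions in the abstract can be made concrete with a simple filter for a bi-objective minimization problem. The points are invented for illustration:

```python
# Non-dominated (Pareto) filtering for minimization: a point is kept unless
# some other point is at least as good in every objective and strictly
# better in at least one.
def pareto_front(points):
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (2.5, 2.5)]
front = pareto_front(pts)   # (3,3) and (2.5,2.5) are dominated by (2,2)
```
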
APA, Harvard, Vancouver, ISO, and other styles
15

North, Simon John. "High sensitivity mass spectrometric glycoprotein analysis." Thesis, Imperial College London, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.404993.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Johnson, Timothy J. "Sensitivity analysis of transputer workfarm topologies." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/27258.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Khan, Kamil Ahmad. "Sensitivity analysis for nonsmooth dynamic systems." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98156.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 369-377).
Nonsmoothness in dynamic process models can hinder conventional methods for simulation, sensitivity analysis, and optimization, and can be introduced, for example, by transitions in flow regime or thermodynamic phase, or through discrete changes in the operating mode of a process. While dedicated numerical methods exist for nonsmooth problems, these methods require generalized derivative information that can be difficult to furnish. This thesis presents some of the first automatable methods for computing these generalized derivatives. Firstly, Nesterov's lexicographic derivatives are shown to be elements of the plenary hull of Clarke's generalized Jacobian whenever they exist. Lexicographic derivatives thus provide useful local sensitivity information for use in numerical methods for nonsmooth problems. A vector forward mode of automatic differentiation is developed and implemented to evaluate lexicographic derivatives for finite compositions of simple lexicographically smooth functions, including the standard arithmetic operations, trigonometric functions, exp / log, piecewise differentiable functions such as the absolute-value function, and other nonsmooth functions such as the Euclidean norm. This method is accurate, automatable, and computationally inexpensive. Next, given a parametric ordinary differential equation (ODE) with a lexicographically smooth right-hand side function, parametric lexicographic derivatives of a solution trajectory are described in terms of the unique solution of a certain auxiliary ODE. A numerical method is developed and implemented to solve this auxiliary ODE, when the right-hand side function for the original ODE is a composition of absolute-value functions and analytic functions. Computationally tractable sufficient conditions are also presented for differentiability of the original ODE solution with respect to system parameters. 
Sufficient conditions are developed under which local inverse and implicit functions are lexicographically smooth. These conditions are combined with the results above to describe parametric lexicographic derivatives for certain hybrid discrete/continuous systems, including some systems whose discrete mode trajectories change when parameters are perturbed. Lastly, to eliminate a particular source of nonsmoothness, a variant of McCormick's convex relaxation scheme is developed and implemented for use in global optimization methods. This variant produces twice-continuously differentiable convex underestimators for composite functions, while retaining the advantageous computational properties of McCormick's original scheme. Gradients are readily computed for these underestimators using automatic differentiation.
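The flavor of propagating derivative information through a nonsmooth primitive can be sketched for the absolute-value function. This is only a first-order directional-derivative sketch with invented helper names; the thesis's lexicographic machinery handles general compositions and higher lexicographic orders systematically:

```python
# Forward-mode propagation of a value and a directional derivative through
# abs(): away from 0 the rule is sign(x) * d; at the kink x = 0 the
# directional derivative of |x| in direction d is |d|.
def abs_with_dir(x, d):
    if x > 0:
        return x, d
    if x < 0:
        return -x, -d
    return 0.0, abs(d)

# Composite example: f(x) = |x| + x**2, differentiated in direction d.
def f_with_dir(x, d):
    v1, d1 = abs_with_dir(x, d)
    return v1 + x * x, d1 + 2 * x * d

val, dval = f_with_dir(0.0, 1.0)   # at the kink: value 0, directional derivative 1
```
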
by Kamil Ahmad Khan.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
19

Saxena, Vibhu Prakash. "Sensitivity analysis of oscillating hybrid systems." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61899.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 137-140).
Many models of physical systems oscillate periodically and exhibit both discrete-state and continuous-state dynamics. These systems are called oscillating hybrid systems and find applications in diverse areas of science and engineering, including robotics, power systems, systems biology, and so on. A useful tool that can provide valuable insights into the influence of parameters on the dynamic behavior of such systems is sensitivity analysis. A theory for sensitivity analysis with respect to the initial conditions and/or parameters of oscillating hybrid systems is developed and discussed. Boundary-value formulations are presented for initial conditions, period, period sensitivity and initial conditions for the sensitivities. A difference equation analysis of general homogeneous equations and parametric sensitivity equations with linear periodic piecewise continuous coefficients is presented. It is noted that the monodromy matrix for these systems is not a fundamental matrix evaluated after one period, but depends on one. A three part decomposition of the sensitivities is presented based on the analysis. These three parts classify the influence of the parameters on the period, amplitude and relative phase of the limit-cycles of hybrid systems, respectively. The theory developed is then applied to the computation of sensitivity information for some examples of oscillating hybrid systems using existing numerical techniques and methods. The relevant information given by the sensitivity trajectory and its parts can be used in algorithms for different applications such as parameter estimation, control system design, stability analysis and dynamic optimization.
by Vibhu Prakash Saxena.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
20

Siannis, Fotios. "Sensitivity analysis for correlated survival models." Thesis, University of Warwick, 2001. http://wrap.warwick.ac.uk/78861/.

Full text
Abstract:
In this thesis we introduce a model for informative censoring. We assume that the joint distribution of the failure and censoring times depends on a parameter δ, which is in effect a measure of the possible dependence, and on a bias function B(t,θ). Knowledge of δ means that the joint distribution is fully specified, while B(t,θ) can be any function of the failure times. Being unable to draw inferences about δ, we perform a sensitivity analysis on the parameters of interest for small values of δ, based on a first-order approximation. This gives us an idea of how robust our estimates are in the presence of small dependencies, and whether the ignorability assumption can lead to misleading results. Initially we propose the model for the general parametric case. This is the simplest possible case, and we explore the different choices for the standardized bias function. After choosing a suitable function for B(t,θ), we explore the potential interpretation of δ through its relation to the correlation between quantities of the failure and censoring processes. Generalizing our parametric model, we propose a proportional hazards structure, allowing the presence of covariates. At this stage we present a data set from a leukemia study in which knowledge, under certain assumptions, of the censoring and death times of a number of patients allows us to explore the impact of informative censoring on our estimates. Following the analysis of these data we introduce an extension of Cox's partial likelihood, which we call the "modified Cox's partial likelihood", based on the assumption that censored times do contribute information about the parameters of interest. Finally, we perform parametric bootstraps to assess the validity of our model and to explore up to what values of the parameter δ our approximation holds.
APA, Harvard, Vancouver, ISO, and other styles
21

Santos, Miguel Duque. "UK pension funds : liability sensitivity analysis." Master's thesis, Instituto Superior de Economia e Gestão, 2019. http://hdl.handle.net/10400.5/19509.

Full text
Abstract:
Master's in Mathematical Finance
In the United Kingdom, most employers offer their employees some type of occupational pension scheme. One of these types is the Defined Benefit pension plan, in which an employer promises to pay a certain (defined) amount of pension benefit to the employee based on final salary and years of service. In this type of occupational pension scheme the employer bears all the risk, having to ensure the payment of the retirement benefits to the members when they fall due. Actuaries can estimate the future payments and discount them to the present date. This present value of the future payments is called the liability and can be compared with the amount of assets to check whether there is enough money at present to pay the promised future benefits. However, the liability is subject to variation over time because it is exposed to interest-rate and inflation risk. Taking this into account, Mercer developed a sophisticated investment strategy called the Liability Benchmark Portfolio, or LBP, a low-risk investment portfolio composed of zero-coupon government bonds that closely matches the sensitivities of the liabilities to shifts in inflation and interest rates. My task in the internship was to calculate these sensitivities of the liabilities, which the investment team requires to build an LBP. The risk is thereby reduced, and we are closer to ensuring that the members of the fund will receive their pension benefits.
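The bump-and-revalue calculation behind such liability sensitivities can be sketched as follows. All figures (cashflows, rates, the one-basis-point bump) are invented for illustration, not a real scheme's data:

```python
# Bump-and-revalue sensitivities of an illustrative liability profile.
def pv(cashflows, rate):
    return sum(c / (1 + rate) ** t for t, c in cashflows)

# (year, projected benefit payment): a flat 100 per year for 30 years.
liabs = [(t, 100.0) for t in range(1, 31)]

base = pv(liabs, 0.04)
pv01 = base - pv(liabs, 0.0401)      # fall in liability per 1bp rate rise

# Inflation sensitivity for index-linked benefits growing at rate i.
def linked(i):
    return [(t, 100.0 * (1 + i) ** t) for t in range(1, 31)]

ie01 = pv(linked(0.0201), 0.04) - pv(linked(0.02), 0.04)   # rise per 1bp inflation rise
```

An LBP then holds zero-coupon bonds whose own bumped revaluations reproduce these sensitivities, so rate and inflation moves affect assets and liabilities together.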
APA, Harvard, Vancouver, ISO, and other styles
22

Sen, Sharma Pradeep Kumar. "Sensitivity analysis of ship longitudinal strength." Thesis, Virginia Tech, 1988. http://hdl.handle.net/10919/45183.

Full text
Abstract:
The present work addresses the usefulness of a simple and efficient computer program (ULTSTR), originally developed for calculating the collapse moment, for a sensitivity analysis of ship longitudinal strength. Since the program is efficient, it can be used to obtain ultimate strength variability for various values of the parameters that affect the longitudinal strength, viz. yield stress, Young's modulus, thickness, initial imperfections, breadth, depth, etc. The results obtained with this approach are in good agreement with those obtained with the more complex nonlinear finite element program USAS, developed by the American Bureau of Shipping.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
23

DeBrunner, Victor Earl. "Sensitivity analysis of digital filter structures." Thesis, Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/104319.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Wycoff, Nathan Benjamin. "Gradient-Based Sensitivity Analysis with Kernels." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/104683.

Full text
Abstract:
Emulation of computer experiments via surrogate models can be difficult when the number of input parameters determining the simulation grows any greater than a few dozen. In this dissertation, we explore dimension reduction in the context of computer experiments. The active subspace method is a linear dimension reduction technique which uses the gradients of a function to determine important input directions. Unfortunately, we cannot expect to always have access to the gradients of our black-box functions. We thus begin by developing an estimator for the active subspace of a function using kernel methods to indirectly estimate the gradient. We then demonstrate how to deploy the learned input directions to improve the predictive performance of local regression models by ``undoing" the active subspace. Finally, we develop notions of sensitivities which are local to certain parts of the input space, which we then use to develop a Bayesian optimization algorithm which can exploit locally important directions.
Doctor of Philosophy
Increasingly, scientists and engineers developing new understanding or products rely on computers to simulate complex phenomena. Sometimes, these computer programs are so detailed that the amount of time they take to run becomes a serious issue. Surrogate modeling is the problem of trying to predict a computer experiment's result without having to actually run it, on the basis of having observed the behavior of similar simulations. Typically, computer experiments have different settings which induce different behavior. When there are many different settings to tweak, typical surrogate modeling approaches can struggle. In this dissertation, we develop a technique for deciding which input settings, or even which combinations of input settings, we should focus our attention on when trying to predict the output of the computer experiment. We then deploy this technique both to prediction of computer experiment outputs as well as to trying to find which of the input settings yields a particular desired result.
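As an illustrative aside (not code from the dissertation): the core of the active subspace idea described above can be sketched in a few lines of NumPy, assuming gradient samples of the function are available. The helper `active_subspace` and the quadratic toy function are hypothetical stand-ins.

```python
import numpy as np

def active_subspace(grads, k):
    """Estimate a k-dimensional active subspace from sampled gradients:
    eigendecompose the Monte Carlo estimate of C = E[grad f grad f^T]."""
    C = grads.T @ grads / grads.shape[0]      # average outer product of gradients
    eigvals, eigvecs = np.linalg.eigh(C)      # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1]         # reorder to descending
    return eigvals[order], eigvecs[:, order[:k]]

# Toy check: f(x) = (a . x)^2 varies only along the direction a.
rng = np.random.default_rng(0)
a = np.array([0.6, 0.8])                      # unit vector
X = rng.standard_normal((500, 2))
grads = 2.0 * (X @ a)[:, None] * a            # analytic gradient rows of (a.x)^2
vals, W = active_subspace(grads, 1)           # W[:, 0] aligns with a (up to sign)
```

In the gradient-free setting the dissertation targets, the `grads` matrix would itself be estimated indirectly (e.g. via kernel methods) rather than computed analytically as here.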
APA, Harvard, Vancouver, ISO, and other styles
25

Kern, Simon. "Sensitivity Analysis in 3D Turbine CFD." Thesis, KTH, Mekanik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210821.

Full text
Abstract:
A better understanding of turbine performance and its sensitivity to variations in the inlet boundary conditions is crucial in the quest of further improving the efficiency of aero engines. Within the research efforts to reach this goal, a high-pressure turbine test rig has been designed by Rolls-Royce Deutschland in cooperation with the Deutsches Zentrum für Luft- und Raumfahrt (DLR), the German Aerospace Center. The scope of the test rig is high-precision measurement of aerodynamic efficiency including the effects of film cooling and secondary air flows as well as the improvement of numerical prediction tools, especially 3D Computational Fluid Dynamics (CFD). A sensitivity analysis of the test rig based on detailed 3D CFD computations was carried out with the aim to quantify the influence of inlet boundary condition variations occurring in the test rig on the outlet capacity of the first stage nozzle guide vane (NGV) and the turbine efficiency. The analysis considered variations of the cooling and rimseal leakage mass flow rates as well as fluctuations in the inlet distributions of total temperature and pressure. The influence of an increased rotor tip clearance was also studied. This thesis covers the creation, calibration and validation of the steady state 3D CFD model of the full turbine domain. All relevant geometrical details of the blades, walls and the rimseal cavities are included, with the exception of the film cooling holes, which are replaced by a volume source term based cooling strip model to reduce the computational cost of the analysis. The high-fidelity CFD computation is run only on a sample of parameter combinations spread over the entire input parameter space, determined using the optimal latin hypercube technique. The subsequent sensitivity analysis is based on a Kriging response surface model fit to the sample data.
The results are discussed with regard to the planned experimental campaign on the test rig, and general conclusions concerning the impacts of the studied parameters on turbine performance are deduced.
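The Latin hypercube sampling step mentioned above can be sketched as follows. This is a basic, non-optimized variant for illustration only (the thesis uses an optimal latin hypercube, which additionally optimizes the point spread):

```python
import numpy as np

def latin_hypercube(n, dim, rng):
    """Plain Latin hypercube sample of n points in [0, 1]^dim:
    each dimension is cut into n equal-width bins, one point per bin."""
    u = rng.random((n, dim))                  # jitter inside each bin
    sample = np.empty((n, dim))
    for d in range(dim):
        # a random permutation assigns each point to a distinct bin
        sample[:, d] = (rng.permutation(n) + u[:, d]) / n
    return sample

rng = np.random.default_rng(1)
pts = latin_hypercube(8, 3, rng)              # 8 samples in 3 dimensions
```

Each column of `pts` then hits every one of the 8 strata exactly once, which is the stratification property that makes the subsequent response surface fit efficient with few expensive CFD runs.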
APA, Harvard, Vancouver, ISO, and other styles
26

Issac, Jason Cherian. "Sensitivity analysis of wing aeroelastic responses." Diss., This resource online, 1995. http://scholar.lib.vt.edu/theses/available/etd-06062008-164301/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Kennedy, Christopher Brandon. "GPT-Free Sensitivity Analysis for Reactor Depletion and Analysis." Thesis, North Carolina State University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3710730.

Full text
Abstract:

Introduced in this dissertation is a novel approach that forms a reduced-order model (ROM), based on subspace methods, that allows for the generation of response sensitivity profiles without the need to set up or solve the generalized inhomogeneous perturbation theory (GPT) equations. The new approach, denoted hereinafter as the generalized perturbation theory free (GPT-Free) approach, computes response sensitivity profiles in a manner that is independent of the number or type of responses, allowing for an efficient computation of sensitivities when many responses are required. Moreover, the reduction error associated with the ROM is quantified by means of a Wilks’ order statistics error metric denoted by the κ-metric.

Traditional GPT has been recognized as the most computationally efficient approach for performing sensitivity analyses of models with many input parameters, e.g. when forward sensitivity analyses are computationally overwhelming. However, most neutronics codes that can solve the fundamental (homogeneous) adjoint eigenvalue problem do not have GPT (inhomogeneous) capabilities unless envisioned during code development. Additionally, codes that use a stochastic algorithm, i.e. Monte Carlo methods, may have difficult or undefined GPT equations. When GPT calculations are available through software, the aforementioned efficiency gained from the GPT approach diminishes when the model has both many output responses and many input parameters. The GPT-Free approach addresses these limitations, first by only requiring the ability to compute the fundamental adjoint from perturbation theory, and second by constructing a ROM from fundamental adjoint calculations, constraining input parameters to a subspace. This approach bypasses the requirement to perform GPT calculations while simultaneously reducing the number of simulations required.

In addition to the reduction of simulations, a major benefit of the GPT-Free approach is explicit control of the reduced order model (ROM) error. When building a subspace using the GPT-Free approach, the reduction error can be selected based on an error tolerance for generic flux response-integrals. The GPT-Free approach then solves the fundamental adjoint equation with randomly generated sets of input parameters. Using properties from linear algebra, the fundamental k-eigenvalue sensitivities, spanned by the various randomly generated models, can be related to response sensitivity profiles by a change of basis. These sensitivity profiles are the first-order derivatives of responses with respect to input parameters. The quality of the basis is evaluated using the κ-metric, developed from Wilks’ order statistics, on the user-defined response functionals that involve the flux state-space. Because the κ-metric is formed from Wilks’ order statistics, a probability-confidence interval can be established around the reduction error based on user-defined responses such as fuel-flux, max-flux error, or other generic inner products requiring the flux. In general, the GPT-Free approach will produce a ROM with a quantifiable, user-specified reduction error.

This dissertation demonstrates the GPT-Free approach for steady state and depletion reactor calculations modeled by SCALE6, an analysis tool developed by Oak Ridge National Laboratory. Future work includes the development of GPT-Free for new Monte Carlo methods where the fundamental adjoint is available. Additionally, the approach in this dissertation examines only the first derivatives of responses, the response sensitivity profile; extension and/or generalization of the GPT-Free approach to higher order response sensitivity profiles is a natural area for future research.
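A minimal, purely illustrative sketch of the linear-algebra idea behind this kind of ROM construction (not the SCALE6/adjoint machinery of the dissertation): snapshots from randomly perturbed runs span a low-dimensional subspace, an SVD extracts an orthonormal basis, and the reduction error of a new vector is its projection residual. All names and dimensions below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hidden 3-D "active" subspace of a 50-D parameter space (illustrative only).
true_basis = np.linalg.qr(rng.standard_normal((50, 3)))[0]

# Snapshot matrix: each column plays the role of a sensitivity vector
# obtained from one randomly perturbed model run.
snapshots = true_basis @ rng.standard_normal((3, 20))

# SVD of the snapshots reveals the numerical rank and an orthonormal basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = int(np.sum(s > 1e-10 * s[0]))     # numerical rank -> subspace dimension
Q = U[:, :r]                          # orthonormal ROM basis

# Reduction error for a fresh vector drawn from the same hidden subspace:
v = true_basis @ rng.standard_normal(3)
err = np.linalg.norm(v - Q @ (Q.T @ v)) / np.linalg.norm(v)
```

In the dissertation, the error control is done statistically via the Wilks-based κ-metric on user-defined responses rather than by a direct residual check as here.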

APA, Harvard, Vancouver, ISO, and other styles
28

Guo, Jia. "Uncertainty analysis and sensitivity analysis for multidisciplinary systems design." Diss., Rolla, Mo. : Missouri University of Science and Technology, 2008. http://scholarsmine.mst.edu/thesis/pdf/Guo_09007dcc8066e905.pdf.

Full text
Abstract:
Thesis (Ph. D.)--Missouri University of Science and Technology, 2008.
Vita. The entire thesis text is included in the file. Title from title screen of thesis/dissertation PDF file (viewed May 28, 2009). Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
29

Ekberg, Marie. "Sensitivity analysis of optimization : Examining sensitivity of bottleneck optimization to input data models." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-12624.

Full text
Abstract:
The aim of this thesis is to examine the sensitivity of optimization in SCORE to the accuracy of particular input data models used in a simulation model of a production line. The purpose is to evaluate whether it is sufficient to model input data using sample means and default distributions instead of fitted distributions. An existing production line has been modeled for the simulation study. SCORE is based on maximizing any key performance measure of the production line while simultaneously minimizing the number of improvements necessary to achieve maximum performance. The sensitivity to the input models should become more apparent the more changes are required. The experiments concluded that the optimization struggles to obtain convergence when fitted distribution models are used. Configuring the input parameters of the optimization might yield better optimization results. The final conclusion is that the optimization is sensitive to the input data models used in the simulation model.
APA, Harvard, Vancouver, ISO, and other styles
30

Reineke, Jan. "Caches in WCET analysis : predictability, competitiveness, sensitivity /." Berlin : epubli, 2008. http://www.epubli.de/shop/showshopelement?pubId=882.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Chaban, Habib Fady Ruben. "A numerical sensitivity analysis of streamline simulation." Texas A&M University, 2004. http://hdl.handle.net/1969.1/1541.

Full text
Abstract:
Nowadays, field development strategy has become increasingly dependent on the results of reservoir simulation models. Reservoir studies demand fast and efficient results to make investment decisions, which requires a reasonable trade-off between accuracy and simulation time. One of the suitable options to fulfill this requirement is streamline reservoir simulation technology, which has become very popular in the last few years. Streamline (SL) simulation provides an attractive alternative to conventional reservoir simulation because it offers high computational efficiency and minimizes numerical diffusion and grid orientation effects. However, streamline methods have weaknesses in incorporating complex physical processes and can also suffer from numerical accuracy problems. The main objective of this research is to evaluate the numerical accuracy of the latest SL technology and examine the influence of different factors that may impact the solution of SL simulation models. An extensive number of numerical experiments based on sensitivity analysis were performed to determine the effects of various influential elements on the stability and results of the solution. Those experiments were applied to various models to identify the impact of factors such as mobility ratios, saturation mapping methods, number of streamlines, time step sizes, and gravity effects. This study provides a detailed investigation of some fundamental issues that are currently unresolved in streamline simulation.
APA, Harvard, Vancouver, ISO, and other styles
32

Gajev, Ivan. "Sensitivity and Uncertainty Analysis of BWR Stability." Licentiate thesis, KTH, Kärnkraftsäkerhet, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-26387.

Full text
Abstract:
Best Estimate codes are used for licensing, but with conservative assumptions. It is claimed that the uncertainties are covered by the conservatism of the calculation. As Nuclear Power Plants are applying for power up-rates and life extensions, evaluation of the uncertainties could help improve performance while staying below the limit of the safety margins. Unstable behavior of Boiling Water Reactors (BWRs), which is known to occur during operation at certain power and flow conditions, can cause SCRAM and decrease the economic performance of the plant. Performing an uncertainty analysis for BWR stability would give a better understanding of the phenomenon and would help to verify and validate (V&V) the codes used to predict the NPP behavior. This thesis reports an uncertainty study of the impact of Thermal-Hydraulic, Neutronic, and Numerical parameters on the prediction of the stability of the BWR within the framework of the OECD Ringhals-1 stability benchmark. The time domain code TRACE/PARCS was used in the analysis. The thesis is divided into two parts: a sensitivity study on Numerical Discretization Parameters (Nodalization, Time Step, etc.) and an uncertainty part. The sensitivity study on the numerical parameters (nodalization and time step) was done by refining all possible components until obtaining a space-time converged solution, i.e. one that further refinement does not change. When the space-time converged solution was compared to the initial discretization, a much better solution was obtained for both stability measures (Decay Ratio and Frequency) with the space-time converged model. Further on, important Neutronic and Thermal-Hydraulic parameters were identified and the uncertainty calculation was performed using the Propagation of Input Errors (PIE) methodology.
This methodology, also known as the GRS method, has been used because it has been tested and extensively verified by the industry, and because it allows identifying the most influential parameters using the Spearman Rank Correlation.
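As an illustrative sketch of the Spearman-based ranking step mentioned above (a toy stand-in model replaces the actual code runs; the simple rank computation assumes no ties, which holds for continuous samples):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    argsort of argsort yields 0-based ranks for tie-free samples."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Toy model standing in for the expensive code: output dominated by parameter 0,
# weakly driven by parameter 1, independent of parameter 2.
rng = np.random.default_rng(3)
P = rng.random((1000, 3))                         # sampled input parameters
out = 10 * P[:, 0] + 3 * P[:, 1] + 0.1 * rng.standard_normal(1000)

# Rank parameters by the magnitude of their rank correlation with the output.
ranking = sorted(range(3), key=lambda j: -abs(spearman(P[:, j], out)))
```

The most influential parameter comes out first in `ranking`, mirroring how the GRS/PIE methodology identifies dominant inputs from a modest number of random model runs.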
APA, Harvard, Vancouver, ISO, and other styles
33

Nahum, Carole. "Second order sensitivity analysis in mathematical programming." Thesis, McGill University, 1989. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=74349.

Full text
Abstract:
We consider a nonlinear mathematical program, with twice continuously differentiable functions.
If a point x_0 does not satisfy a certain Second Order Sufficient Condition (SOS) for optimality (that does not require any constraint qualification, see, e.g., BEN-ISRAEL, BEN-TAL and ZLOBEC (81)), then we prove that the knowledge of the second order properties (derivative, Hessian) of the functions is not enough to conclude that the point is optimal.
When the functions are continuously perturbed, what is the local behavior of an optimal solution x_0 and of the associated optimal value? The stability and sensitivity of the mathematical model are addressed. We present a new method for solving this problem. Our approach does not rely on the classical Lagrangian coefficients (which cannot always be defined) but rather on power series expansions, because we use the primal formulations of optimality.
In the regular case, when Strict complementarity slackness holds, we recover Fiacco's results (FIACCO (83)). On the other hand, when Strict complementarity slackness does not hold, we extensively generalize Shapiro's Theorems (SHAPIRO (85)) since we do not assume Robinson's second order condition (ROBINSON (80)) but the SOS condition.
In the non-regular case, no general algorithm for computing the derivative of the optimizing point with respect to the parameters had been presented up to now.
The approach is extended to analyze the evolution of the set of Pareto minima of a multiobjective nonlinear program. In particular, we define the derivative of a point-to-set map. Our notion seems more adequate than the contingent derivative (AUBIN (81)), though the latter can easily be deduced from the former. This makes it possible to obtain information about the sensitivity of the set of Pareto minima. A real-life example shows the usefulness and simplicity of our results. Also, an application of our method to industry planning (within a general framework of Input Optimization) is made in the ideal case of a linear model.
APA, Harvard, Vancouver, ISO, and other styles
34

Wu, QiongLi. "Sensitivity Analysis for Functional Structural Plant Modelling." Phd thesis, Ecole Centrale Paris, 2012. http://tel.archives-ouvertes.fr/tel-00719935.

Full text
Abstract:
Global sensitivity analysis has a key role to play in the design and parameterization of functional-structural plant growth models (FSPM), which combine the description of plant structural development (organogenesis and geometry) and functional growth (biomass accumulation and allocation). Models of this type generally describe many interacting processes, involve a large number of parameters, and their computational cost can be significant. The general objective of this thesis is to develop a proper methodology for the sensitivity analysis of functional-structural plant models and to investigate how sensitivity analysis can help in the design and parameterization of such models, as well as provide insights into the underlying biological processes. Our contribution can be summarized in two parts: from the methodology point of view, we first improved the performance of the existing Sobol' method for computing sensitivity indices in terms of computational efficiency, with better control of the estimation error of the Monte Carlo simulation, and we also designed a proper strategy of analysis for complex biophysical systems; from the application point of view, we implemented our strategy for three FSPMs of different levels of complexity and analyzed the results from different perspectives (model parameterization, model diagnosis).
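For illustration only, here is a generic pick-freeze Monte Carlo estimator of a first-order Sobol' index (the basic scheme the thesis builds on, not the improved estimator it develops), checked on an additive toy function with a known analytic index:

```python
import numpy as np

def sobol_first_order(f, dim, i, n, rng):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol' index S_i.

    Two independent input matrices A and B are drawn; C equals B except that
    column i is taken from A, so f(A) and f(C) share only input i."""
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    C = B.copy()
    C[:, i] = A[:, i]
    fA, fC = f(A), f(C)
    cov = np.mean(fA * fC) - np.mean(fA) * np.mean(fC)   # Cov(f(A), f(C))
    return cov / np.var(fA, ddof=1)

# Additive toy function Y = 4*X1 + 2*X2 + X3 with X ~ U(0,1)^3:
# Var(Y) = 21/12, so the analytic first-order index is S_1 = 16/21.
f = lambda X: 4 * X[:, 0] + 2 * X[:, 1] + X[:, 2]
rng = np.random.default_rng(2)
S1 = sobol_first_order(f, 3, 0, 200_000, rng)            # approx 0.762
```

The Monte Carlo estimation error of such estimators shrinks only as 1/sqrt(n), which is exactly the efficiency bottleneck the thesis addresses for expensive plant growth models.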
APA, Harvard, Vancouver, ISO, and other styles
35

Braswell, Tom. "SPACECRAFT LOADS PREDICTION VIA SENSITIVITY ANALYSIS AND OPTIMIZATION." Master's thesis, University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3532.

Full text
Abstract:
Discrepancies between the predicted responses of a finite element analysis (FEA) and reference data from test results arise for many reasons. Some are due to measurement errors, such as inaccurate sensors, noise in the acquisition system or environmental effects. Some are due to analyst errors precipitated by a lack of familiarity with the modeling or solver software. Still others are introduced by uncertainty in the governing physical relations (linear versus non-linear behavior), boundary conditions or the element material/geometrical properties. It is the uncertainty effects introduced by this last group that this study seeks to redress. The objective is the obtainment of model improvements that will reduce errors in predicted versus measured responses. This technique, whereby measured structural data is used to correct finite element model (FEM) errors, has become known as "model updating". Model updating modifies any or all of the mass, stiffness, and damping parameters of a FEM until an improved agreement between the FEA data and test data is achieved. Unlike direct methods, which produce a mathematical model representing a given state, the goal of FE model updating is to achieve an improved match between model and test data by making physically meaningful changes. This study replaces measured responses by reference output obtained from a FEA of a small spacecraft. This FEM is referred to as the "Baseline" model. A "Perturbed" model is created from this baseline by making prescribed changes to the various component masses. The degree of mass variation results from the level of confidence existing at a mature stage of the design cycle. 
Statistical mean levels of confidence are assigned based on the type of mass, of which there are three types:
• Concentrated masses – nonstructural, lumped mass formulation (uncoupled)
• Smeared masses – nonstructural mass over length or area, lumped mass formulation (uncoupled)
• Mass density – volumetric mass, lumped mass formulation (uncoupled)
A methodology is presented that accurately predicts the forces occurring at the interface between the spacecraft and the launch vehicle. The methodology quantifies the relationships between spacecraft mass variations and the interface accelerations in the form of sensitivity coefficients. These coefficients are obtained by performing design sensitivity/optimization analyses while updating the Perturbed model to correlate with the Baseline model. The interface forces are responses obtained from a frequency response analysis that runs within the optimization analysis. These forces arise due to the imposition of unit white noise applied across a frequency range extending up to 200 hertz, a cut-off frequency encompassing the lift-off energy required to elicit global mass response. The focus is on lift-off as it is characterized by base excitation, which produces the largest interface forces.
M.S.
Department of Civil and Environmental Engineering
Engineering and Computer Science
Civil Engineering MS
APA, Harvard, Vancouver, ISO, and other styles
36

Rios, Insua David. "Sensitivity analysis in multi-objective decision making." Thesis, University of Leeds, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.236870.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Yin, Peng. "Local sensitivity analysis and bias model selection." Thesis, University of Newcastle upon Tyne, 2014. http://hdl.handle.net/10443/2385.

Full text
Abstract:
Incomplete data analysis is often considered together with other problems such as model uncertainty or non-identifiability. In this thesis I will use the idea of local sensitivity analysis to address problems under both ignorable and non-ignorable missing data assumptions. One problem with ignorable missing data is the uncertainty of the covariate density. At the same time, misspecification of the missing data mechanism may happen as well. Incomplete data biases are then caused by different sources, and we aim to evaluate these biases and interpret them via bias parameters. Under non-ignorable missing data, the bias analysis can also be applied to analyse the difference from ignorability, and the missing data mechanism misspecification will be our primary interest in this case. Monte Carlo sensitivity analysis is proposed and developed to make bias model selection. This method combines the idea of conventional sensitivity analysis and Bayesian sensitivity analysis, with the imputation procedure and the bootstrap method used to simulate the incomplete dataset. The selection of bias models is based on the measure of the observation dataset and the simulated incomplete dataset by using the K nearest neighbour distance. We further discuss the non-ignorable missing data problem under a selection model, with our developed sensitivity analysis method used to identify the bias parameters in the missing data mechanism. Finally, we discuss robust confidence intervals in meta-regression models with publication bias and missing confounders.
APA, Harvard, Vancouver, ISO, and other styles
38

Zhu, Yitao. "Sensitivity Analysis and Optimization of Multibody Systems." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/71649.

Full text
Abstract:
Multibody dynamics simulations are currently widely accepted as valuable means for dynamic performance analysis of mechanical systems. The evolution of theoretical and computational aspects of the multibody dynamics discipline makes it conducive these days to other types of applications, in addition to pure simulation. One very important such application is design optimization for multibody systems. Sensitivity analysis of multibody system dynamics, which is performed before or in parallel with optimization, is essential for optimization. Current sensitivity approaches have limitations in efficiently performing sensitivity analysis for complex systems with respect to multiple design parameters. Thus, this study brings new contributions to the state of the art in analytical sensitivity approaches. A direct differentiation method is developed for multibody dynamic models that employ Maggi's formulation. An adjoint variable method is developed for explicit and implicit first order Maggi's formulations, the second order Maggi's formulation, and first and second order penalty formulations. The resulting sensitivities are employed to perform optimization of different multibody system case studies. The collection of benchmark problems includes a five-bar mechanism, a full vehicle model, and a passive dynamic robot. The five-bar mechanism is used to test and validate the sensitivity approaches derived in this work by comparing them with other sensitivity approaches. The full vehicle system is used to demonstrate the capability of the adjoint variable method based on the penalty formulation to perform sensitivity analysis and optimization for large and complex multibody systems with respect to multiple design parameters with high efficiency. In addition, a new multibody dynamics software library MBSVT (Multibody Systems at Virginia Tech) is developed in Fortran 2003, with forward kinematics and dynamics, sensitivity analysis, and optimization capabilities. 
Several different contact and friction models, which can be used to model point contact and surface contact, are developed and included in MBSVT. Finally, this study employs reference point coordinates and the penalty formulation to perform dynamic analysis for the passive dynamic robot, simplifying the modeling stage and making the robotic system more stable. The passive dynamic robot is also used to test and validate all the point contact and surface contact models developed in MBSVT.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
39

Svenson, Joshua. "Computer Experiments: Multiobjective Optimization and Sensitivity Analysis." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1306361734.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Hakami, Amir. "Direct sensitivity analysis in air quality models." Diss., Available online, Georgia Institute of Technology, 2004:, 2003. http://etd.gatech.edu/theses/available/etd-04082004-180202/unrestricted/hakami%5Famir%5F200312%5Fphd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Garmroodi, Doiran Mehdi. "Sensitivity Analysis for Future Grid Stability Studies." Thesis, The University of Sydney, 2016. http://hdl.handle.net/2123/15978.

Full text
Abstract:
The increasing penetration of converter-interfaced generators (CIGs) has raised concerns about the stability and security of future grids (FGs). These resources affect power system dynamics in many ways, including reducing system inertia, interacting with existing generators, and changing power flow paths. In this thesis, we carry out a sensitivity study to explore the structural impacts of CIGs on the damping and frequency stability of power systems. Initially, we study the impact of the intermittent power from wind turbine generators (WTGs) on the damping of the electromechanical oscillations in power systems. It will be shown that the inability of WTGs to provide synchronizing and damping torque to the system jeopardizes the small signal stability of power systems. Stable operation regions, in terms of wind penetration and tie-line power, are derived, and the impact of load flexibility on these regions is discussed. Next, we study the impact of the inertia distribution on the damping of the inter-area modes in power systems. It is shown that tie-line power plays a significant role in the damping of the inter-area modes. Moreover, we show that dynamic voltage control and inertia emulation can be utilized to improve the damping of the system. By developing an oscillatory recovery model for power system loads, we also study the impact of load oscillations on the damping of the inter-area modes. It is shown that load dynamics can have a significant influence on the electromechanical oscillations of power systems. Finally, the frequency support capability of WTGs is investigated and the performance of different techniques in utilizing the kinetic energy of WTGs to assist the frequency stability of power systems is evaluated. A novel time-variable droop characteristic is proposed to enhance the contribution of WTGs in supporting system frequency.
APA, Harvard, Vancouver, ISO, and other styles
42

Tiscareno-Lopez, Mario 1957. "Sensitivity analysis of the WEPP Watershed model." Thesis, The University of Arizona, 1991. http://hdl.handle.net/10150/292034.

Full text
Abstract:
Uncertainty in the hydrologic and soil erosion predictions of the WEPP Watershed model due to errors in model parameter estimation was identified through a sensitivity analysis based on the Monte Carlo method. Identification of parameter sensitivities provides guidance in the collection of parameter data in places where the model is intended to simulate soil erosion. Changes in model predictions caused by changes in model parameters were quantified for model applications in semi-arid rangeland watersheds. The magnitude of the changes in model parameters was defined by the spatial variability of the parameters in a watershed. Model sensitivities in predicting overland flow and soil erosion on hillslopes and channels are presented considering rainfall characteristics. The results show that WEPP predictions are very sensitive to the attributes that define a storm event (amount, duration, and ip). Model sensitivity to soil erosion parameters also depends on the type of storm event.
APA, Harvard, Vancouver, ISO, and other styles
43

Capozzi, Marco G. F. "FINITE ELEMENT ANALYSIS AND SENSITIVITY ANALYSIS FOR THE POTENTIAL EQUATION." MSSTATE, 2004. http://sun.library.msstate.edu/ETD-db/theses/available/etd-04222004-131403/.

Full text
Abstract:
A finite element solver has been developed for performing analysis and sensitivity analysis with Poisson's equation. An application of Poisson's equation in fluid dynamics is that of potential flow, in which case Poisson's equation reduces to Laplace's equation. The stiffness matrix and the sensitivity of the stiffness matrix are evaluated by direct integration, as opposed to numerical integration. This requires less computational effort and minimizes the sources of computational error. The capability of evaluating sensitivity derivatives has been added in order to perform design sensitivity analysis of non-lifting airfoils. The discrete-direct approach to sensitivity analysis is utilized in the current work. The potential flow equations and the sensitivity equations are computed by using a preconditioned conjugate gradient method. This method greatly reduces the time required to perform the analysis and the subsequent design optimization. The airfoil shape is updated at each design iteration by using a Bezier-Bernstein surface parameterization. The unstructured grid is adapted by considering the mesh as a system of interconnected springs. Numerical solutions from the flow solver are compared with analytical results obtained for a Joukowsky airfoil. Sensitivity derivatives are validated using carefully determined central finite difference values. The developed software is then used to perform inverse design of a NACA 0012 and a multi-element airfoil.
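A preconditioned conjugate gradient solver of the kind the abstract mentions can be sketched for a small symmetric positive-definite system. The Jacobi (diagonal) preconditioner used here is an assumption for illustration; the abstract does not state which preconditioner the solver uses:

```python
def pcg(A, b, tol=1e-10, max_iter=200):
    """Jacobi-preconditioned conjugate gradient for a dense SPD system,
    the class of solver applied to the discretized potential equation."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                      # residual b - A x, x = 0
    Minv = [1.0 / A[i][i] for i in range(n)]      # Jacobi preconditioner
    z = [Minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [Minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

For the 1-D Laplace matrix `[[2,-1,0],[-1,2,-1],[0,-1,2]]` with right-hand side `[1,0,1]`, the solver recovers the exact solution `[1,1,1]` to within tolerance.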
APA, Harvard, Vancouver, ISO, and other styles
44

Rapadamnaba, Robert. "Uncertainty analysis, sensitivity analysis, and machine learning in cardiovascular biomechanics." Thesis, Montpellier, 2020. http://www.theses.fr/2020MONTS058.

Full text
Abstract:
This thesis follows on from a recent study conducted by researchers from the University of Montpellier, with the aim of proposing to the scientific community an inversion procedure capable of noninvasively estimating patient-specific blood pressure in cerebral arteries. Its first objective is, on the one hand, to examine the accuracy and robustness of the inversion procedure proposed by these researchers with respect to various sources of uncertainty related to the models used, the formulated assumptions, and the patient-specific clinical data, and on the other hand, to set a stopping criterion for the ensemble-Kalman-filter-based algorithm used in their inversion procedure. For this purpose, an uncertainty analysis and several sensitivity analyses are carried out. The second objective is to illustrate how machine learning, mainly focusing on convolutional neural networks, can be a very good alternative to the time-consuming and costly inversion procedure implemented by these researchers for cerebral blood pressure estimation. An approach taking into account the uncertainties related to the processing of the patient-specific medical images and to the blood flow model assumptions, such as assumptions about boundary conditions and physical and physiological parameters, is first presented to quantify uncertainties in the inversion procedure outcomes. Uncertainties related to medical image segmentation are modelled using a Gaussian distribution, and uncertainties related to the choice of modelling assumptions are analyzed by considering several possible scenarios. From this approach, it emerges that the uncertainties in the procedure results are of the same order of magnitude as those related to segmentation errors. Furthermore, this analysis shows that the procedure outcomes are very sensitive to the assumptions made about the model boundary conditions. In particular, the choice of symmetrical Windkessel boundary conditions for the model proves to be the most relevant for the case of the patient under study. Next, an approach for ranking the parameters estimated during the inversion procedure in order of importance and setting a stopping criterion for the algorithm used in the inversion procedure is presented. The results of this strategy show, on the one hand, that most of the model's proximal resistances are the most important parameters for blood flow estimation in the internal carotid arteries and, on the other hand, that the inversion algorithm can be stopped as soon as a reasonable convergence threshold for the most influential parameters is reached. Finally, a new numerical platform, based on machine learning, that estimates patient-specific blood pressure in the cerebral arteries much faster than the inversion procedure but with the same accuracy, is presented. The application of this platform to the patient-specific data used in the inversion procedure provides a noninvasive, real-time estimate of the patient's cerebral pressure consistent with the inversion procedure estimate.
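The stopping criterion described above, halting the ensemble Kalman filter once the most influential parameters have converged, can be sketched as follows. The dictionary-based history format, the parameter name, and the tolerance are illustrative assumptions, not details from the thesis:

```python
def converged(history, influential, rel_tol=0.01, window=3):
    """Stopping rule sketch: halt the ensemble Kalman filter once each of
    the most influential parameters has changed by less than rel_tol
    (relative to its latest value) over the last `window` iterations.
    `history` is a list of {parameter_name: estimate} dicts, one per
    filter iteration."""
    if len(history) < window + 1:
        return False
    for name in influential:
        recent = [h[name] for h in history[-(window + 1):]]
        ref = abs(recent[-1]) or 1.0
        if any(abs(recent[i + 1] - recent[i]) / ref > rel_tol
               for i in range(window)):
            return False
    return True
```

Monitoring only the influential parameters (here, the proximal resistances the thesis identifies) avoids waiting on parameters whose value barely affects the estimated flows.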
APA, Harvard, Vancouver, ISO, and other styles
45

Wang, Mengchao. "Sensitivity analysis and evolutionary optimization for building design." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/16282.

Full text
Abstract:
In order to achieve global carbon reduction targets, buildings must be designed to be energy efficient. Building performance simulation methods, together with sensitivity analysis and evolutionary optimization methods, can be used to generate design solution and performance information that can be used in identifying energy and cost efficient design solutions. Sensitivity analysis is used to identify the design variables that have the greatest impacts on the design objectives and constraints. Multi-objective evolutionary optimization is used to find a Pareto set of design solutions that optimize the conflicting design objectives while satisfying the design constraints; building design being an inherently multi-objective process. For instance, there is commonly a desire to minimise both the building energy demand and capital cost while maintaining thermal comfort. Sensitivity analysis has previously been coupled with a model-based optimization in order to reduce the computational effort of running a robust optimization and in order to provide an insight into the solution sensitivities in the neighbourhood of each optimum solution. However, there has been little research conducted to explore the extent to which the solutions found from a building design optimization can be used for a global or local sensitivity analysis, or the extent to which the local sensitivities differ from the global sensitivities. It has also been common for the sensitivity analysis to be conducted using continuous variables, whereas building optimization problems are more typically formulated using a mixture of discretized-continuous variables (with physical meaning) and categorical variables (without physical meaning). 
This thesis investigates three main questions: the form of global sensitivity analysis most appropriate for use with problems having mixed discretized-continuous and categorical variables; the extent to which samples taken from an optimization run can be used in a global sensitivity analysis, given that the optimization process causes these solutions to be biased; and the extent to which global and local sensitivities differ. The experiments conducted in this research are based on the mid-floor of a commercial office building having 5 zones, and which is located in Birmingham, UK. The optimization and sensitivity analysis problems are formulated with 16 design variables, including orientation, heating and cooling setpoints, window-to-wall ratios, start and stop time, and construction types. The design objectives are the minimisation of both energy demand and capital cost, with solution infeasibility being a function of occupant thermal comfort. It is concluded that a robust global sensitivity analysis can be achieved using stepwise regression with the use of bidirectional elimination, rank transformation of the variables and BIC (Bayesian information criterion). It is concluded that, when the optimization is based on a genetic algorithm, solutions taken from the start of the optimization process can be reliably used in a global sensitivity analysis, and therefore there is no need to generate a separate set of random samples for use in the sensitivity analysis. The extent to which the convergence of the variables during the optimization can be used as a proxy for the variable sensitivities has also been investigated. It is concluded that it is not possible to identify the relative importance of variables through the optimization, even though the most important variable exhibited fast and stable convergence.
Finally, it is concluded that differences exist in the variable rankings resulting from the global and local sensitivity methods, although the top-ranked variables from each approach tend to be the same. It is also concluded that the sensitivity of the objectives and constraints to all variables is obtainable through a local sensitivity analysis, but that a global sensitivity analysis is only likely to identify the most important variables. The repeatability of these conclusions has been investigated and confirmed by applying the methods to the example design problem with the building located in different climates (Birmingham, UK; San Francisco, US; and Chicago, US).
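The sensitivity procedure the thesis settles on (stepwise regression with bidirectional elimination, rank transformation, and BIC) can be sketched in plain Python. The helper names and the toy selection loop below are illustrative, not the thesis implementation:

```python
import math

def _ols_rss(cols, y):
    """Residual sum of squares of an ordinary least-squares fit.
    Normal equations + Gaussian elimination; fine for a few columns."""
    n, k = len(y), len(cols)
    A = [[sum(cols[i][t] * cols[j][t] for t in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(cols[i][t] * y[t] for t in range(n)) for i in range(k)]
    for c in range(k):                          # forward elimination
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    beta = [0.0] * k
    for c in reversed(range(k)):                # back substitution
        beta[c] = (b[c] - sum(A[c][j] * beta[j]
                              for j in range(c + 1, k))) / A[c][c]
    return sum((y[t] - sum(cols[i][t] * beta[i] for i in range(k))) ** 2
               for t in range(n))

def rank_transform(xs):
    """Replace values by their ranks, as recommended before the regression."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rk, i in enumerate(order):
        r[i] = float(rk + 1)
    return r

def stepwise_bic(candidates, y):
    """Bidirectional stepwise selection: at each step add or drop whichever
    variable most improves the BIC, stopping when no move helps."""
    n = len(y)
    intercept = [1.0] * n
    def bic(names):
        rss = max(_ols_rss([intercept] + [candidates[m] for m in names], y),
                  1e-12)
        return n * math.log(rss / n) + (len(names) + 1) * math.log(n)
    chosen, best = [], bic([])
    while True:
        moves = [sorted(chosen + [m]) for m in candidates if m not in chosen]
        moves += [[m for m in chosen if m != d] for d in chosen]
        scored = sorted((bic(mv), mv) for mv in moves)
        if not scored or scored[0][0] >= best - 1e-9:
            return chosen
        best, chosen = scored[0]
```

The BIC's complexity penalty is what keeps the selection from absorbing weakly informative variables, which matters when the samples come from a biased optimization run rather than a random design.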
APA, Harvard, Vancouver, ISO, and other styles
46

Van, Hoesel Stan, and Albert Wagelmans. "Sensitivity Analysis of the Economic Lot-Sizing Problem." Massachusetts Institute of Technology, Operations Research Center, 1990. http://hdl.handle.net/1721.1/5146.

Full text
Abstract:
In this paper we study sensitivity analysis of the uncapacitated single-level economic lot-sizing problem, which was introduced by Wagner and Whitin about thirty years ago. In particular, we are concerned with the computation of the maximal ranges in which the numerical problem parameters may vary individually such that a solution already obtained remains optimal. Only recently was it discovered that faster algorithms than the Wagner-Whitin algorithm exist to solve the economic lot-sizing problem. Moreover, these algorithms reveal that the problem has more structure than was previously recognized. When performing the sensitivity analysis we exploit these newly obtained insights.
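The underlying problem and a brute-force version of the sensitivity question can be sketched as follows. The paper derives such ranges analytically; `setup_cost_range` here simply rescans the dynamic program, and its bounds and step are illustrative:

```python
def wagner_whitin(demand, setup, hold):
    """O(n^2) dynamic program for the uncapacitated single-level
    lot-sizing problem: serve demand s..t from one production run in
    period s, paying a setup cost per run and linear holding costs."""
    n = len(demand)
    cost = [0.0] + [float("inf")] * n
    prod = [0] * (n + 1)
    for t in range(1, n + 1):
        for s in range(1, t + 1):
            c = cost[s - 1] + setup + sum(
                hold * (j - s) * demand[j - 1] for j in range(s, t + 1))
            if c < cost[t]:
                cost[t], prod[t] = c, s
    plan, t = set(), n
    while t > 0:                        # recover the production periods
        plan.add(prod[t])
        t = prod[t] - 1
    return cost[n], plan

def setup_cost_range(demand, setup, hold, lo=0.0, hi=200.0, step=1.0):
    """Brute-force stand-in for the paper's analytical ranges: scan the
    setup cost and report the interval over which the optimal plan
    (the set of production periods) is unchanged."""
    _, base = wagner_whitin(demand, setup, hold)
    ks = [lo + i * step for i in range(int(round((hi - lo) / step)) + 1)]
    same = [k for k in ks if wagner_whitin(demand, k, hold)[1] == base]
    return min(same), max(same)
```

For demand `[10, 20, 30]`, setup cost 100, and unit holding cost 1, a single run in period 1 is optimal (cost 180), and the plan stays optimal for any setup cost of at least 60.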
APA, Harvard, Vancouver, ISO, and other styles
47

Barthelemy, Bruno. "Accuracy analysis of the semi-analytical method for shape sensitivity analysis." Diss., Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/74754.

Full text
Abstract:
The semi-analytical method, widely used for calculating derivatives of static response with respect to design variables for structures modeled by finite elements, is studied in this research. The research shows that the method can have serious accuracy problems for shape design variables in structures modeled by beam, plate, truss, frame, and solid elements. Local and global indices are developed to test the accuracy of the semi-analytical method. The local indices provide insight into the problem of large errors for the semi-analytical method. Local error magnification indices are developed for beam and plane truss structures, and several examples showing the severity of the problem are presented. The global index provides a general method for checking the accuracy of the semi-analytical method for any type of model. It characterizes the difference in errors between a general finite-difference method and the semi-analytical method. Moreover, a method for improving the accuracy of the semi-analytical method (when possible) is provided. Examples are presented showing the use of the global index.
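The construction being studied can be illustrated on a one-degree-of-freedom stand-in: the stiffness is differentiated by finite differences while the response derivative keeps its analytical form. The cantilever model and the step sizes below are illustrative assumptions, not examples from the dissertation:

```python
def semi_analytical_dudL(P, E, I, L, h):
    """Semi-analytical shape derivative on a one-DOF cantilever model:
    stiffness K = 3EI/L^3 is differentiated by a central finite
    difference with step h, then combined analytically via
    du/dL = -K^{-1} (dK/dL) u. The dissertation studies the same
    construction for full finite-element models, where the errors
    can be severe."""
    K = 3.0 * E * I / L**3
    u = P / K                                   # tip displacement under load P
    dK = (3.0 * E * I / (L + h)**3
          - 3.0 * E * I / (L - h)**3) / (2.0 * h)
    return -dK * u / K

def exact_dudL(P, E, I, L):
    """Analytic derivative of the tip displacement u = P L^3 / (3 E I)."""
    return P * L**2 / (E * I)
```

With a small step the semi-analytical value matches the analytic derivative closely; with a coarse step the finite-difference error in the stiffness derivative propagates directly into the sensitivity.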
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
48

Kobayashi, Izumi. "Sensitivity analysis of the topology of classification trees." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA372965.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Kafali, Pinar. "Evaluation Of Sensitivity Of Metu Gait Analysis System." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608390/index.pdf.

Full text
Abstract:
Gait analysis is one of the primary applications of biomechanics and deals with the scientific description of human locomotion, which is otherwise a qualitative concept as observed through the human eye. The METU Gait Analysis Laboratory has been operating in various fields of gait and motion analysis since 1999. Although several studies have previously been undertaken on the METU Gait Analysis System, until now the effects of methodology- and protocol-related system parameters on kinematic analysis results have not been fully and exhaustively investigated. This thesis presents an assessment of the sensitivity and compatibility of the METU Gait Analysis Protocol with respect to variations in experimental methodology and the implementation of various joint center estimation methods, performed through investigation of the resulting joint kinematics. It is believed that the performance and reliability of the METU Gait Analysis System will be improved based on the findings of this study.
APA, Harvard, Vancouver, ISO, and other styles
50

Ezertas, Ahmet Alper. "Sensitivity Analysis Using Finite Difference And Analytical Jacobians." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12611067/index.pdf.

Full text
Abstract:
The flux Jacobian matrices, the elements of which are the derivatives of the flux vectors with respect to the flow variables, need to be evaluated in implicit flow solutions and in analytical sensitivity analysis methods. The main motivation behind this thesis is to explore the accuracy of numerically evaluated flux Jacobian matrices and the effects of the errors in those matrices on the convergence of the flow solver, on the accuracy of the sensitivities, and on the performance of the design optimization cycle. To meet these objectives, a flow solver that uses the exact Newton's method with a direct sparse matrix solution technique is developed for the Euler flow equations. The flux Jacobian is evaluated both numerically and analytically for different upwind flux discretization schemes with second-order MUSCL face interpolation. Numerical flux Jacobian matrices derived with a wide range of finite difference perturbation magnitudes were compared with analytically derived ones, and the optimum perturbation magnitude, which minimizes the error in the numerical evaluation, was sought. The factors that impede accuracy are analyzed, and a simple formulation for the optimum perturbation magnitude is derived. The sensitivity derivatives are evaluated by the direct differentiation method with a discrete approach. Reuse of the LU factors of the flux Jacobian evaluated in the flow solution enables efficient sensitivity analysis. The sensitivities calculated with the analytical Jacobian are compared with those calculated with numerically evaluated Jacobian matrices. Both internal and external flow problems with varying flow speeds, grid types, and sizes are solved with different discretization schemes. In these problems, when the optimum perturbation magnitude is used for numerical Jacobian evaluation, the errors in the Jacobian matrix and in the sensitivities are minimized. Finally, the effect of the accuracy of the sensitivities on the design optimization cycle is analyzed for an inverse airfoil design performed with least-squares minimization.
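The truncation/round-off trade-off behind the optimum perturbation magnitude can be illustrated with a scalar function. The scan below is a generic sketch, not the formulation derived in the thesis:

```python
import math

def fd_error(f, dfdx, x, h):
    """Error of a forward-difference derivative at perturbation size h."""
    return abs((f(x + h) - f(x)) / h - dfdx(x))

def best_perturbation(f, dfdx, x):
    """Scan perturbation magnitudes 1e-1 .. 1e-14 and return the one with
    the smallest derivative error: large h suffers truncation error, tiny
    h suffers round-off, the trade-off the thesis analyzes for numerical
    flux Jacobians."""
    return min((fd_error(f, dfdx, x, 10.0 ** e), 10.0 ** e)
               for e in range(-1, -15, -1))[1]
```

For a well-scaled smooth function in double precision, the winner of such a scan sits near the square root of machine epsilon, which is why both too-large and too-small perturbations degrade the numerical Jacobian and the sensitivities computed from it.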
APA, Harvard, Vancouver, ISO, and other styles
