Dissertations / Theses on the topic 'Global response sensitivity analysis'


1

Khalid, Adeel S. "Development and Implementation of Rotorcraft Preliminary Design Methodology using Multidisciplinary Design Optimization." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14013.

Abstract:
A formal framework is developed and implemented in this research for preliminary rotorcraft design using IPPD methodology. All the technical aspects of design are considered, including vehicle engineering, dynamic analysis, stability and control, aerodynamic performance, propulsion, transmission design, weight and balance, noise analysis and economic analysis. The design loop starts with a detailed analysis of requirements. A baseline is selected and upgrade targets are identified depending on the mission requirements. An Overall Evaluation Criterion (OEC) is developed that is used to measure the goodness of the design or to compare the design with competitors. The requirements analysis and baseline upgrade targets lead to the initial sizing and performance estimation of the new design. The digital information is then passed to disciplinary experts, where the detailed disciplinary analyses are performed. Information is transferred from one discipline to another as the design loop is iterated. To coordinate all the disciplines in the product development cycle, Multidisciplinary Design Optimization (MDO) techniques, e.g. All At Once (AAO) and Collaborative Optimization (CO), are suggested. The methodology is implemented on a Light Turbine Training Helicopter (LTTH) design. Detailed disciplinary analyses are integrated through a common platform for efficient and centralized transfer of design information from one discipline to another in a collaborative manner. Several disciplinary and system-level optimization problems are solved. After all the constraints of a multidisciplinary problem have been satisfied and an optimal design has been obtained, it is compared with the initial baseline, using the earlier developed OEC, to measure the level of improvement achieved. Finally, a digital preliminary design is proposed. The proposed design methodology provides an automated design framework, facilitates parallel design by removing disciplinary interdependency, makes current and updated information available to all disciplines at all times through a central collaborative repository, reduces overall design time, and yields an optimized design.
2

Adetula, Bolade Adewale. "Global sensitivity analysis of reactor parameters / Bolade Adewale Adetula." Thesis, North-West University, 2011. http://hdl.handle.net/10394/5561.

Abstract:
Calculations of reactor parameters of interest (such as neutron multiplication factors, decay heat, reaction rates, etc.) are often based on models which depend on groupwise neutron cross sections. The uncertainties associated with these neutron cross sections are propagated to the final result of the calculated reactor parameters. There is a need to characterize this uncertainty and to be able to apportion the uncertainty in a calculated reactor parameter to the different sources of uncertainty in the groupwise neutron cross sections; this procedure is known as sensitivity analysis. The focus of this study is the application of a modified global sensitivity analysis technique to calculations of reactor parameters that are dependent on groupwise neutron cross sections. Sensitivity analysis can help in identifying the important neutron cross sections for a particular model, and also helps in establishing best-estimate optimized nuclear reactor physics models with reduced uncertainties. In this study, our approach to sensitivity analysis is similar to the variance-based global sensitivity analysis technique, which is robust, has a wide range of applicability and provides accurate sensitivity information for most models. However, this technique requires input variables to be mutually independent. A modification to this technique, which allows one to deal with input variables that are block-wise correlated and normally distributed, is presented. The implementation of the modified technique involves the calculation of multi-dimensional integrals, which can be prohibitively expensive to compute. Numerical techniques specifically suited to the evaluation of multidimensional integrals, namely Monte Carlo, quasi-Monte Carlo and sparse grid methods, are used, and their efficiency is compared. The modified technique is illustrated and tested on a two-group cross-section-dependent problem. In all the cases considered, the results obtained with sparse grids achieved much better accuracy while using a significantly smaller number of samples.
Thesis (M.Sc. Engineering Sciences (Nuclear Engineering))--North-West University, Potchefstroom Campus, 2011.
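The variance-based technique described above can be illustrated with a minimal Monte Carlo "pick-freeze" sketch for independent inputs (the unmodified case; the thesis's contribution handles block-wise correlated inputs, which this sketch does not). The Ishigami-style test function is a stand-in assumption, and the estimators are the standard Saltelli (2010) first-order and Jansen (1999) total-effect forms, not the thesis's implementation.

```python
import numpy as np

def model(x):
    # Ishigami-style stand-in for a cross-section-dependent reactor parameter.
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(0)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent input samples
B = rng.uniform(-np.pi, np.pi, (n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # "pick-freeze": resample input i only
    yABi = model(ABi)
    S1 = np.mean(yB * (yABi - yA)) / var_y        # first-order index (Saltelli, 2010)
    ST = 0.5 * np.mean((yA - yABi) ** 2) / var_y  # total-effect index (Jansen, 1999)
    print(f"x{i + 1}: S1 = {S1:.3f}, ST = {ST:.3f}")
```

Quasi-Monte Carlo or sparse-grid quadrature would replace the plain uniform sampling of A and B to reduce the integration cost, which is the comparison the thesis carries out.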
3

Koneshwaran, Sivalingam. "Blast response and sensitivity analysis of segmental tunnel." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/78619/1/Sivalingam_Koneshwaran_Thesis.pdf.

Abstract:
This research treated the response of underground transportation tunnels to surface blast loads using advanced computer simulation techniques. The influences of important parameters, such as tunnel material, geometrical configuration of segments and surrounding soil were investigated. The findings of this research offer significant new information on the blast performance of underground tunnels and will contribute towards future civil engineering applications.
4

Dell'Oca, Aronne, Monica Riva, and Alberto Guadagnini. "Moment-based metrics for global sensitivity analysis of hydrological systems." Copernicus Gesellschaft mbH, 2017. http://hdl.handle.net/10150/626437.

Abstract:
We propose new metrics to assist global sensitivity analysis, GSA, of hydrological and Earth systems. Our approach allows assessing the impact of uncertain parameters on main features of the probability density function, pdf, of a target model output, y. These include the expected value of y, the spread around the mean and the degree of symmetry and tailedness of the pdf of y. Since reliable assessment of higher-order statistical moments can be computationally demanding, we couple our GSA approach with a surrogate model, approximating the full model response at a reduced computational cost. Here, we consider the generalized polynomial chaos expansion (gPCE), other model reduction techniques being fully compatible with our theoretical framework. We demonstrate our approach through three test cases, including an analytical benchmark, a simplified scenario mimicking pumping in a coastal aquifer and a laboratory-scale conservative transport experiment. Our results allow ascertaining which parameters can impact some moments of the model output pdf while being uninfluential to others. We also investigate the error associated with the evaluation of our sensitivity metrics by replacing the original system model through a gPCE. Our results indicate that the construction of a surrogate model with increasing level of accuracy might be required depending on the statistical moment considered in the GSA. The approach is fully compatible with (and can assist the development of) analysis techniques employed in the context of reduction of model complexity, model calibration, design of experiment, uncertainty quantification and risk assessment.
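The moment-based idea admits a brute-force given-data reading: bin each parameter, compute the conditional mean, variance, skewness and kurtosis of the output per bin, and average the absolute deviation from the unconditional moment. The sketch below is an illustrative assumption (simple binning, a toy model), not the paper's estimator, which evaluates such metrics through a gPCE surrogate.

```python
import numpy as np
from scipy import stats

def moment_sensitivity(x, y, bins=20):
    """For each input, average absolute deviation of the conditional moments of y
    (given that input) from the unconditional moments, normalized where possible.
    Brute-force binning stand-in for moment-based GSA metrics."""
    moments = {"mean": np.mean, "var": np.var, "skew": stats.skew, "kurt": stats.kurtosis}
    out = {}
    for name, m in moments.items():
        m0 = m(y)
        den = abs(m0) if abs(m0) > 1e-12 else 1.0
        sens = []
        for i in range(x.shape[1]):
            edges = np.quantile(x[:, i], np.linspace(0.0, 1.0, bins + 1))
            idx = np.clip(np.searchsorted(edges, x[:, i]) - 1, 0, bins - 1)
            cond = np.array([m(y[idx == b]) for b in range(bins)])
            sens.append(np.mean(np.abs(cond - m0)) / den)
        out[name] = np.round(np.array(sens), 3)
    return out

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (100_000, 3))
# x1 shifts the mean, x2 shifts spread and shape, x3 is uninfluential.
Y = X[:, 0] + 4.0 * (X[:, 1] - 0.5) ** 2 + 0.2 * rng.standard_normal(len(X))
for name, s in moment_sensitivity(X, Y).items():
    print(name, s)
```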
5

Chai, Wenqi. "Global sensitivity analysis on vibro-acoustic composite materials with parametric dependency." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEC037/document.

Abstract:
With the rapid development of mathematical models and simulation tools, the need for uncertainty quantification has grown higher than ever before. Parametric uncertainties and overall decision stacks are nowadays the two main barriers to solving large-scale systematic problems. Global Sensitivity Analysis (GSA) is a reliable solution for uncertainty quantification, capable of apportioning the uncertainty of a model output to its inputs. Among the GSA algorithms, the Fourier Amplitude Sensitivity Test (FAST) is one of the most popular choices of researchers. Based on ANOVA-HDMR (ANalysis Of VAriance - High Dimensional Model Representation), it is both mathematically solid and computationally efficient. One unfortunate fact is that the uniqueness of ANOVA-HDMR relies on the independence of the input variables. This makes FAST unable to treat many industrial cases, especially those where only datasets, rather than distribution functions, are available. To answer this need, two extended FAST methods with correlation design are proposed and studied in this research: FAST-c is distribution-based and FAST-orig is data-based. As a frame of validation and application, a number of vibroacoustic problems are dealt with. Vibroacoustic materials with substructures are ideal test candidates for FAST-c and FAST-orig. Two application cases are presented in the first part of this thesis, following the literature review. The models chosen are a poroelastic material and sandwich composite structures, both of which have mechanical properties strongly influenced by their microscopic and mesoscopic geometric parameters. Comparing the original FAST method to the two methods with correlation design reveals many additional features of the materials' vibroacoustic performance. Having answered the need for GSA on models with dependent variables, the second part of this thesis contains further research related to FAST. FAST is compared with Random Forest, a well-known data-mining algorithm; the potential errors of both algorithms are analyzed and the possibility of joint application is discussed. In the following chapters, more applications of the FAST-series methods are reported under various conditions, including a model of periodic structures with correlations among its units, for which a further variant named FAST-pe is developed. In these application cases, the design of the preliminary process and the sampling strategies are the core parts introduced.
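For reference, the classical FAST algorithm that both extensions build on can be sketched as follows for independent inputs on [0,1]^d; the search curve, frequency set and harmonic count are illustrative choices (frequencies must be interference-free up to the chosen harmonic order), and the correlation-design variants FAST-c and FAST-orig are not reproduced here.

```python
import numpy as np

def fast_first_order(f, omegas, n=10001, harmonics=4):
    """Classical FAST first-order indices for f on [0,1]^d; 'omegas' must be
    interference-free integer frequencies up to the chosen harmonic order."""
    s = -np.pi + np.pi * (2.0 * np.arange(n) + 1.0) / n       # search variable on (-pi, pi)
    x = 0.5 + np.arcsin(np.sin(np.outer(s, omegas))) / np.pi  # space-filling search curve
    y = f(x)
    kmax = harmonics * max(omegas)
    A = np.array([np.mean(y * np.cos(k * s)) for k in range(1, kmax + 1)])
    B = np.array([np.mean(y * np.sin(k * s)) for k in range(1, kmax + 1)])
    spectrum = A ** 2 + B ** 2            # power at integer frequencies 1..kmax
    var_y = np.var(y)                     # Var(y) = 2 * sum of all spectral power
    return [2.0 * spectrum[[h * w - 1 for h in range(1, harmonics + 1)]].sum() / var_y
            for w in omegas]

# Toy additive model: x2 should dominate and x3 be negligible.
f = lambda x: x[:, 0] + 5.0 * x[:, 1] ** 2 + 0.01 * x[:, 2]
print(np.round(fast_first_order(f, omegas=[11, 35, 73]), 3))
```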
6

Eldred, Lloyd B. "Sensitivity analysis of the static aeroelastic response of a wing." Diss., Virginia Tech, 1993. http://hdl.handle.net/10919/40147.

Abstract:
A technique to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline-level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of quantities such as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the size of the derivatives relative to the quantity itself.
Ph. D.
7

Bergen, Frederick D'Oench Jr. "Shape sensitivity analysis of flutter response of a laminated wing." Thesis, Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/50074.

Abstract:
A method is presented for calculating the shape sensitivity of a wing aeroelastic response with respect to changes in geometric shape. Yates' modified strip method is used in conjunction with Giles' equivalent plate analysis to predict the flutter speed, frequency, and reduced frequency of the wing. Three methods are used to calculate the sensitivity of the eigenvalue. The first method is purely a finite difference calculation of the eigenvalue derivative directly from the solutions of the flutter problem corresponding to two different values of the shape parameters. The second method uses an analytic expression for the eigenvalue sensitivities of a general complex matrix, where the derivatives of the aerodynamic, mass, and stiffness matrices are computed using a finite difference approximation. The third method also uses an analytic expression for the eigenvalue sensitivities, but the aerodynamic matrix is computed analytically. All three methods are found to be in good agreement with each other. The sensitivities of the eigenvalues were used to predict flutter speed, frequency, and reduced frequency. These approximations were found to be in good agreement with those obtained using a complete reanalysis. However, it is recommended that higher-order terms be used in the calculations in order to assure greater accuracy.
Master of Science
8

Bebamzadeh, Armin. "Efficient finite element response sensitivity analysis and applications in composites manufacturing." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/7801.

Abstract:
This thesis presents the development, implementation, and application of response sensitivities in numerical simulation of composite manufacturing. The sensitivity results include both first- and second-order derivatives. Such results are useful in many applications. In this thesis, they are applied in reliability analysis, optimization analysis, model validation, model calibration, as well as stand-alone measures of parameter importance to gain physical insight into the curing and stress development process. In addition to novel derivations and implementations, this thesis is intended to facilitate and foster increased use of response sensitivities in engineering analysis. The work presented in this thesis constitutes an extension of the direct differentiation method (DDM). This is a method that produces response sensitivities in an efficient and accurate manner, at the one-time cost of deriving and implementing sensitivity equations alongside the ordinary response algorithm. In this thesis, novel extensions of the methodology are presented for the composite manufacturing problem. The derivations include all material, geometry, and processing parameters in both the thermochemical and the stress development algorithms. A state-of-the-art simulation software is developed to perform first-order sensitivity analysis for composite manufacturing problems using the DDM. In this software, several novel techniques are employed to minimize the computational cost associated with the response sensitivity computations. This sensitivity-enabled software is also used in reliability, optimization, and model calibration applications. All these applications are facilitated by the availability of efficient and accurate response sensitivities. The derivation and implementation of second-order sensitivity equations is a particular novelty in this thesis. It is demonstrated that it is computationally feasible to obtain second-order sensitivities (the "Hessian matrix") by the DDM for inelastic finite element problems. It is demonstrated that the direct differentiation approach to the computation of first- and second-order response sensitivities becomes increasingly efficient as the problem size increases, compared with the less accurate and less efficient finite difference approach.
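The principle behind the direct differentiation method can be shown on a one-degree-of-freedom nonlinear residual: once R(u, θ) = 0 has been solved by Newton's method, the sensitivity du/dθ = -(∂R/∂u)⁻¹ ∂R/∂θ reuses the tangent that the solver already formed. The toy residual below is an assumption for illustration, not the thesis's thermochemical or stress-development model.

```python
import numpy as np

# Toy residual R(u, theta) = (1 + 0.1*u^2)*u - theta: a 1-DOF "stiffening" system.
def residual(u, theta):
    return (1.0 + 0.1 * u ** 2) * u - theta

def dR_du(u):
    return 1.0 + 0.3 * u ** 2      # tangent "stiffness" dR/du

def dR_dtheta():
    return -1.0                    # explicit derivative with respect to the parameter

def solve(theta, u0=0.0, tol=1e-12):
    u = u0
    for _ in range(50):            # Newton iterations on R(u, theta) = 0
        du = -residual(u, theta) / dR_du(u)
        u += du
        if abs(du) < tol:
            break
    return u

theta = 2.0
u = solve(theta)
ddm = -dR_dtheta() / dR_du(u)      # DDM sensitivity, exact at the converged solution
h = 1e-6
fd = (solve(theta + h) - solve(theta - h)) / (2 * h)   # central finite difference
print(f"u = {u:.6f}, du/dtheta: DDM = {ddm:.8f}, FD = {fd:.8f}")
```

In a finite element setting the same identity holds with matrices, and the factorized tangent from the last Newton step is reused, which is the source of the efficiency claimed above.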
9

Kent, Edward Lander. "Sensitivity analysis of biochemical systems using high-throughput computing." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/sensitivity-analysis-of-biochemical-systems-using-highthroughput-computing(80eb7aa9-d316-4a72-a6c2-731c6052ea84).html.

Abstract:
Mathematical modelling is playing an increasingly important role in helping us to understand biological systems. The construction of biological models typically requires the use of experimentally-measured parameter values. However, varying degrees of uncertainty surround virtually all parameters in these models. Sensitivity analysis is one of the most important tools for the analysis of models, and shows how the outputs of a model, such as concentrations and reaction fluxes, are dependent on the parameters which make up the input. Unfortunately, small changes in parameter values can lead to the results of a sensitivity analysis changing significantly. The results of such analyses must therefore be interpreted with caution, particularly if a high degree of uncertainty surrounds the parameter values. Global sensitivity analysis methods can help in such situations by allowing sensitivities to be calculated over a range of possible parameter values. However, these techniques are computationally expensive, particularly for larger, more detailed models. Software was developed to enable a number of computationally-intensive modelling tasks, including two global sensitivity analysis methods, to be run in parallel in a high-throughput computing environment. The use of high-throughput computing enabled the run time of these analyses to be drastically reduced, allowing models to be analysed to a degree that would otherwise be impractical or impossible. Global sensitivity analysis using high-throughput computing was performed on a selection of both theoretical and physiologically-based models. Varying degrees of parameter uncertainty were considered. These analyses revealed instances in which the results of a sensitivity analysis were valid, even under large degrees of parameter variation. Other cases were found for which only a slight change in parameter values could completely change the results of the analysis. Parameter uncertainties are a real problem in biological systems modelling. This work shows how, with the help of high-throughput computing, global sensitivity analysis can become a practical part of the modelling process.
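The computational pattern exploited here is embarrassingly parallel: each model evaluation in a sensitivity analysis design is independent of the others. As a hedged sketch, a local process pool stands in for the high-throughput (Condor-style) environment used in the thesis, and the three-parameter function is a placeholder for an expensive biochemical ODE model.

```python
import numpy as np
from multiprocessing import Pool

def run_model(params):
    """Stand-in for one expensive biochemical simulation (e.g. an ODE solve);
    evaluations are independent, so a GSA sample can be farmed out to workers."""
    k1, k2, k3 = params
    return k1 * k2 / (k3 + k1)     # placeholder steady-state-like output

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    samples = rng.uniform(0.1, 10.0, (10_000, 3))   # GSA design, e.g. a Saltelli sample
    with Pool() as pool:                            # local stand-in for an HTC pool
        outputs = pool.map(run_model, samples)
    print(np.mean(outputs), np.var(outputs))
```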
10

Lin, Daorui. "Global Sensitivity of Water Quality Modeling in the Gulf of Finland." Thesis, KTH, Mark- och vattenteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-180285.

Abstract:
The Gulf of Finland is the most eutrophied water body in the Baltic Sea, mainly as a result of nutrient loads produced by human activities in the surrounding cities. In order to address this environmental problem, a computational model based on understanding the relations between eutrophication, water quality and sediments is needed to forecast water quality in response to natural and anthropogenic influences. A precise water quality model can assist policy making for the Gulf of Finland, and even for the whole Baltic Sea. The Kiirikki model, one such model describing the water quality of the Baltic Sea, is a sediment- and ecosystem-based model that treats the different sub-basins and layers as boxes. This study aims to assess the sensitivity of the model parameters at the scale of the Gulf of Finland. First, the Morris sampling strategy is applied to generate economical OAT (One factor At a Time) samples, screening 50 out of 100 trajectories with distances as large as possible. To assess sensitivity, an index and indicators are needed: the elementary effect (EE) is adopted as the assessment index, and four core eutrophication indicators from HELCOM 2009a are analyzed. By comparing the (σ,μ) and (σ,μ*) plots of each parameter's EE values (σ is the standard deviation, μ the mean and μ* the absolute mean), some parameters are identified as potentially sensitive, such as the minimum biomass of cyanobacteria (Cmin), the critical point of CO2 flux (CCr), the optimal temperature for detritus phosphorus mineralization (Toptgamma), the maximum loss rate of algae (RAmax), the optimal temperature for the growth of other algae (ToptmuA), the coefficient for the temperature limiting factor for the growth of cyanobacteria (aTmuC), and the half-saturation coefficient of radiation for cyanobacteria (KIC). In contrast, the other parameters are ruled out as having very low values of σ, μ and μ*. Because the Morris sampling strategy makes it easy to achieve high variance in the outputs, σ tends to be generally high; therefore, further investigation with different strategies is needed after this initial screening of the non-sensitive parameters.
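The Morris elementary-effects screening used in this study can be sketched compactly: r one-factor-at-a-time trajectories on a regular grid yield, for each parameter, the mean μ, absolute mean μ* and standard deviation σ of its elementary effects. The grid level, Δ and test function below are illustrative assumptions, and the distance-maximizing selection of 50 out of 100 trajectories is omitted.

```python
import numpy as np

def morris_ee(f, d, r=50, levels=4, seed=0):
    """Basic Morris screening: r one-factor-at-a-time trajectories on a regular
    grid in [0,1]^d; returns mu, mu* (mean absolute) and sigma of the EEs."""
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))            # standard choice: 2/3 for 4 levels
    ee = np.empty((r, d))
    for t in range(r):
        x = rng.integers(0, levels // 2, d) / (levels - 1)   # start on the lower half-grid
        y0 = f(x)
        for i in rng.permutation(d):                 # move each factor once, random order
            x_new = x.copy()
            x_new[i] += delta
            y1 = f(x_new)
            ee[t, i] = (y1 - y0) / delta
            x, y0 = x_new, y1
    return ee.mean(axis=0), np.abs(ee).mean(axis=0), ee.std(axis=0)

f = lambda x: x[0] + 2.0 * x[1] * x[2]               # x2 and x3 interact; x1 is additive
mu, mu_star, sigma = morris_ee(f, d=3)
print("mu:", np.round(mu, 2), " mu*:", np.round(mu_star, 2), " sigma:", np.round(sigma, 2))
```

The additive factor x1 shows σ near zero, while the interacting factors show both large μ* and large σ, which is the (σ,μ*) reading used above.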
11

Heredia, Guzman Maria Belen. "Contributions to the calibration and global sensitivity analysis of snow avalanche numerical models." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALU028.

Abstract:
A snow avalanche is a natural hazard defined as a snow mass in fast motion. Since the thirties, scientists have been designing models to describe snow avalanches. However, these models depend on some poorly known input parameters that cannot be measured. To better understand model input parameters and model outputs, the aims of this thesis are (i) to propose a framework to calibrate input parameters and (ii) to develop methods to rank input parameters according to their importance in the model, taking into account the functional nature of the outputs. For these two purposes, we develop statistical methods based on Bayesian inference and global sensitivity analysis. All the developments are illustrated on test cases and real snow avalanche data. First, we propose a Bayesian inference method to retrieve the input parameter distribution from avalanche velocity time series collected on experimental test sites. Our results show that it is important to include the error structure (in our case the autocorrelation) in the statistical modeling in order to avoid bias in the estimation of friction parameters. Second, to identify important input parameters, we develop two methods based on variance-based measures. For the first method, we suppose that we have a given data sample and want to estimate sensitivity measures from this sample; for this purpose, we develop a nonparametric estimation procedure based on the Nadaraya-Watson kernel smoother to estimate aggregated Sobol' indices. For the second method, we consider the setting where the sample is obtained from acceptance/rejection rules corresponding to physical constraints. The set of input parameters becomes dependent due to the acceptance-rejection sampling, so we propose to estimate aggregated Shapley effects (an extension of Shapley effects to multivariate or functional outputs). We also propose an algorithm to construct bootstrap confidence intervals. For the snow avalanche model application, we consider different uncertainty scenarios to model the input parameters. Under our scenarios, the avalanche release position and volume are the most crucial inputs. Our contributions should help avalanche scientists to (i) account for the error structure in model calibration and (ii) rank input parameters according to their importance in the models using statistical methods.
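The scalar-output core of the given-data estimator can be sketched in a few lines: estimate E[Y|X_i] with a Nadaraya-Watson smoother over the sample, then take Var(E[Y|X_i])/Var(Y). The fixed Gaussian bandwidth and toy model are assumptions; the thesis's aggregated Sobol' indices for functional outputs and the aggregated Shapley effects for dependent inputs go beyond this sketch.

```python
import numpy as np

def nw_first_order_sobol(x, y, bandwidth=0.05):
    """Given-data estimate of first-order Sobol' indices:
    S_i = Var(E[Y|X_i]) / Var(Y), with the conditional expectation
    estimated by a Nadaraya-Watson (Gaussian-kernel) smoother."""
    n, d = x.shape
    var_y = np.var(y)
    S = np.empty(d)
    for i in range(d):
        xi = x[:, i]
        # Kernel weights K((xi_j - xi_k)/h), vectorized over all sample pairs.
        w = np.exp(-0.5 * ((xi[:, None] - xi[None, :]) / bandwidth) ** 2)
        m = (w @ y) / w.sum(axis=1)      # fitted E[Y | X_i = xi_j]
        S[i] = np.var(m) / var_y
    return S

rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, (2_000, 3))
Y = X[:, 0] + 0.5 * np.sin(2.0 * np.pi * X[:, 1])    # x3 is uninfluential
print(np.round(nw_first_order_sobol(X, Y), 2))
```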
12

Ziehn, Tilo. "Development and application of global sensitivity analysis methods in environmental and safety engineering." Thesis, University of Leeds, 2008. http://etheses.whiterose.ac.uk/5848/.

Abstract:
As computing power increases and data relating to elementary chemical and physical processes improves, the use of computational modelling as a design tool in environmental and safety engineering is becoming increasingly important. Sensitivity analysis (SA) and uncertainty analysis (UA) can help to gain a better physical insight into the model and they can highlight possible discrepancies between experimental results and model predictions. Global methods are the best approach for this purpose, however using current methodologies their calculation consumes large amounts of computational effort.
13

Hameed, Maysoun Ayad. "Evaluating Global Sensitivity Analysis Methods for Hydrologic Modeling over the Columbia River Basin." PDXScholar, 2015. https://pdxscholar.library.pdx.edu/open_access_etds/2398.

Abstract:
The Global Sensitivity Analysis (GSA) approach helps to identify the influence of model parameters or inputs and thus provides essential information about model performance. The effects of 14 parameters and one input (forcing data) of the Sacramento Soil Moisture Accounting (SAC-SMA) model are analyzed by using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. The main parameter sensitivities (first-order) and the interaction sensitivities (second-order) are evaluated in this study. Our results show that some hydrological processes are highly affected by the simulation length; in other words, some parameters reveal their importance in short simulations (e.g. one year) while other parameters are effective in longer simulations (e.g. four and seven years). Moreover, the reliability of the sensitivity analysis results is compared based on (1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in highlighting the same parameters or input as the most influential and (2) how the methods cohere in ranking these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. This study confirms that Sobol' and FAST are reliable GSA methods that can be applied in different scientific applications. Finally, as future work, we suggest studying the uncertainty associated with the sensitivity analysis approach regarding the reliability of evaluating different sensitivity analysis methods.
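Both methods compared in the study are available in open-source form; the sketch below uses the SALib Python library (an assumption of this sketch, not the tooling reported in the thesis), with illustrative SAC-SMA-style parameter names, bounds and a placeholder model.

```python
import numpy as np
from SALib.sample import saltelli, fast_sampler
from SALib.analyze import sobol, fast

problem = {
    "num_vars": 3,
    "names": ["UZTWM", "UZFWM", "LZTWM"],   # illustrative SAC-SMA-style parameter names
    "bounds": [[10, 300], [5, 150], [10, 500]],
}

def model(x):
    # Placeholder for a SAC-SMA run over one sub-basin.
    return x[:, 0] + 0.1 * x[:, 1] * x[:, 2]

# Sobol': Saltelli design giving first-, second- and total-order indices.
X = saltelli.sample(problem, 1024, calc_second_order=True)
Si = sobol.analyze(problem, model(X), calc_second_order=True)
print("Sobol S1:", np.round(Si["S1"], 2))

# FAST: cheaper design, main effects (and totals) only.
Xf = fast_sampler.sample(problem, 1024)
Sf = fast.analyze(problem, model(Xf))
print("FAST  S1:", np.round(Sf["S1"], 2))
```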
14

Lashgari, Iman. "Global stability analysis of complex fluids." Licentiate thesis, KTH, Mekanik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-139405.

Abstract:
The main focus of this work is on the non-Newtonian effects on inertial instabilities in shear flows. Both inelastic (Carreau) and elastic models (Oldroyd-B and FENE-P) have been employed to examine the main features of non-Newtonian fluids: shear-thinning, shear-thickening and elasticity. Several classical configurations have been considered: flow past a circular cylinder, in a lid-driven cavity and in a channel. We have used a wide range of tools for linear stability analysis (modal, non-modal, energy and sensitivity analysis) to determine the instability mechanisms of the non-Newtonian flows and compare them with those of Newtonian flows. Direct numerical simulations have also been used to confirm the results obtained by the linear stability analysis. Significant modifications of the instabilities of the different flows have been observed under the action of the non-Newtonian effects. In general, shear-thinning/shear-thickening effects destabilize/stabilize the flow around the cylinder and in a lid-driven cavity. Viscoelastic effects both stabilize and destabilize the channel flow depending on the ratio between the viscoelastic and flow time scales. The instability mechanism is only slightly modified in the cylinder flow, whereas new instability mechanisms arise in the lid-driven cavity flow. We observe that the non-Newtonian effects can alter the inertial flow at both the baseflow and perturbation levels (e.g. Carreau fluid past a cylinder or in a lid-driven cavity) or may affect only the perturbations (e.g. Oldroyd-B fluid in a channel). In all the flow cases studied, the modifications in the instability dynamics are shown to be strongly connected to the contribution of the different terms in the perturbation kinetic energy budget.

15

Wang, Mengchao. "Sensitivity analysis and evolutionary optimization for building design." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/16282.

Abstract:
In order to achieve global carbon reduction targets, buildings must be designed to be energy efficient. Building performance simulation methods, together with sensitivity analysis and evolutionary optimization methods, can be used to generate design solutions and performance information that can be used in identifying energy- and cost-efficient design solutions. Sensitivity analysis is used to identify the design variables that have the greatest impacts on the design objectives and constraints. Multi-objective evolutionary optimization is used to find a Pareto set of design solutions that optimize the conflicting design objectives while satisfying the design constraints, building design being an inherently multi-objective process. For instance, there is commonly a desire to minimise both the building energy demand and capital cost while maintaining thermal comfort. Sensitivity analysis has previously been coupled with a model-based optimization in order to reduce the computational effort of running a robust optimization and in order to provide an insight into the solution sensitivities in the neighbourhood of each optimum solution. However, there has been little research conducted to explore the extent to which the solutions found from a building design optimization can be used for a global or local sensitivity analysis, or the extent to which the local sensitivities differ from the global sensitivities. It has also been common for the sensitivity analysis to be conducted using continuous variables, whereas building optimization problems are more typically formulated using a mixture of discretized-continuous variables (with physical meaning) and categorical variables (without physical meaning). This thesis investigates three main questions: the form of global sensitivity analysis most appropriate for use with problems having mixed discretized-continuous and categorical variables; the extent to which samples taken from an optimization run can be used in a global sensitivity analysis, the optimization process causing these solutions to be biased; and the extent to which global and local sensitivities are different. The experiments conducted in this research are based on the mid-floor of a commercial office building having 5 zones, located in Birmingham, UK. The optimization and sensitivity analysis problems are formulated with 16 design variables, including orientation, heating and cooling setpoints, window-to-wall ratios, start and stop times, and construction types. The design objectives are the minimisation of both energy demand and capital cost, with solution infeasibility being a function of occupant thermal comfort. It is concluded that a robust global sensitivity analysis can be achieved using stepwise regression with bidirectional elimination, rank transformation of the variables and the BIC (Bayesian information criterion). It is concluded that, when the optimization is based on a genetic algorithm, solutions taken from the start of the optimization process can be reliably used in a global sensitivity analysis, and therefore there is no need to generate a separate set of random samples for use in the sensitivity analysis. The extent to which the convergence of the variables during the optimization can be used as a proxy for the variable sensitivities has also been investigated. It is concluded that it is not possible to identify the relative importance of variables through the optimization, even though the most important variable exhibited fast and stable convergence. Finally, it is concluded that differences exist in the variable rankings resulting from the global and local sensitivity methods, although the top-ranked solutions from each approach tend to be the same. It is also concluded that the sensitivity of the objectives and constraints to all variables is obtainable through a local sensitivity analysis, but that a global sensitivity analysis is only likely to identify the most important variables. The repeatability of these conclusions has been investigated and confirmed by applying the methods to the example design problem with the building located in four different climates (Birmingham, UK; San Francisco, US; and Chicago, US).
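The recommended recipe, rank transformation plus bidirectional stepwise selection scored by BIC, can be sketched as follows; the synthetic design problem and the least-squares BIC formula are illustrative assumptions rather than the thesis's building-simulation setup.

```python
import numpy as np
from scipy import stats

def bic_score(Xr, yr, cols):
    """BIC of an OLS fit of rank-transformed y on the selected rank-transformed columns."""
    n = len(yr)
    A = np.column_stack([np.ones(n)] + [Xr[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(A, yr, rcond=None)
    rss = np.sum((yr - A @ beta) ** 2)
    return n * np.log(rss / n) + (len(cols) + 1) * np.log(n)

def stepwise_bic(X, y):
    """Bidirectional stepwise selection on rank-transformed data, scored by BIC."""
    Xr = stats.rankdata(X, axis=0)
    yr = stats.rankdata(y)
    selected, best = [], bic_score(Xr, yr, [])
    improved = True
    while improved:
        improved = False
        adds = [(bic_score(Xr, yr, selected + [j]), j)
                for j in range(X.shape[1]) if j not in selected]
        if adds and min(adds)[0] < best:             # forward step
            best, j = min(adds)
            selected.append(j)
            improved = True
        drops = [(bic_score(Xr, yr, [k for k in selected if k != j]), j) for j in selected]
        if drops and min(drops)[0] < best:           # backward (elimination) step
            best, j = min(drops)
            selected.remove(j)
            improved = True
    return sorted(selected)

rng = np.random.default_rng(5)
X = rng.uniform(size=(500, 6))
y = 3.0 * X[:, 0] + X[:, 3] ** 2 + 0.1 * rng.standard_normal(500)
print("influential variables:", stepwise_bic(X, y))   # expect [0, 3]
```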
16

Hallberg, Anna. "Antibiotic resistance and the global response : An analysis of political frames." Thesis, Uppsala universitet, Statsvetenskapliga institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-296683.

Abstract:
With regard to the potential severity of increased antibiotic resistance around the world, it is puzzling that the global response to this issue has not been more comprehensive. In this thesis I examine the political frames on ABR formulated by the global network ReAct in an attempt to understand why this is the case. The frames of an issue, that is, how it is described politically in different ways, are crucial for agenda-setting. Moreover, framing is an important part of the work of transnational advocacy networks. Since the acknowledgement of an issue in terms of agenda-setting is an important part of a global response, the frames of transnational advocacy networks make up the focus of this thesis. My findings suggest that the existence of multiple frames on ABR to some extent helps us understand the lack of response to ABR. The main finding, however, concerns the construction of the frames in terms of causality, and in particular a general vagueness in terms of responsibility.
17

Leung, Colin. "SENSITIVITY OF SEISMIC RESPONSE OF A 12 STORY REINFORCED CONCRETE BUILDING TO VARYING MATERIAL PROPERTIES." DigitalCommons@CalPoly, 2011. https://digitalcommons.calpoly.edu/theses/681.

Abstract:
The main objective of this investigation is to examine how various material properties, governed by code specification, affect the seismic response of a twelve-story reinforced concrete building. This study incorporates pushover and response history analyses to examine how varying steel yield strength (Fy), 28-day nominal compressive concrete strength (f'c), modes, and ground motions may affect the base shear capacity and displacements of a reinforced concrete structure. Different steel and concrete strengths were found to have minimal impact on the initial stiffness of the structure. However, during the post-yielding phase, higher steel and concrete compressive strengths resulted in larger base shear capacities of up to 22%. The base shear capacity geometric median increased as f'c or Fy increased, and the base shear capacity dispersion measure decreased as f'c or Fy increased. Higher-mode results were neglected in this study due to non-convergent pushover analysis results. According to the response history analysis, larger yield and concrete compressive strengths result in lower roof displacement. The difference in roof displacement was less than 12% throughout. This displays the robustness of both analysis methods, because material properties have an insignificant impact on seismic response. Therefore, acceptable yield and compressive strengths governed by seismic code will result in acceptable building performance.
18

Prando, Dario. "Global sensitivity analysis of the building energy performance and correlation assessment of the design parameters." Thesis, KTH, Byggnadsteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-35044.

Abstract:
The world's energy use in buildings (residential and commercial) accounts for around 40% of worldwide energy consumption, and space heating is responsible for half of the energy need in the building sector. In Europe, only a small share (less than 10%) of existing buildings was built after 1990. Most of the building stock does not satisfy recent energy technical standards; in addition, very few new buildings have been constructed in recent years. Renovation of the existing buildings is a feasible option to reduce the energy need in Europe, but finding the optimum solutions for a renovation is not a simple task. Each design parameter influences the final energy need of buildings differently and, furthermore, the different variables are correlated with each other in different ways. Building refurbishment would benefit from a tool for selecting the best measures in terms of energy need. This work, through a global sensitivity analysis, aims at determining the contribution of the design parameters to the building energy demand and the correlations between the different variables. The parameters considered relate to the improvement of the thermal transmittance of both the opaque envelope and the windows, the solar transmittance of the glazing surfaces, the window size, the thermal inertia of the internal walls and the external sunshades for windows. Several dynamic simulations have been performed, varying the design parameters from different starting conditions. Finally, due to the large number of cases elaborated, an inferential statistical analysis has been performed in order to identify the predominant factors and the correlations between the design parameters in a global context.
19

Gioia, Paola. "Towards more accurate measures of global sensitivity analysis. Investigation of first and total order indices." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2013. http://hdl.handle.net/10281/45695.

Abstract:
A new technique for estimating variance-based total sensitivity indices from given data is developed. A new approach for estimating first-order effects from a specific sample design is also developed; this method adopts the RBD approach published by Tarantola et al. (2007) for the computation of first-order sensitivity indices, in combination with quasi-random numbers.
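The RBD idea referred to above can be sketched as follows, under stated assumptions (a common search-curve grid, a single shared frequency with an independent random permutation per input, and first-order content read off the low harmonics after reordering); this follows the spirit of Tarantola et al. (2007) rather than reproducing the thesis's estimator.

```python
import numpy as np

def rbd_fast(f, d, n=1001, harmonics=6, seed=0):
    """Random Balance Design first-order indices: every input uses frequency 1
    with its own random permutation of a common grid; reordering the output by
    each input's grid order recovers that input's low-frequency content."""
    rng = np.random.default_rng(seed)
    s0 = -np.pi + np.pi * (2.0 * np.arange(n) + 1.0) / n   # regular grid on (-pi, pi)
    perms = np.array([rng.permutation(n) for _ in range(d)])
    s = s0[perms]                                          # one scrambled curve per input
    x = 0.5 + np.arcsin(np.sin(s)) / np.pi                 # (d, n) design in [0,1]
    y = f(x.T)
    var_y = np.var(y)
    S = np.empty(d)
    for i in range(d):
        y_r = y[np.argsort(perms[i])]                      # reorder y by increasing s_i
        k = np.arange(1, harmonics + 1)[:, None]
        A = (np.cos(k * s0) @ y_r) / n                     # low-harmonic Fourier amplitudes
        B = (np.sin(k * s0) @ y_r) / n
        S[i] = 2.0 * np.sum(A ** 2 + B ** 2) / var_y
    return S

f = lambda X: X[:, 0] + 5.0 * X[:, 1] ** 2                 # X has shape (n, d)
print(np.round(rbd_fast(f, d=3), 3))
```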
20

Chandler, Gary James. "Sensitivity analysis of low-density jets and flames." Thesis, University of Cambridge, 2011. https://www.repository.cam.ac.uk/handle/1810/246531.

Abstract:
This work represents the initial steps in a wider project that aims to map out the sensitive areas in fuel injectors and combustion chambers. Direct numerical simulation (DNS) using a Low-Mach-number formulation of the Navier–Stokes equations is used to calculate direct-linear and adjoint global modes for axisymmetric low-density jets and lifted jet diffusion flames. The adjoint global modes provide a map of the most sensitive locations to open-loop external forcing and heating. For the jet flows considered here, the most sensitive region is at the inlet of the domain. The sensitivity of the global-mode eigenvalues to force feedback and to heat and drag from a hot-wire is found using a general structural sensitivity framework. Force feedback can occur from a sensor-actuator in the flow or as a mechanism that drives global instability. For the lifted flames, the most sensitive areas lie between the inlet and flame base. In this region the jet is absolutely unstable, but the close proximity of the flame suppresses the global instability seen in the non-reacting case. The lifted flame is therefore particularly sensitive to outside disturbances in the non-reacting zone. The DNS results are compared to a local analysis. The most absolutely unstable region for all the flows considered is at the inlet, with the wavemaker slightly downstream of the inlet. For lifted flames, the region of largest sensitivity to force feedback is near to the location of the wavemaker, but for the non-reacting jet this region is downstream of the wavemaker and outside of the pocket of absolute instability near the inlet. Analysing the sensitivity of reacting and non-reacting variable-density shear flows using the low-Mach-number approximation has not been done until now. By including reaction, this work takes a large step forward in applying these techniques to real fuel injectors.
21

Gu, Quan. "Finite element response sensitivity and reliability analysis of Soil-Foundation-Structure-Interaction (SFSI) systems." Thesis, University of California, San Diego, 2008. http://wwwlib.umi.com/cr/ucsd/fullcit?p3290678.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2008.
22

Weiße, Andrea Yeong. "Global sensitivity analysis of ordinary differential equations: adaptive density propagation using approximate approximations." Berlin: Freie Universität Berlin, 2009. http://d-nb.info/1023751046/34.

23

Masad, Sanaa Ahmad. "Sensitivity analysis of flexible pavement response and AASHTO 2002 design guide for properties of unbound layers." Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/528.

Abstract:
Unbound granular materials are generally used in road pavements as base and subbase layers. The granular materials provide load distribution through aggregate contacts to a level that can help the subgrade to withstand the applied loads. Several research studies have shown that unbound pavement layers exhibit anisotropic properties. Anisotropy is caused by the preferred orientation of aggregates and compaction forces. The result is unbound pavement layers that have higher stiffness in the vertical direction than in the horizontal direction. This behavior is not accounted for in the design and analysis procedures included in the proposed AASHTO 2002 design guide. One of the objectives of this study is to conduct a comparative analysis of flexible pavement response using different models for unbound pavement layers: linear isotropic, nonlinear isotropic, linear anisotropic and nonlinear anisotropic. Pavement response is computed using a finite element program. The computations from nonlinear isotropic and anisotropic models of unbound layers are compared to the AASHO field experimental measurements. The second objective is to analyze the influence of using isotropic and anisotropic properties for the pavement layers on the performance of flexible pavements calculated using the AASHTO 2002 models. Finally, a comprehensive sensitivity analysis of the proposed AASHTO 2002 performance models to the properties of the unbound pavement layers is conducted. The sensitivity analysis includes different types of base materials, base layer thicknesses, hot mix asphalt type and thickness, environmental conditions, and subgrade materials.
24

Chen, Lu. "Computational Study of Turbulent Combustion Systems and Global Reactor Networks." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/78804.

Abstract:
A numerical study of turbulent combustion systems was pursued to examine different computational modeling techniques, namely computational fluid dynamics (CFD) and chemical reactor network (CRN) methods. Both methods have been studied and analyzed as individual techniques as well as in a coupled approach, to pursue a better understanding of the mechanisms and interactions between turbulent flow and mixing, ignition behavior and pollutant formation. A thorough analysis and comparison of both turbulence models and chemistry representation methods was executed, and the simulations were compared with and validated against experimental works. An extensive study of turbulence modeling methods and the optimization of modeling techniques, including turbulence intensity and computational domain size, has been conducted. The final CFD model demonstrated good predictive performance for different turbulent bluff-body flames. The NOx formation and the effects of fuel mixtures indicated that the addition of hydrogen to the fuel and of non-flammable diluents like CO2 and H2O contributes to the reduction of NOx. The second part of the study focused on developing chemical models and methods that include the detailed gaseous reaction mechanism of GRI-Mech 3.0 but cost less computational time. A new chemical reactor network was created based on the CFD results for combustion characteristics and flow fields. The proposed CRN has been validated against the temperature and species emissions for different bluff-body flames and has shown the capability of being applied to general bluff-body systems. Specifically, the rate of production of NOx and the sensitivity analysis based on the CRN results helped to derive a reduced reaction mechanism, which not only provides a promising method to generate representative reactions from the hundreds of species and reactions in the gaseous mechanism but also presents valuable information about the combustion mechanisms and NOx formation. Finally, the proposed reduced reaction mechanism from the sensitivity analysis was applied to the CFD simulations, creating a fully coupled process between CFD and CRN, and the results from the reduced reaction mechanism showed good predictions compared with the probability density function method.
Ph. D.
25

Zhang, Han. "Global Analysis and Structural Performance of the Tubed Mega Frame." Thesis, KTH, Betongbyggnad, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-147382.

Abstract:
The Tubed Mega Frame is a new structural concept for high-rise buildings developed by Tyréns. In order to study the structural performance as well as the efficiency of this new concept, a global analysis of the Tubed Mega Frame structure is performed using the finite element analysis software ETABS. In addition, the lateral loads that should be applied to the structure according to different codes are studied. From the design code study of wind loads and seismic design response spectra, it can be seen that the calculation philosophies differ from code to code: the wind loads are approximately the same, while the design response spectra vary considerably between codes. In the ETABS program, a 3D finite element model is built and analyzed for linear static, geometric non-linearity (P-Delta) and linear dynamic cases. The results from the analysis in the given scope show that the Tubed Mega Frame structural system is potentially feasible and has relatively high lateral stiffness and global stability. For the service limit state, the maximum story drift ratio is within the limit of 1/400 and the maximum story acceleration is 0.011 m/s², which fulfills the comfort criteria.
26

Couture, Nicole J. "Sensitivity of permafrost terrain in a high Arctic polar desert : an evaluation of response to disturbance near Eureka, Ellesmere Island, Nunavut." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=31213.

Abstract:
A first approximation of ground ice volume for the area surrounding Eureka, Nunavut, indicates that it comprises 30.8% of the upper 5.9 m of permafrost. Volume depends on the type of ice examined, ranging from 1.8 to 69.0% in different regions of the study area. Excess ice makes up 17.7% of the total volume of frozen materials in the study area. Melt of ground ice in the past has produced thermokarst features which include ground subsidence of up to 3.2 m, formation of tundra ponds, degradation of ice wedges, thaw slumps greater than 50 m across, gullying, and numerous active layer detachment slides. With a doubling of atmospheric carbon dioxide, the rise in mean annual temperatures for the area is projected to be 4.9 to 6.6°C, which would lengthen the thaw season and increase thaw depths by up to 70 cm. The expected geomorphic changes to the landscape are discussed.
27

Lenci, Alessandro. "Multiphase flow in porous media: meta-modeling techniques for sensitivity analysis and risk assessment." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016.

Abstract:
One of the biggest challenges that contaminant hydrogeology is facing is how to adequately address the uncertainty associated with model predictions. Uncertainty arises from multiple sources, such as interpretive error, calibration accuracy, parameter sensitivity and variability. This critical issue needs to be properly addressed in order to support environmental decision-making processes. In this study, we perform Global Sensitivity Analysis (GSA) on a contaminant transport model for the assessment of hydrocarbon concentration in groundwater. We provide a quantification of the environmental impact and, given the incomplete knowledge of hydrogeological parameters, we evaluate which are the most influential, requiring greater accuracy in the calibration process. Parameters are treated as random variables and a variance-based GSA is performed in an optimized numerical Monte Carlo framework. The Sobol indices are adopted as sensitivity measures, and they are computed by employing meta-models to characterize the migration process while reducing the computational cost of the analysis. The proposed methodology allows us to extend the number of Monte Carlo iterations, identify the influence of the uncertain parameters, and achieve considerable savings in computational time while maintaining acceptable accuracy.
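A hedged sketch of the meta-modeling shortcut: train a cheap polynomial surrogate on a small number of expensive runs (ordinary least-squares polynomials here are an assumption standing in for a polynomial chaos expansion), then run the large-sample pick-freeze Sobol' estimation, as in the earlier sketch, on the surrogate.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

def expensive_model(x):
    """Placeholder for the contaminant-transport solver."""
    return np.exp(-x[:, 0]) * x[:, 1] + 0.5 * x[:, 2] ** 2

rng = np.random.default_rng(4)
d = 3

# 1. A few expensive runs train a polynomial meta-model (gPCE-like in spirit).
X_train = rng.uniform(0.0, 1.0, (200, d))
surrogate = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
surrogate.fit(X_train, expensive_model(X_train))

# 2. Large Monte Carlo Sobol' estimation runs on the cheap surrogate.
n = 100_000
A, B = rng.uniform(0.0, 1.0, (n, d)), rng.uniform(0.0, 1.0, (n, d))
yA, yB = surrogate.predict(A), surrogate.predict(B)
var_y = np.var(np.concatenate([yA, yB]))
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    yABi = surrogate.predict(ABi)
    print(f"x{i + 1}: S1 = {np.mean(yB * (yABi - yA)) / var_y:.3f}")
```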
APA, Harvard, Vancouver, ISO, and other styles
28

Glodic, Nenad. "Sensitivity of Aeroelastic Properties of an Oscillating LPT Cascade." Licentiate thesis, KTH, Kraft- och värmeteknologi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-123504.

Full text
Abstract:
Modern turbomachinery design is characterized by a tendency towards thinner, lighter and highly loaded blades, which in turn gives rise to increased sensitivity to flow-induced vibration such as flutter. Flutter is a self-excited and self-sustained instability phenomenon that may lead to structural failure due to High Cycle Fatigue (HCF) or material overload. In order to be able to predict potential flutter situations, it is necessary to accurately assess the unsteady aerodynamics during flutter and to understand the physics behind its driving mechanisms. Current numerical tools used for predicting the unsteady aerodynamics of vibrating turbomachinery components are capable of modeling the flow field at a high level of detail, but may fail to predict the correct unsteady aerodynamics under certain conditions. Continuous validation of numerical models against experimental data therefore plays a significant role in improving the prediction accuracy and reliability of the models. In flutter investigations, it is common to consider aerodynamically symmetric (tuned) setups. Due to manufacturing tolerances, assembly inaccuracies and in-service wear, the aerodynamic properties in a blade row may become asymmetric. Such asymmetries can be observed both in steady and unsteady aerodynamic properties, and it is of great interest to understand the effects this may have on the aeroelastic stability of the system. Under certain conditions, vibratory modes of realistic blade profiles tend to be coupled, i.e. the content of a given mode of vibration includes displacements perpendicular and parallel to the chord as well as torsion of the profile. Current design trends for compressor blades that result in low aspect ratio blades potentially reduce the frequency spacing between certain modes (e.g. 2F and 1T). Combined modes are also likely to occur in the case of the vibration of a bladed disk with a comparatively soft disk and rigid blades, or due to tying blades together in sectors (e.g. in turbines). The present investigation focuses on two areas that are important for improving the understanding of the aeroelastic behavior of oscillating blade rows. Firstly, the aeroelastic properties of combined mode shapes in an oscillating Low Pressure Turbine (LPT) cascade were studied and the validity of the mode superposition principle was assessed. Secondly, the effects of aerodynamic mistuning on the aeroelastic properties of the cascade were addressed; the aerodynamic mistuning considered here is caused by blade-to-blade stagger angle variations. The work has been carried out as a compound experimental and numerical investigation, where numerical results are validated against test data. On the experimental side, a test facility comprising an annular sector of seven free-standing LPT blades is used. The aeroelastic response phenomena were studied in the influence coefficient domain, where one of the blades is made to oscillate in three-dimensional pure or combined modes, while the unsteady blade surface pressure is acquired on the oscillating blade itself and on the non-oscillating neighbor blades. On the numerical side, a series of numerical simulations were carried out using a commercial CFD code on a full-scale time-marching 3D viscous model. In accordance with the experimental part, the simulations are performed using the influence coefficient approach, with only one blade oscillating.
The results of the combined-mode studies support the validity of combining the aeroelastic properties of two modes over the investigated range of operating parameters. Quality parameters, indicating differences in the mean absolute and imaginary values of the unsteady response between combined-mode data and superposed data, feature values that are well below the measurement accuracy of the setup. The findings of the aerodynamic mistuning investigations indicate that the effect of de-staggering a single blade on the steady aerodynamics in the cascade is predominantly an effect of the change in passage throat. These changes in steady aerodynamics carry over to the unsteady aerodynamics, where distinct effects on flow velocity lead to changes in the local unsteady pressure coefficients. In order to assess the overall aeroelastic stability of a randomly mistuned blade row, a Reduced Order Model (ROM) is introduced, allowing for probabilistic analyses. From the analyses, an effect of destabilization due to aero-asymmetries was observed; however, the observed effect was of moderate magnitude.
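The mode superposition principle validated here amounts to linearly combining complex unsteady pressure influence coefficients obtained for the pure modes; a minimal sketch of that bookkeeping, with placeholder values rather than measured data, might look as follows.

```python
import numpy as np

# Complex unsteady pressure influence coefficients (amplitude and phase) at
# one blade surface location for two pure modes -- placeholder values only.
cp_bending = 0.8 * np.exp(1j * np.deg2rad(30.0))
cp_torsion = 0.5 * np.exp(1j * np.deg2rad(-75.0))

# Modal content of a combined mode: amplitude ratios and inter-mode phase.
a_bend = 0.6
a_tors = 0.4 * np.exp(1j * np.deg2rad(90.0))

# Linear superposition assumption assessed in the thesis:
cp_combined = a_bend * cp_bending + a_tors * cp_torsion
print(abs(cp_combined), np.rad2deg(np.angle(cp_combined)))
```

Validity of the principle means the directly measured combined-mode coefficient matches this superposed value to within the measurement accuracy.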



APA, Harvard, Vancouver, ISO, and other styles
29

Lester, Alanna Paige. "An Examination of Site Response in Columbia, South Carolina: Sensitivity of Site Response to "Rock" Input Motion and the Utility of Vs(30)." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/33467.

Full text
Abstract:
This study examines the sensitivity of calculated site response under alternative assumptions regarding input motions and the procedures prescribed in the IBC 2000 building code, particularly the use of the average shear wave velocity in the upper 30 meters as an index for engineering design response spectra. Site-specific subsurface models are developed for four sites in and near Columbia, South Carolina, using shear wave velocity measurements from cone penetrometer tests. The four sites are underlain by thin coastal plain sedimentary deposits overlying high-velocity Paleozoic crystalline rock. An equivalent-linear algorithm is used to estimate site response for vertically incident shear waves in a horizontally layered Earth model. Non-linear mechanical behavior of the soils is analyzed using previously published strain-dependent shear modulus and damping degradation models. Two models for the material beneath the investigated near-surface deposits are used: B-C outcrop conditions and hard rock outcrop conditions. The rock outcrop model is considered a geologically realistic model in which a velocity gradient, representing a transition zone of partially weathered and fractured rock, overlies a rock half-space. Synthetic earthquake input motions are generated using the deaggregations from the 2002 National Seismic Hazard Maps, representing the characteristic Charleston source. The U.S. Geological Survey (2002) uniform hazard spectra are used to develop 2% in 50 year probability of exceedance input ground motions for both B-C boundary and hard rock outcrop conditions. An initial analysis was made for all sites using an 8 meter thick velocity gradient for the rock input model. Sensitivity of the models to uncertainty in the weathered zone thickness was assessed by randomizing the thickness of the velocity gradient. The velocity gradient representing the weathered rock zone increases site response at high frequencies. Both models (B-C outcrop conditions and rock outcrop conditions) are compared with the International Building Code (IBC 2000) maximum credible earthquake spectra. The results for both models exceed the IBC 2000 spectra at some frequencies, between 3 and 10 Hz, at all four sites. However, site 2, which is classified as a C site and is therefore assumed to be the most competent of the four sites according to IBC 2000 design procedures, has the highest calculated spectral acceleration of the four sites analyzed. Site 2 has the highest response because a low velocity zone exists at the bottom of the geotechnical profile in immediate contact with the higher velocity rock material, producing a very large impedance contrast. An important shortcoming of the IBC 2000 building code is that it does not account for cases in which there is a strong rock-soil velocity contrast at a depth of less than 30 meters. It is suggested that other site-specific parameters, specifically depth to bedrock and near-surface impedance ratio, should be included in the IBC design procedures.
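For reference, the Vs(30) index questioned in this study is the time-averaged shear wave velocity of the upper 30 meters, Vs30 = 30 / Σ(h_i / v_i). A small sketch of that computation with an illustrative layered profile (not one of the four Columbia sites):

```python
def vs30(thicknesses_m, velocities_mps):
    """Time-averaged shear wave velocity of the upper 30 m, in m/s:
    Vs30 = 30 / sum(h_i / v_i), truncating the profile at 30 m depth."""
    depth, travel_time = 0.0, 0.0
    for h, v in zip(thicknesses_m, velocities_mps):
        h_used = min(h, 30.0 - depth)
        travel_time += h_used / v
        depth += h_used
        if depth >= 30.0:
            break
    if depth < 30.0:  # extend the deepest layer if the profile is shallow
        travel_time += (30.0 - depth) / velocities_mps[-1]
    return 30.0 / travel_time

# Illustrative profile: soft soil over stiffer layers (values assumed)
print(vs30([10.0, 15.0, 20.0], [180.0, 350.0, 760.0]))  # ~286 m/s
```

Note how the single averaged number hides exactly the feature the study identifies as critical: a sharp low-velocity-over-rock contrast within the upper 30 meters.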
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
30

Dong, Siyi. "Robustness analysis of VEGA launcher model based on effective sampling strategy." Thesis, University of Exeter, 2016. http://hdl.handle.net/10871/30469.

Full text
Abstract:
An efficient robustness analysis for the VEGA launch vehicle is essential to minimize potential system failure during the ascent phase. The Monte Carlo sampling method is usually considered a reliable strategy in industry if the sample size is large enough. However, due to the large number of uncertainties and the long response time of a single simulation, exploring the entire uncertainty space sufficiently through Monte Carlo sampling is impractical for the VEGA launch vehicle. In order to make the robustness analysis more efficient when the number of simulations is limited, quasi-Monte Carlo methods (Sobol, Faure and Halton sequences) and a heuristic algorithm (Differential Evolution) are proposed. Nevertheless, the feasible number of simulation samples is still much smaller than the minimal number required for sufficient exploration. To further improve the efficiency of the robustness analysis, redundant uncertainties are sorted out by sensitivity analysis, and only the dominant uncertainties are retained in the robustness analysis. As all simulation samples are discrete, many regions of the uncertainty space are never explored with respect to the objective function by sampling or optimization methods. To exploit this latent information, a meta-model trained by Gaussian Process regression is introduced. Based on the meta-model, the expected maximum objective value and the expected sensitivity of each uncertainty can be analyzed with much higher efficiency and without losing much accuracy.
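A minimal sketch of the sampling-plus-meta-model idea described above: a Sobol low-discrepancy design feeds a Gaussian Process surrogate, which is then queried cheaply at many unvisited points. The two-parameter toy response stands in for the VEGA simulation, and the library choices (scipy's QMC module, scikit-learn's GP) are assumptions for illustration, not those of the thesis.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(X):
    # Placeholder objective; the real case is a launcher trajectory run
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2

# Low-discrepancy (Sobol) design over a normalized 2-D uncertainty box
sampler = qmc.Sobol(d=2, scramble=True, seed=1)
X_train = sampler.random_base2(m=6)            # 2^6 = 64 expensive runs
y_train = simulator(X_train)

# Gaussian Process meta-model trained on the expensive runs
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gp.fit(X_train, y_train)

# Cheap exploration of many new points, with predictive uncertainty
X_new = sampler.random(256)
mean, std = gp.predict(X_new, return_std=True)
print(float(mean.max()), float(std.max()))     # e.g. expected worst case
```

The predictive standard deviation is what distinguishes the GP from a plain response surface: it flags the unexplored regions where the expected maximum could be unreliable.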
APA, Harvard, Vancouver, ISO, and other styles
31

Eriksson, Olle. "Sensitivity and Uncertainty Analysis Methods : with Applications to a Road Traffic Emission Model." Doctoral thesis, Linköping : Linköpings universitet, Department of Mathematics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8315.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Herring, Nathan Daniel. "Sensitivity Analysis of the Forest Vegetation Simulator Southern Variant (FVS-Sn) for Southern Appalachian Hardwoods." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/34167.

Full text
Abstract:
The FVS-Sn model was developed by the USDA Forest Service to project and report forest growth and yield predictions for the Southern United States. It is able to project forest growth and yield for different forest types and management prescriptions, but it is a relatively new, complex, and untested model. These limitations notwithstanding, FVS-Sn, once tested and validated, could meet the critical need for a comprehensive growth and yield model for the mixed hardwood forests of the southern Appalachian region. In this study, sensitivity analyses were performed on the FVS-Sn model using Latin hypercube sampling. Response surfaces were fitted to determine the magnitudes and directions of relationships between FVS-Sn model parameters and predicted 10-year basal area increment. Model sensitivities were calculated for five different test scenarios for both uncorrelated and correlated FVS-Sn input parameters and sub-models. Predicted 10-year basal area increment was most sensitive to parameters and sub-models related to the stand density index and, to a lesser degree, the large tree diameter growth sub-model. The testing procedures and framework developed in this study will serve as a template for further evaluation of FVS-Sn, including a comprehensive assessment of model uncertainties, followed by a recalibration for southern Appalachian mixed hardwood forests.
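The Latin hypercube step can be sketched in a few lines; the parameter names and ranges below are hypothetical stand-ins, not the study's actual FVS-Sn inputs.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical growth-model parameters and plausible ranges (illustrative)
params = {"sdi_max": (250.0, 450.0),     # stand density index ceiling
          "lg_tree_dg": (0.8, 1.2),      # large-tree diameter growth mult.
          "mortality": (0.9, 1.1)}       # background mortality multiplier
lower = np.array([lo for lo, hi in params.values()])
upper = np.array([hi for lo, hi in params.values()])

# One stratified sample per run: each parameter's range is cut into n slices
sampler = qmc.LatinHypercube(d=len(params), seed=42)
X = qmc.scale(sampler.random(n=100), lower, upper)   # 100 model runs

# Each row of X is one parameter set fed to the growth model; regressing the
# predicted basal area increment on X then yields the response surface.
print(X[:3])
```

The stratification is the point: 100 LHS runs cover each one-dimensional range far more evenly than 100 purely random draws, which matters when every model run is costly.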
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
33

Baharudin, Mohamad Emran. "Modelling the structural response of reinforced concrete slabs exposed to fire : validation, sensitivity, and consequences for analysis and design." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31251.

Full text
Abstract:
Structural fire design represents one important aspect of the design of reinforced concrete buildings. The work presented in this thesis seeks to elucidate the structural behaviour of reinforced concrete slabs during exposure to heating from below, as would occur in the case of a building fire, with a particular focus on structural fire modelling using finite element analysis. The focus is on validating finite element models against experimental results and quantifying the sensitivity of model outputs to relevant thermal and mechanical input parameters. A primary goal of the work is to provide recommendations to structural fire engineering analysts and designers considering the performance-based design of reinforced concrete slabs for structural fire resistance using available finite element software. A critical review of the available knowledge of the structural fire response of reinforced concrete structures in general, and concrete slabs in particular, is presented, along with a discussion of the importance of understanding the structural response of concrete structures exposed to fire. Current techniques for the structural fire design of concrete structures are reviewed and shortcomings highlighted. Available experimental data are presented, and various finite element models of these slabs are developed and interrogated to identify important aspects for understanding, as well as for future improvement of similar studies (both experimental and numerical), with the intention of supporting future progress in structural fire engineering, in particular as regards the performance-based structural fire design of concrete slabs. A range of thermal and mechanical parameters that are potentially important and influential in the structural fire design of reinforced concrete slabs is then studied: fire scenario, thermal properties of materials (thermal conductivity and specific heat), heat transfer parameters (coefficient of convection and emissivity) and assumptions, restraint conditions at the supports, variations of span-to-depth ratio, reinforcement detailing, and plan aspect ratio; their influence on the structural fire response of reinforced concrete slabs is investigated and discussed. A key issue in validating finite element models against experimental results lies in correctly defining the temperature inputs to the structural finite element models. Variation of the available thermal and mechanical input parameters, as recommended in the Eurocodes, influences the predictive performance of thermal and structural finite element models; however, these are not the main contributing factors in obtaining a credible prediction of response from the finite element models. The most challenging aspect of performing heat transfer analysis for fire furnace tested reinforced concrete slabs lies in defining the correct thermal boundary condition. For simply supported one-way and two-way spanning slabs, increasing the slab's thickness (lowering the span-to-depth ratio) does not improve the fire resistance rating when both the limiting deflection criteria and the limiting tensile plastic strain are set as acceptance criteria. Two-way slabs with higher span-to-depth ratios have better fire resistance ratings, judging from the overall trends and magnitudes of mid-span deflections. The formation of plastic hinges is likely to occur for one-way spanning slabs modelled with finite rotational spring stiffness at the supports, but not for two-way spanning slabs.
A yield line mechanism in two-way slabs means that the behaviour is more complex than the simple flexural mechanism of one-way slabs. In one-way slabs, plastic hinges potentially occur at the location where the top reinforcement is curtailed, highlighting the importance of properly understanding the nuances in the response of concrete slabs in fire. Investigation of the influence of aspect ratio in two-way spanning slabs confirms that slabs with lower aspect ratios have better structural fire resistance than slabs with higher aspect ratios when both the limiting deflection criteria and the limiting tensile strain in the reinforcing steel are used as performance indicators. A combination of limiting mid-span deflection criteria and limiting tensile plastic strain is recommended when specifying acceptance criteria for both one-way and two-way slabs, since it gives a more accurate and comprehensive assessment of the structural response of the slabs under exposure to severe heating from below.
APA, Harvard, Vancouver, ISO, and other styles
34

Kapp, Ashley. "An analysis of restructuring and work design used by manufacturing organisations in response to changing global forces." Thesis, Port Elizabeth Technikon, 2004. http://hdl.handle.net/10948/145.

Full text
Abstract:
Due to the continual increase in competitive pressure from international organisations, it has become necessary to assess the degree of transformational change within South African organisations in overcoming the effect of global forces. Transformation was investigated in terms of organisational restructuring and the various work designs that are utilised by organisations to deal with the effect of global forces. To examine the main problem, three sub-problems were identified. The first sub-problem dealt with the extent to which global forces impact on the business environment. It was investigated by evaluating various economic, technological and sociopolitical forces. From the results it may be concluded that global forces have a large impact on the local business environment. The second sub-problem looked at the degree to which work designs assist organisations in managing the effect of global forces. It was evident that flexible types of work design were more readily utilised to optimise productivity and employee morale. Finally, the third sub-problem investigated the various structures that organisations could adopt to deal with the effect of global forces. Organisational structures were analysed in terms of customer orientation, fulfilment of company objectives and the types of structures that are used within organisations. The results showed that 75% of the sample population believed that their organisational structures co-ordinated all activities within their organisations. Feedback on the types of structures used by organisations revealed that various types are being used.
APA, Harvard, Vancouver, ISO, and other styles
35

Ranchod, Yudhvir. "Caught in the web : an analysis of South Africa's response to the emerging global information policy regime." Master's thesis, University of Cape Town, 2008. http://hdl.handle.net/11427/3719.

Full text
Abstract:
Includes bibliographical references (leaves 122-126).
This study provides a descriptive analysis of South Africa's response to the emerging global information policy regime. It argues that, compelled by a combination of hegemonic influences and its own self-interest, South Africa accepted the liberalising commitments of the emerging global information policy regime vis-à-vis the World Trade Organization Agreement on Basic Telecommunications. As a contribution to understanding inter-state cooperation in international relations, regime theory is utilised as the theoretical framework. The regime framework is used to explain the motivations behind South Africa's intention to liberalise its telecommunications sector as a result of power dynamics in the international system. The findings from the qualitative analysis note that South Africa's response is motivated by both systemic and domestic factors. A willingness to enter the information economy and fulfil domestic social development goals means that South Africa has to balance its obligations to the WTO with its commitments to improve domestic accessibility. As a developing country with inadequate conditions for liberalisation, South Africa was unable to stop the strategic equity partners from capitalising on the poorly regulated telecommunications environment. The unfavourable result of high tariff charges and low fixed-line connectivity can be attributed to privatisation initiatives and a lack of political will to promote competition. South Africa is in the midst of dramatic change in its telecommunications sector, aided by technological convergence, further privatisation of the incumbent and the introduction of the Second Network Operator. The international scope of this study means that liberalisation is part of South Africa's broader commitments to the emerging global information policy regime. Entering the information economy is conditional on the successful implementation of international liberalisation policies so that the required investment and skills can assist in providing universal service to the majority of South African citizens. However, implementation requires a fair market structure, independent regulation and low interconnection charges. Without these important structures in place, this study notes that the goal of participation in the information economy, and of economic growth as a result of effective telecommunications utilisation, is a distant reality.
APA, Harvard, Vancouver, ISO, and other styles
36

Waibel, Michael Scott. "Model Analysis of the Hydrologic Response to Climate Change in the Upper Deschutes Basin, Oregon." PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/45.

Full text
Abstract:
Considerable interest lies in understanding the hydrologic response to climate change in the upper Deschutes Basin, particularly as it relates to groundwater fed streams. Much of the precipitation occurring in the recharge zone falls as snow. Consequently, the timing of runoff and recharge depends on the accumulation and melting of the snowpack. Numerical modeling can provide insights into the evolving hydrologic system response for resource management consideration. A daily mass and energy balance model known as the Deep Percolation Model (DPM) was developed for the basin in the 1990s. This model uses spatially distributed data and is driven with daily climate data to calculate both daily and monthly mass and energy balances for the major components of the hydrologic budget across the basin. Previously, historical daily climate data from weather stations in the basin were used to drive the model. Now we use the University of Washington Climate Impact Group's 1/16th degree daily downscaled climate data to drive the DPM for forecasts to the end of the 21st century. The downscaled climate data comprise the mean of eight GCM simulations well suited to the Pacific Northwest. Furthermore, low-emission and high-emission scenarios are associated with each ensemble member, leading to two distinct means. For the entire basin, progressing into the 21st century, output from the DPM using both emission scenarios as forcing shows changes in the timing of runoff and recharge as well as significant reductions in snowpack. Although the amounts of recharge and runoff calculated by the DPM vary between the emission scenarios, all model output shows loss of the spring snowmelt runoff / recharge peak as time progresses. The response of the groundwater system to changes in the timing and amount of recharge varies spatially. Short flow paths in the upper part of the basin are potentially more sensitive to the change in seasonality; however, geologic controls on the system cause this signal to attenuate as it propagates into the lower portions of the basin. This scale-dependent variation in the response of the groundwater system to changes in the seasonality and magnitude of recharge is explored by applying DPM-calculated recharge to an existing regional groundwater flow model.
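The DPM itself is a detailed, spatially distributed mass and energy balance model; purely as a toy illustration of why warming shifts the timing of recharge, a single-cell degree-day snow bucket (all coefficients assumed) behaves as follows.

```python
def snow_bucket(precip_mm, temp_c, melt_factor=3.0, t_melt=0.0):
    """Toy degree-day snow model: precipitation accumulates as snow at or
    below t_melt and melts at melt_factor (mm per degC per day) above it.
    Returns daily water available for runoff/recharge. A crude sketch,
    not the Deep Percolation Model."""
    swe, out = 0.0, []
    for p, t in zip(precip_mm, temp_c):
        if t <= t_melt:
            swe += p          # store precipitation as snow water equivalent
            melt = 0.0
        else:
            melt = min(swe, melt_factor * (t - t_melt))
            swe -= melt
        out.append(melt + (p if t > t_melt else 0.0))
    return out

precip = [10.0] * 10
temps = [-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 3.0, 5.0, 8.0, 10.0]
print(snow_bucket(precip, temps))                     # delayed spring pulse
print(snow_bucket(precip, [t + 5.0 for t in temps]))  # immediate winter rain
```

Warming the same sequence by a few degrees converts stored spring melt into immediate winter rain, which is the seasonality shift the DPM projections show basin-wide.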
APA, Harvard, Vancouver, ISO, and other styles
37

Cheng, Chen. "Semi-global Analysis of the Early Cold Stress Response Transcriptome of Developing Seedlings of Rice (Oryza sativa L., japonica)." Fogler Library, University of Maine, 2006. http://www.library.umaine.edu/theses/pdf/ChengC2006.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Witte, Irene [Verfasser], and Thilo [Akademischer Betreuer] Streck. "Multi-objective and multi-variate global sensitivity analysis of the soil-crop model XN-CERES in Southwest Germany / Irene Witte ; Betreuer: Thilo Streck." Hohenheim : Kommunikations-, Informations- und Medienzentrum der Universität Hohenheim, 2021. http://nbn-resolving.de/urn:nbn:de:bsz:100-opus-19280.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Meynaoui, Anouar. "New developments around dependence measures for sensitivity analysis : application to severe accident studies for generation IV reactors." Thesis, Toulouse, INSA, 2019. http://www.theses.fr/2019ISAT0028.

Full text
Abstract:
As part of safety studies for nuclear reactors, numerical simulators are essential for understanding, modelling and predicting physical phenomena. However, the information on some of the input variables of the simulator is often limited or uncertain. In this framework, Global Sensitivity Analysis (GSA) aims at determining how the variability of the input parameters affects the value of the output or quantity of interest. The work carried out in this thesis proposes new statistical methods based on dependence measures for the GSA of numerical simulators. We are particularly interested in HSIC-type dependence measures (Hilbert-Schmidt Independence Criterion). After Chapters 1 and 2, which introduce the general context and motivations of the thesis in French and English versions respectively, Chapter 3 first presents a general review of HSIC measures in a theoretical and methodological framework. Subsequently, new developments around the estimation of HSIC measures from an alternative sample, inspired by importance sampling techniques, are proposed. As a result of these theoretical developments, an efficient methodology for GSA in the presence of uncertainty in the input probability distributions is developed in Chapter 4. The relevance of the proposed methodology is first demonstrated on an analytical case before being applied to the MACARENa simulator, which models a ULOF (Unprotected Loss Of Flow) accident scenario in a sodium-cooled fast neutron reactor. Finally, Chapter 5 deals with the development of an independence test aggregating several parametrizations of HSIC kernels, which allows a wider spectrum of dependencies between the inputs and the output to be captured. The optimality of this methodology is first demonstrated from a theoretical point of view. Then, its performance and practical interest are illustrated on several analytical examples as well as on the test case of the MACARENa simulator.
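For orientation, the standard biased estimator of HSIC with Gaussian kernels can be written in a few lines. The fixed bandwidths below are an assumption for illustration, whereas the thesis precisely studies aggregating several kernel parametrizations.

```python
import numpy as np

def hsic(x, y, sigma_x=1.0, sigma_y=1.0):
    """Biased V-statistic estimator of HSIC with Gaussian kernels on scalar
    samples: HSIC = trace(K H L H) / (n - 1)^2, H the centering matrix.
    Larger values indicate stronger dependence between x and y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    def gram(z, sigma):
        d2 = (z[:, None] - z[None, :]) ** 2
        return np.exp(-d2 / (2.0 * sigma ** 2))
    K, L = gram(x, sigma_x), gram(y, sigma_y)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=500)
print(hsic(x, x**2 + 0.1 * rng.normal(size=500)))  # dependent pair: large
print(hsic(x, rng.normal(size=500)))               # independent pair: ~0
```

Note that the dependent pair here is non-monotonic (y depends on x through x²), the kind of relationship that correlation-based screening misses but HSIC detects.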
APA, Harvard, Vancouver, ISO, and other styles
40

Minunno, Francesco. "On the use of the bayesian approach for the calibration, evaluation and comparison of process-based forest models." Doctoral thesis, ISA/UL, 2014. http://hdl.handle.net/10400.5/7350.

Full text
Abstract:
Doctorate in Forestry and Natural Resources Engineering - Instituto Superior de Agronomia
Forest ecosystems have been experiencing fast and abrupt changes in environmental conditions that can increase their vulnerability to extreme events such as drought, heat waves, storms and fire. Process-based models can draw inferences about future environmental dynamics, but the reliability and robustness of vegetation models are conditional on their structure and parametrisation. The main objective of the PhD was to implement and apply modern computational techniques, mainly based on Bayesian statistics, in the context of forest modelling. A variety of case studies was presented, spanning from growth prediction models to soil respiration models and process-based models. The great potential of the Bayesian method for reducing uncertainty in parameters and outputs and for model evaluation was shown. Furthermore, a new methodology based on a combination of a Bayesian framework and global sensitivity analysis was developed, with the aim of identifying strengths and weaknesses of process-based models and testing modifications to model structure. Finally, part of the PhD research focused on reducing the computational load in order to take full advantage of Bayesian statistics. It was shown how parameter screening impacts model performance, and a new methodology for parameter screening, based on canonical correlation analysis, was presented.
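The Bayesian calibration loop at the core of such studies can be sketched with a random-walk Metropolis sampler. The linear toy "forest model" and the tuning constants below are illustrative assumptions, not the models or samplers actually used in the thesis.

```python
import numpy as np

def metropolis(log_post, theta0, steps, prop_sd, rng):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, posterior ratio). Returns the sampled chain."""
    theta = np.asarray(theta0, float)
    lp = log_post(theta)
    chain = []
    for _ in range(steps):
        prop = theta + rng.normal(scale=prop_sd, size=theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Toy 'forest model' y = a * t, calibrated to noisy observations
rng = np.random.default_rng(3)
t = np.arange(1.0, 11.0)
obs = 2.5 * t + rng.normal(scale=1.0, size=t.size)
log_post = lambda th: -0.5 * np.sum((obs - th[0] * t) ** 2)  # flat prior
chain = metropolis(log_post, [1.0], 5000, 0.05, rng)
print(chain[2500:].mean(axis=0))  # posterior mean for 'a', near 2.5
```

Each chain step requires one model evaluation, which is exactly why the thesis's focus on parameter screening and computational load matters for realistic process-based models.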
APA, Harvard, Vancouver, ISO, and other styles
41

Kyaw, Nang Thu Thu Dr. "Content Analysis of National Strategic Plans on HIV/AIDS and Global AIDS Response Progress Reports from Eight Southeast Asia Countries." Digital Archive @ GSU, 2013. http://digitalarchive.gsu.edu/iph_theses/263.

Full text
Abstract:
The purpose of this study is to explore the national policies, strategies, and programmatic responses on HIV/AIDS in eight Southeast Asia countries by analyzing the contents of the National Strategic Plans on HIV/AIDS (NSPs) and the biennial country progress reports to UNAIDS from these countries. METHODS: Thematic content analysis was used to analyze a total of 24 National Strategic Plan on HIV/AIDS documents and Global AIDS Response Progress Reports submitted to UNAIDS from Cambodia, Indonesia, Laos, Malaysia, Myanmar, the Philippines, Thailand and Viet Nam. NVivo10 qualitative analysis software was used for coding and organizing documents. RESULTS: 28 main categories with sub-categories emerged from the coding and analysis of the NSP and country progress report documents from the eight SEA countries. The NSPs from all countries notably failed to tackle key topics in the policy, social and economic environment around HIV control, such as women's empowerment, illiteracy, armed conflicts, natural disasters and humanitarian emergencies. CONCLUSION: In order to align with the global HIV strategy to reach the Millennium Development Goal of stopping the spread of HIV by 2015, SEA countries should improve their NSPs and progress reports by addressing the political, social, cultural, and economic factors that urgently need attention. New technologies and approaches are important for developing HIV interventions to stop the HIV epidemic, but addressing the policy, economic and social environment around the HIV epidemic and its control in the SEA region is key for those HIV intervention strategies and programs to be effective in controlling HIV.
APA, Harvard, Vancouver, ISO, and other styles
42

Aroba, Nidhi. "Implementation and Success Analysis of Various Global Graduated Response Programs for Piracy with Special Focus on the "Six Strikes" Policy." Thesis, The University of Arizona, 2013. http://hdl.handle.net/10150/297511.

Full text
Abstract:
Since the advent and continual improvement of Internet technologies, digital piracy has become ubiquitous. As a result, digital piracy is a global issue that transcends national borders, governments, and legal jurisdictions. Several players, such as national music and movie industries, political groups, and Internet freedom groups, have become embroiled in the issue in an effort to better understand digital piracy. With this interest, a new field focusing on piracy research has emerged, and researchers have begun to understand the demographics, motivations, methodologies, and implications behind Internet piracy. Meanwhile, industries argue that piracy hurts their bottom line and deters artists from creating new content. Through these arguments, the industries have been somewhat successful in implementing anti-piracy programs globally, such as in France and, most recently, the United States. The new program, called the Copyright Alert System (CAS) or "Six Strikes", went into effect in early 2013, and it is unclear how effective it will be. This thesis assesses the success of international anti-piracy plans, such as those in France and China, in relation to the U.S. plan. Further, this thesis conducts and analyzes a survey study to understand how U.S. Internet users will react to the new plan.
APA, Harvard, Vancouver, ISO, and other styles
43

Lu, Rong. "Statistical Methods for Functional Genomics Studies Using Observational Data." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1467830759.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Aleryd, Sarah, and Garpenholt Lydia Frassine. "From Climate Change to Conflict : An analysis of the climate-conflict nexus in communications on climate change response." Thesis, Högskolan för lärande och kommunikation, Jönköping University, HLK, Globala studier, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-49218.

Full text
Abstract:
This study explores the portrayal of the climate-conflict nexus in global and national communications on climate change response. It utilizes a qualitative inductive approach; the IPCC AR5 (2014) was chosen to represent global communication documents, while two Afghan communications on climate change and response, the Initial National Communication and the Second National Communication, were used to represent the national level. Through a content analysis, several themes through which the climate-conflict nexus is portrayed were discerned. It can be concluded that there are several differences between the global and the Afghan communication documents, as well as between the Initial National Communication (2012) and the Second National Communication (2017). The Second National Communication overall attempts to mirror the communication used by the IPCC by using the same themes, but in a more indirect way. The analysis finds that the climate-conflict nexus is often portrayed through indirect communication and that this leads to a lack of conflict sensitivity in the Afghan national documents; the study concludes by making suggestions on how to improve conflict sensitivity in these documents.
APA, Harvard, Vancouver, ISO, and other styles
45

Lee, Seungman. "Optimization and Simulation Based Cost-Benefit Analysis on a Residential Demand Response : Applications to the French and South Korean Demand Response Mechanisms." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED054.

Full text
Abstract:
Worldwide concern about CO2 emissions, climate change, and the energy transition has led us to pay more attention to Demand-Side Management (DSM). In particular, with Demand Response (DR), we can expect several benefits, such as increased efficiency of the entire electricity market, enhanced security of electricity supply through reduced peak demand, and more efficient and desirable investment, as well as environmental advantages and support for renewable energy sources. In Europe, France launched the NEBEF mechanism at the end of 2013, and South Korea inaugurated its market-based DR program at the end of 2014. Among the economic issues and assumptions that need to be taken into consideration for DR, Customer Baseline Load (CBL) estimation is one of the most important and fundamental elements. In this research, based on a re-scaled load profile for an average household, several CBL estimation methods are established and examined thoroughly for both the Korean and the French DR mechanisms. This investigation of CBL estimation methods could contribute to the search for a better and more accurate CBL estimation method that will increase the motivation of DR participants. With the estimated CBLs, Cost-Benefit Analyses (CBAs) are conducted, which, in turn, are utilized in the decision-making analysis for DR participants. For the CBAs, a simple mathematical model using linear algebra is set up and modified to represent each DR mechanism's parameters. This model is expected to provide an intuitive and clear understanding of the DR mechanisms. This generic DR model can be used for different countries and sectors (e.g. residential, commercial, and industrial) with a few modifications. Monte Carlo simulation is used to reflect the stochastic nature of reality, and optimization is used to represent and understand the rationality of DR participants and to provide micro-economic explanations of their behaviour. In order to draw meaningful implications for a better DR market design, several Sensitivity Analyses (SAs) are conducted on the key elements of the model for both DR mechanisms.
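As one concrete example of CBL estimation, a common "high X of Y" averaging rule (a widespread family of market-rule baselines, not necessarily one of the methods examined in the thesis) computes the baseline and the credited load reduction as follows; all load values are placeholders.

```python
import numpy as np

def cbl_high_x_of_y(history, x=4, y=5):
    """Customer Baseline Load from the last y non-event days: keep the x
    days with the highest total consumption and average their hourly
    profiles. history: (n_days, 24) array of hourly loads in kW."""
    recent = np.asarray(history[-y:], dtype=float)
    top = recent[np.argsort(recent.sum(axis=1))[-x:]]  # x highest-usage days
    return top.mean(axis=0)                            # 24-hour baseline

rng = np.random.default_rng(7)
history = 1.0 + 0.2 * rng.random((10, 24))   # placeholder household loads
baseline = cbl_high_x_of_y(history)
actual = 0.8 * (1.0 + 0.2 * rng.random(24))  # event-day load, ~20% lower
reduction = np.clip(baseline - actual, 0.0, None)
print(reduction.sum())                       # credited curtailment (kWh)
```

Because participants are paid for the gap between baseline and metered load, the choice of estimator directly shapes their incentives, which is why the thesis compares several such methods.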
APA, Harvard, Vancouver, ISO, and other styles
46

Matamba, Tshimangadzo Merline. "Statistical analysis of the ionospheric response during storm conditions over South Africa using ionosonde and GPS data." Thesis, Rhodes University, 2015. http://hdl.handle.net/10962/d1017899.

Full text
Abstract:
Ionospheric storms are an extreme form of space weather phenomena that affect space- and ground-based technological systems. Extreme solar activity may give rise to Coronal Mass Ejections (CMEs) and solar flares that may result in ionospheric storms. This thesis reports on a statistical analysis of the ionospheric response over the ionosonde stations Grahamstown (33.3◦S, 26.5◦E) and Madimbo (22.4◦S, 30.9◦E), South Africa, during geomagnetic storm conditions that occurred during the period 1996 - 2011. Total Electron Content (TEC) derived from Global Positioning System (GPS) data recorded by a dual-frequency receiver, together with ionosonde data at Grahamstown, was analysed for the storms that occurred during the period 2006 - 2011. A comprehensive analysis of the critical frequency of the F2 layer (foF2) and TEC was done. To identify geomagnetically disturbed conditions, the Disturbance storm time (Dst) index with a storm criterion of Dst ≤ −50 nT was used. The ionospheric disturbances were categorized into three responses, namely single disturbance, double disturbance and not significant (NS) ionospheric storms. Single disturbance ionospheric storms refer to positive (P) and negative (N) ionospheric storms observed separately, while double disturbance storms refer to negative and positive ionospheric storms observed during the same storm period. The statistics show the impact of geomagnetic storms on the ionosphere and indicate that negative ionospheric effects follow the solar cycle. In general, only a few ionospheric storms (0.11%) were observed during solar minimum. Positive ionospheric storms occurred most frequently (47.54%) during the declining phase of solar cycle 23. Seasonally, negative ionospheric storms occurred mostly during summer (63.24%), while positive ionospheric storms occurred most frequently during winter (53.62%). An important finding is that only negative ionospheric storms were observed during great geomagnetic storm activity (Dst ≤ −350 nT). For periods when both ionosonde and GPS data were available, the two data sets indicated similar ionospheric responses. Hence, GPS data can be used effectively to identify the ionospheric response in the absence of ionosonde data.
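The storm selection step is straightforward to sketch: hours with Dst ≤ −50 nT are flagged and grouped into discrete events to be paired with the foF2 and TEC records. The Dst values below are placeholders, not the 1996 - 2011 record.

```python
import numpy as np

# Hourly Dst series in nT -- placeholder values for illustration
dst = np.array([-12, -35, -55, -80, -62, -48, -20, -60, -70, -30, -5])

storm_hours = np.flatnonzero(dst <= -50)   # thesis criterion: Dst <= -50 nT
# Group contiguous disturbed hours into discrete storm events
events = np.split(storm_hours, np.flatnonzero(np.diff(storm_hours) > 1) + 1)
print([list(e) for e in events])           # [[2, 3, 4], [7, 8]]
```

Each resulting event window would then be compared against the quiet-time foF2/TEC medians to classify the response as positive, negative, double, or not significant.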
APA, Harvard, Vancouver, ISO, and other styles
47

Stewart, Dawn L. "Numerical Methods for Accurate Computation of Design Sensitivities." Diss., Virginia Tech, 1998. http://hdl.handle.net/10919/30561.

Full text
Abstract:
This work is concerned with the development of computational methods for approximating sensitivities of solutions to boundary value problems. We focus on the continuous sensitivity equation method and investigate the application of adaptive meshing and smoothing projection techniques to enhance the basic scheme. The fundamental ideas are first developed for a one dimensional problem and then extended to 2-D flow problems governed by the incompressible Navier-Stokes equations. Numerical experiments are conducted to test the algorithms and to investigate the benefits of adaptivity and smoothing.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
48

Dolcine, Leslie. "Prévision quantitative à très courte échéance de la pluie : modèle global adapté à l'information radar." Grenoble 1, 1997. http://www.theses.fr/1997GRE10067.

Full text
Abstract:
Very short-range quantitative precipitation forecasting can help improve flood forecasting for at-risk catchments and the management of urban storm drainage systems. Until now, this forecasting has been carried out by simple advection of radar observations, which assumes that the dynamics of the precipitating cloud are stationary. The multi-site use of radars, allowing a volumetric exploration of the atmosphere, and the real-time availability of surface meteorological data and satellite data, have encouraged the development of a new approach to quantitative rainfall forecasting. In this approach, the precipitating cloud is conceptualized as an atmospheric column whose response time depends on the microphysical parameters of the precipitation and on the vertical profile of precipitating water content in this column. The equations governing the evolution of this atmospheric column are derived from the continuity equations for air, water vapour, cloud water and precipitating water, together with the laws of thermodynamics and a simplified microphysics. Gradual improvements were introduced into the initial global model, which reduced to the evolution equation of the precipitating liquid water. An additional equation describing the vertical velocity, and accounting for the orographic enhancement of rainfall, which is very important in mountainous regions, was added. This model, applied to rainfall events from the Cévennes radar experiment and to simulated rainfall events, proved superior in most cases to two simpler forecasting methods: persistence and advection. A sensitivity analysis showed the importance of the vertical velocity and the weak influence of surface meteorological data on the model results. The quality of the forecast in the global approach depends on the three-dimensional view of the rain field, the variability of the rainfall, and the validity of the evolution assumptions. For a better description of highly variable rain fields, the formulation of the global model was extended to include cloud water. The potential interest of this model was demonstrated by comparison with a microphysical model. The estimation of cloud water nevertheless remains a prerequisite for evaluating this new formulation on real data.
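The advection benchmark that the thesis's global model is compared against can be sketched in a few lines: the observed radar field is simply shifted by an estimated displacement per time step, which encodes exactly the stationarity hypothesis the global model relaxes. Grid size and displacement are illustrative.

```python
import numpy as np

def advect_forecast(rain_field, displacement, steps=1):
    """Persistence-plus-advection nowcast: shift the observed radar rain
    field by a fixed displacement (pixels per step), assuming the
    precipitating cloud evolves as a rigid, stationary pattern.
    Edge wrap-around from np.roll is ignored for simplicity."""
    dy, dx = displacement
    forecast = rain_field
    for _ in range(steps):
        forecast = np.roll(forecast, shift=(dy, dx), axis=(0, 1))
    return forecast

field = np.zeros((5, 5))
field[1, 1] = 8.0                               # one rain cell, in mm/h
print(advect_forecast(field, (1, 1), steps=2))  # cell moved to (3, 3)
```

The intensity 8.0 is carried along unchanged; the global model's contribution is precisely to let that intensity grow or decay according to the column's water budget.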
APA, Harvard, Vancouver, ISO, and other styles
49

Szepietowska, Katarzyna. "POLYNOMIAL CHAOS EXPANSION IN BIO- AND STRUCTURAL MECHANICS." Thesis, Bourges, INSA Centre Val de Loire, 2018. http://www.theses.fr/2018ISAB0004/document.

Full text
Abstract:
This thesis presents a probabilistic approach to modelling the mechanics of materials and structures in which the modelled performance is influenced by uncertainty in the input parameters. The work is interdisciplinary, and the methods described are applied to medical and civil engineering problems. The motivation for this work was the necessity of mechanics-based approaches in the modelling and simulation of implants used in the repair of ventral hernias. Many uncertainties appear in the modelling of the implant-abdominal wall system. The probabilistic approach proposed in this thesis enables these uncertainties to be propagated to the output of the model and their respective influences to be investigated. The regression-based polynomial chaos expansion method is used here. However, the accuracy of such non-intrusive methods depends on the number and location of sampling points. Finding a universal method to achieve a good balance between accuracy and computational cost is still an open question, so different approaches are investigated in this thesis in order to choose an efficient method. Global sensitivity analysis is used to investigate the respective influences of the input uncertainties on the variation of the outputs of different models. The uncertainties are propagated to the implant-abdominal wall models in order to draw conclusions important for further research. Using the expertise acquired from the biomechanical models, the modelling of historic timber joints and the simulation of their mechanical behaviour are undertaken. Such an investigation is important owing to the need for efficient planning of repairs and renovation of buildings of historical value.
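A minimal sketch of regression-based polynomial chaos for a single standard-normal input (the model, the degree and the sample size are illustrative, not the thesis's finite element cases): the coefficients are found by least squares, and the mean and variance then follow from the orthogonality of the Hermite basis.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# 'Expensive model' outputs at sampled input realizations xi ~ N(0, 1)
rng = np.random.default_rng(5)
xi = rng.normal(size=200)
y = np.exp(0.3 * xi) + 0.1 * xi ** 2          # placeholder response

# Non-intrusive step: regress y on probabilists' Hermite polynomials He_k
degree = 4
Psi = hermevander(xi, degree)                 # design matrix (200, 5)
coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# E[He_j He_k] = k! * delta_jk under N(0,1), so moments come for free:
norms = np.array([math.factorial(k) for k in range(degree + 1)])
mean = coeffs[0]
variance = np.sum(coeffs[1:] ** 2 * norms[1:])
print(mean, variance)
```

The open question the thesis addresses is visible even here: the quality of `coeffs` depends entirely on how many realizations are drawn and where they fall, i.e. on the number and location of the sampling points.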
APA, Harvard, Vancouver, ISO, and other styles
50

Singh, Kumaresh. "Efficient Computational Tools for Variational Data Assimilation and Information Content Estimation." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/39125.

Full text
Abstract:
The overall goals of this dissertation are to advance the field of chemical data assimilation and to develop efficient computational tools that allow the atmospheric science community to benefit from state-of-the-art assimilation methodologies. Data assimilation is the procedure of combining data from observations with model predictions to obtain a more accurate representation of the state of the atmosphere. As models become more complex, determining the relationships between pollutants and their sources and sinks becomes computationally more challenging. The construction of an adjoint model (capable of efficiently computing sensitivities of a few model outputs with respect to many input parameters) is a difficult, labor intensive, and error prone task. This work develops adjoint systems for two of the most widely used chemical transport models: Harvard's GEOS-Chem global model and the Environmental Protection Agency's CMAQ regional air quality model. Both the GEOS-Chem and CMAQ adjoint models are now used by the atmospheric science community to perform sensitivity analysis and data assimilation studies. Despite the continuous increase in capabilities, models remain imperfect, and models alone cannot provide accurate long term forecasts. Observations of the atmospheric composition are now routinely taken from sondes, ground stations, aircraft, satellites, etc. This work develops three- and four-dimensional variational data assimilation capabilities for GEOS-Chem and CMAQ which allow the estimation of chemical states that best fit the observed reality. Most data assimilation systems to date use diagonal approximations of the background covariance matrix, which ignore error correlations and may lead to inaccurate estimates. This dissertation develops computationally efficient representations of covariance matrices that capture spatial error correlations in data assimilation. Not all observations used in data assimilation are of equal importance. Erroneous and redundant observations not only affect the quality of an estimate but also add unnecessary computational expense to the assimilation system. This work proposes information-theoretic techniques to quantify the information content of observations used in assimilation. The four-dimensional variational approach to data assimilation provides accurate estimates but requires the construction of an adjoint and considerable computational resources. This work studies versions of the four-dimensional variational method (Quasi 4D-Var) that use approximate gradients and are less expensive to develop and run. Variational and Kalman filter approaches are both used in data assimilation, but their relative merits and disadvantages in the context of chemical data assimilation have not been assessed. This work provides a careful comparison on a chemical assimilation problem with real data sets. The assimilation experiments performed here demonstrate for the first time the benefit of using satellite data to improve estimates of tropospheric ozone.
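The 4D-Var idea can be reduced to a toy scalar example: minimize a cost that penalizes departure from both the background state and the observations, with the gradient supplied by the (here trivial) adjoint of the forward model. All numbers are illustrative, and the real systems replace this scalar recursion with a full chemical transport model.

```python
import numpy as np

# Toy linear forward model x_{k+1} = m * x_k with direct observations y_k.
m, B, R = 0.9, 1.0, 0.25     # model factor, background and obs variances
xb = 1.0                     # background (prior) initial state
y = np.array([1.1, 0.95, 0.80, 0.70])   # observations at steps 1..4

def cost_and_grad(x0):
    """4D-Var cost J(x0) and its gradient; for this linear model the
    adjoint sensitivities dx_k/dx0 are simply the powers m**(k+1)."""
    powers = m ** np.arange(1, len(y) + 1)
    innov = x0 * powers - y
    J = 0.5 * (x0 - xb) ** 2 / B + 0.5 * np.sum(innov ** 2) / R
    grad = (x0 - xb) / B + np.sum(innov * powers) / R
    return J, grad

# Steepest-descent assimilation loop (real systems use quasi-Newton methods)
x0 = xb
for _ in range(200):
    J, g = cost_and_grad(x0)
    x0 -= 0.05 * g
print(x0, J)   # analysis state balancing background and observations
```

The adjoint's role is visible in `grad`: one backward sweep yields the sensitivity of the whole observation-fit term to the initial state, which is what makes 4D-Var tractable when the state has millions of components.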
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles