Dissertations / Theses on the topic 'Sensitivity indices'

Consult the top 40 dissertations / theses for your research on the topic 'Sensitivity indices.'

1

GIOIA, PAOLA. "Towards more accurate measures of global sensitivity analysis. Investigation of first and total order indices." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2013. http://hdl.handle.net/10281/45695.

Abstract:
A new technique for estimating variance-based total sensitivity indices from given data is developed. A new approach is also developed for estimating first-order effects given a specific sample design. This method adopts the RBD approach published by Tarantola et al. (2007) for the computation of first-order sensitivity indices in combination with quasi-random numbers.
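
For orientation, the indices discussed above are the classic variance-based Sobol' measures. Below is a minimal Python sketch of the standard pick-freeze Monte Carlo estimators (Saltelli/Jansen forms) of first-order and total-order indices for independent uniform inputs; it is a generic illustration, not the RBD/quasi-random scheme developed in the thesis, and the test function is hypothetical.

```python
import numpy as np

def sobol_first_and_total(model, n, dim, rng=None):
    """Pick-freeze Monte Carlo estimates of first-order and total-order Sobol'
    indices for independent inputs uniform on [0, 1]^dim.
    `model` maps an (n, dim) array of inputs to an (n,) array of outputs."""
    rng = np.random.default_rng(rng)
    A = rng.random((n, dim))                 # base sample
    B = rng.random((n, dim))                 # independent resample
    yA, yB = model(A), model(B)
    var_y = np.concatenate([yA, yB]).var(ddof=1)
    S1, ST = np.empty(dim), np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # "freeze" column i at the resample
        yABi = model(ABi)
        S1[i] = np.mean(yB * (yABi - yA)) / var_y        # first-order (Saltelli 2010)
        ST[i] = 0.5 * np.mean((yA - yABi) ** 2) / var_y  # total-order (Jansen)
    return S1, ST

# Hypothetical test model: g(x) = x1 + 2*x2 + 0.5*x1*x3
g = lambda x: x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 2]
print(sobol_first_and_total(g, n=20_000, dim=3, rng=42))
```
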
2

Fernandez, Chas Margarita. "Insulin sensitivity estimates from a linear model of glucose disappearance." Thesis, University of Sussex, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341544.

3

Moore, Alan D. "Reproducibility and sensitivity of Doppler echocardiographic indices of left ventricular function during exercise." Diss., Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/53648.

Abstract:
The two most common methods used for the assessment of left ventricular function (LVF) are two-dimensional echocardiography and nuclear ventriculography. Recent technological advances have led to the development of an inexpensive, noninvasive alternative: the stand-alone continuous wave Doppler echocardiograph. The purposes of this study were twofold: 1) to examine the repeatability of three Doppler-measured indices of LVF during repeated exercise trials, and 2) to determine if induced changes in myocardial contractility would be reflected by changes in the Doppler indices. The Doppler indices of LVF were the peak acceleration of ascending aortic blood (pkA), peak velocity of ascending aortic blood (pkV), and the integral of the velocity-time waveform (SVI). The study was conducted in two phases. In the first phase, 44 young, healthy males performed similar graded cycle exercise tasks on two separate days. Exercise levels were increased by 50 W every three minutes. PkA, pkV, SVI, blood pressure, heart rate and oxygen consumption were recorded at every stage. The test was continued until the subject reached symptom-limited maximum. Pearson product-moment correlation coefficients were used to determine the reproducibility of the dependent measures between the two tests. The second phase involved the testing of a subset of the original 44 subjects (N=18) under a placebo (control) condition, acute beta-blockade, and oral hyperhydration states. Hematocrit was measured as a means to assess blood volume changes. The subjects exercised at levels requiring 20, 40 and 60% of their maximum oxygen consumption. Each stage lasted six minutes. PkA, pkV, SVI, heart rate, blood pressure, cardiac output, and stroke volume were measured. The latter two were determined by a carbon dioxide rebreathing technique. This was a split-plot design with multiple dependent measures. The statistical analysis was a multivariate analysis of variance (MANOVA) with repeated measures. Appropriate univariate tests were utilized as post-hoc procedures. With respect to the first phase, the correlation coefficients ranged from 0.54-0.81 for pkA, 0.65-0.77 for pkV, and 0.40-0.71 for SVI. The results of the second phase indicated that alterations in contractile status induced by beta-blockade were reflected by changes in the Doppler measures, but the hyperhydration state did not produce a detectable change in cardiac contractile response. There were no documented changes in plasma volume as measured by change in hematocrit; the hyperhydration procedure was therefore judged ineffective. PkA and pkV were significantly reduced (p<.01) at all stages of exercise in the beta-blocked state as compared to the placebo values. Cardiac output and heart rate were significantly lower in the beta-blocked state, and stroke volume was significantly higher. The results of this experiment indicate that continuous wave Doppler echocardiographic estimates of LVF are reproducible (r=0.40-0.81) and reflect changes in myocardial contractility induced by acute beta-blockade.
Ph. D.
4

Wajahat, Qazi Hassan. "Development of Sensitivity Based Indices for Optimal Placement of UPFC to Minimize Load Curtailment Requirements." Thesis, KTH, Elektriska energisystem, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-119252.

5

Chastaing, Gaëlle. "Indices de Sobol généralisés par variables dépendantes." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM046.

Abstract:
Dans un modèle qui peut s'avérer complexe et fortement non linéaire, les paramètres d'entrée, parfois en très grand nombre, peuvent être à l'origine d'une importante variabilité de la sortie. L'analyse de sensibilité globale est une approche stochastique permettant de repérer les principales sources d'incertitude du modèle, c'est-à-dire d'identifier et de hiérarchiser les variables d'entrée les plus influentes. De cette manière, il est possible de réduire la dimension d'un problème, et de diminuer l'incertitude des entrées. Les indices de Sobol, dont la construction repose sur une décomposition de la variance globale du modèle, sont des mesures très fréquemment utilisées pour atteindre de tels objectifs. Néanmoins, ces indices se basent sur la décomposition fonctionnelle de la sortie, aussi connue soue le nom de décomposition de Hoeffding. Mais cette décomposition n'est unique que si les variables d'entrée sont supposées indépendantes. Dans cette thèse, nous nous intéressons à l'extension des indices de Sobol pour des modèles à variables d'entrée dépendantes. Dans un premier temps, nous proposons une généralisation de la décomposition de Hoeffding au cas où la forme de la distribution des entrées est plus générale qu'une distribution produit. De cette décomposition généralisée aux contraintes d'orthogonalité spécifiques, il en découle la construction d'indices de sensibilité généralisés capable de mesurer la variabilité d'un ou plusieurs facteurs corrélés dans le modèle. Dans un second temps, nous proposons deux méthodes d'estimation de ces indices. La première est adaptée à des modèles à entrées dépendantes par paires. Elle repose sur la résolution numérique d'un système linéaire fonctionnel qui met en jeu des opérateurs de projection. La seconde méthode, qui peut s'appliquer à des modèles beaucoup plus généraux, repose sur la construction récursive d'un système de fonctions qui satisfont les contraintes d'orthogonalité liées à la décomposition généralisée. En parallèle, nous mettons en pratique ces différentes méthodes sur différents cas tests
A mathematical model aims at characterizing a complex system or process that is too expensive to experiment on. However, in this model, which is often strongly nonlinear, the input parameters can be affected by large uncertainty, including measurement errors or a lack of information. Global sensitivity analysis is a stochastic approach whose objective is to identify and to rank the input variables that drive the uncertainty of the model output. Through this analysis, it is then possible to reduce the model dimension and the variation in the output of the model. To reach this objective, the Sobol indices are commonly used. Based on the functional ANOVA decomposition of the output, also called the Hoeffding decomposition, they rest on the assumption that the inputs are independent. Our contribution concerns the extension of Sobol indices for models with non-independent inputs. On the one hand, we propose a generalized functional decomposition whose components are subject to specific orthogonality constraints. This decomposition leads to the definition of generalized sensitivity indices able to quantify the dependent inputs' contribution to the model variability. On the other hand, we propose two numerical methods to estimate these indices. The first one is well suited to models with independent pairs of dependent input variables. The method works by solving a linear system involving suitable projection operators. The second method can be applied to more general models. It relies on the recursive construction of functional systems satisfying the orthogonality properties of the summands of the generalized decomposition. In parallel, we illustrate the two methods on numerical examples to test the efficiency of the techniques.
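
For background, the Hoeffding (functional ANOVA) decomposition and the resulting Sobol' indices in the independent-input case, which the thesis generalizes to dependent inputs, read as follows in standard notation (not the thesis's own notation):

```latex
% Hoeffding / functional ANOVA decomposition of Y = f(X_1, ..., X_d)
% for independent inputs, and the Sobol' index of a subset u of inputs.
\[
  f(X) = f_0 + \sum_{i=1}^{d} f_i(X_i)
       + \sum_{1 \le i < j \le d} f_{ij}(X_i, X_j) + \dots + f_{1 \dots d}(X_1, \dots, X_d),
\]
\[
  \operatorname{Var}(Y) = \sum_{\emptyset \neq u \subseteq \{1,\dots,d\}} \operatorname{Var}\bigl(f_u(X_u)\bigr),
  \qquad
  S_u = \frac{\operatorname{Var}\bigl(f_u(X_u)\bigr)}{\operatorname{Var}(Y)} .
\]
```
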
6

Horiguchi, Akira. "Bayesian Additive Regression Trees: Sensitivity Analysis and Multiobjective Optimization." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1606841319315633.

7

Masinde, Brian. "Birds' Flight Range. : Sensitivity Analysis." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166248.

Abstract:
'Flight' is a program that uses flight mechanics to estimate the flight range of birds. This program, used by ornithologists, is only available for Windows OS. It requires manual entry of body measurements and constants (one observation at a time), which is time-consuming. Therefore, the first task is to implement the methods in R, a programming language that runs on various platforms. The resulting package, named flying, has three advantages: first, it can estimate the flight range of multiple bird observations; second, it makes it easier to experiment with different settings (e.g. constants) in comparison to Flight; and third, it is open-source, making contribution relatively easy. Uncertainty and global sensitivity analyses are carried out on body measurements separately and with various constants. In doing so, the most influential body variables and constants are discovered. This task would have been nearly impossible to undertake using 'Flight'. A comparison is made amongst the results from a crude partitioning method, generalized additive models, gradient boosting machines and a quasi-Monte Carlo method. All of these are based on Sobol's method for variance decomposition. The results show that fat mass drives the simulations, with other inputs playing a secondary role (for example mechanical conversion efficiency and body drag coefficient).
8

Seol, Huynsoo. "Sensitivity of five Rasch-model-based fit indices to selected person and item aberrances : a simulation study /." The Ohio State University, 1998. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487949508369046.

9

Heredia, Guzman Maria Belen. "Contributions to the calibration and global sensitivity analysis of snow avalanche numerical models." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALU028.

Abstract:
Une avalanche de neige est un danger naturel défini comme une masse de neige en mouvement rapide. Depuis les années 30, scientifiques conçoivent des modèles d'avalanche de neige pour décrire ce phénomène. Cependant, ces modèles dépendent de certains paramètres d'entrée mal connus qui ne peuvent pas être mesurés. Pour mieux comprendre les paramètres d'entrée du modèle et les sorties du modèle, les objectifs de cette thèse sont (i) de proposer un cadre pour calibrer les paramètres d'entrée et (ii) de développer des méthodes pour classer les paramètres d'entrée en fonction de leur importance dans le modèle en tenant compte la nature fonctionnelle des sorties. Dans ce cadre, nous développons des méthodes statistiques basées sur l'inférence bayésienne et les analyses de sensibilité globale. Nos développements sont illustrés sur des cas de test et des données réelles des avalanches de neige.D'abord, nous proposons une méthode d'inférence bayésienne pour récupérer la distribution des paramètres d'entrée à partir de séries chronologiques de vitesse d'avalanche ayant été collectées sur des sites de test expérimentaux. Nos résultats montrent qu'il est important d'inclure la structure d'erreur (dans notre cas l'autocorrélation) dans la modélisation statistique afin d'éviter les biais dans l'estimation des paramètres de frottement.Deuxièmement, pour identifier les paramètres d'entrée importants, nous développons deux méthodes basées sur des mesures de sensibilité basées sur la variance. Pour la première méthode, nous supposons que nous avons un échantillon de données et nous voulons estimer les mesures de sensibilité avec cet échantillon. Dans ce but, nous développons une procédure d'estimation non paramétrique basée sur l'estimateur de Nadaraya-Watson pour estimer les indices agrégés de Sobol. Pour la deuxième méthode, nous considérons le cadre où l'échantillon est obtenu à partir de règles d'acceptation/rejet correspondant à des contraintes physiques. L'ensemble des paramètres d'entrée devient dépendant du fait de l'échantillonnage d'acceptation-rejet, nous proposons donc d'estimer les effets de Shapley agrégés (extension des effets de Shapley à des sorties multivariées ou fonctionnelles). Nous proposons également un algorithme pour construire des intervalles de confiance bootstrap. Pour l'application du modèle d'avalanche de neige, nous considérons différents scénarios d'incertitude pour modéliser les paramètres d'entrée. Dans nos scénarios, la position et le volume de départ de l'avalanche sont les entrées les plus importantes.Nos contributions peuvent aider les spécialistes des avalanches à (i) prendre en compte la structure d'erreur dans la calibration du modèle et (ii) proposer un classementdes paramètres d'entrée en fonction de leur importance dans les modèles en utilisant des approches statistiques
A snow avalanche is a natural hazard defined as a snow mass in fast motion. Since the thirties, scientists have been designing snow avalanche models to describe this phenomenon. However, these models depend on some poorly known input parameters that cannot be measured. To better understand model input parameters and model outputs, the aims of this thesis are (i) to propose a framework to calibrate input parameters and (ii) to develop methods to rank input parameters according to their importance in the model, taking into account the functional nature of the outputs. For these two purposes, we develop statistical methods based on Bayesian inference and global sensitivity analyses. All the developments are illustrated on test cases and real snow avalanche data. First, we propose a Bayesian inference method to retrieve the input parameter distribution from avalanche velocity time series collected at experimental test sites. Our results show that it is important to include the error structure (in our case the autocorrelation) in the statistical modeling in order to avoid bias in the estimation of friction parameters. Second, to identify important input parameters, we develop two methods based on variance-based measures. For the first method, we suppose that we have a given data sample and we want to estimate sensitivity measures with this sample. For this purpose, we develop a nonparametric estimation procedure based on the Nadaraya-Watson kernel smoother to estimate aggregated Sobol' indices. For the second method, we consider the setting where the sample is obtained from acceptance/rejection rules corresponding to physical constraints. The set of input parameters becomes dependent due to the acceptance-rejection sampling, so we propose to estimate aggregated Shapley effects (an extension of Shapley effects to multivariate or functional outputs). We also propose an algorithm to construct bootstrap confidence intervals. For the snow avalanche model application, we consider different uncertainty scenarios to model the input parameters. Under our scenarios, the avalanche release position and volume are the most crucial inputs. Our contributions should help avalanche scientists to (i) account for the error structure in model calibration and (ii) rank input parameters according to their importance in the models using statistical methods.
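
To make the given-data ingredient concrete, here is a minimal Python sketch of the Nadaraya-Watson kernel smoother used to estimate a scalar first-order index Var(E[Y|X_i])/Var(Y) from an existing sample. It only illustrates the basic building block for a scalar output, not the aggregated (functional-output) Sobol' indices or Shapley effects developed in the thesis, and the bandwidth rule is a generic assumption.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Gaussian-kernel Nadaraya-Watson estimate of E[Y | X = x] at the points x_eval."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

def given_data_first_order(x, y, bandwidth=None):
    """Rough given-data estimate of S_i = Var(E[Y|X_i]) / Var(Y) for one scalar input."""
    if bandwidth is None:
        bandwidth = 1.06 * x.std() * len(x) ** (-1 / 5)   # Silverman's rule of thumb
    m = nadaraya_watson(x, y, x, bandwidth)               # smoothed conditional mean at the sample points
    return m.var(ddof=1) / y.var(ddof=1)
```
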
10

Tissot, Jean-yves. "Sur la décomposition ANOVA et l'estimation des indices de Sobol'. Application à un modèle d'écosystème marin." Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM064/document.

Abstract:
Dans les domaines de la modélisation et de la simulation numérique, les simulateurs développés prennent parfois en compte de nombreux paramètres dont l'impact sur les sorties n'est pas toujours bien connu. L'objectif principal de l'analyse de sensibilité est d'aider à mieux comprendre comment les sorties d'un modèle sont sensibles aux variations de ces paramètres. L'approche la mieux adaptée pour appréhender ce problème dans le cas de modèles potentiellement complexes et fortement non linéaires repose sur la décomposition ANOVA et les indices de Sobol'. En particulier, ces derniers permettent de quantifier l'influence de chacun des paramètres sur la réponse du modèle. Dans cette thèse, nous nous intéressons au problème de l'estimation des indices de Sobol'. Dans une première partie, nous réintroduisons de manière rigoureuse des méthodes existantes au regard de l'analyse harmonique discrète sur des groupes cycliques et des tableaux orthogonaux randomisés. Cela nous permet d'étudier les propriétés théoriques de ces méthodes et de les généraliser. Dans un second temps, nous considérons la méthode de Monte Carlo spécifique à l'estimation des indices de Sobol' et nous introduisons une nouvelle approche permettant de l'améliorer. Cette amélioration est construite autour des hypercubes latins et permet de réduire le nombre de simulations nécessaires pour estimer les indices de Sobol' par cette méthode. En parallèle, nous mettons en pratique ces différentes méthodes sur un modèle d'écosystème marin
In the fields of modeling and numerical simulation, simulators generally depend on several input parameters whose impact on the model outputs is not always well known. The main goal of sensitivity analysis is to better understand how the model outputs are sensitive to parameter variations. One of the most competitive methods to handle this problem when complex and potentially highly nonlinear models are considered is based on the ANOVA decomposition and the Sobol' indices. More specifically, the latter allow one to quantify the impact of each parameter on the model response. In this thesis, we are interested in the issue of the estimation of the Sobol' indices. In the first part, we revisit existing methods in a rigorous way in light of discrete harmonic analysis on cyclic groups and randomized orthogonal arrays. This allows us to study the theoretical properties of these methods and to introduce generalizations. In a second part, we study the Monte Carlo method for the Sobol' indices and we introduce a new approach to reduce the number of simulations required by this method. In parallel with this theoretical work, we apply these methods to a marine ecosystem model.
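
Since Latin hypercube sampling is the ingredient behind the improvement mentioned above, a minimal Python sketch of a basic Latin hypercube sample on the unit hypercube follows; it only illustrates the sampling idea, not the specific construction used in the thesis.

```python
import numpy as np

def latin_hypercube(n, dim, rng=None):
    """Basic Latin hypercube sample of size n on [0, 1]^dim:
    exactly one point falls in each of the n strata of every coordinate,
    and the strata are paired across coordinates at random."""
    rng = np.random.default_rng(rng)
    jitter = rng.random((n, dim))                          # position inside each stratum
    strata = np.column_stack([rng.permutation(n) for _ in range(dim)])
    return (strata + jitter) / n
```
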
11

Nasralla, Eman Abdulwahhab. "Metabolic syndrome and relation of obesity indices to biomarkers of insulin sensitivity and inflammation among Qatari men and women : the Qatar Biobank Project." Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/34919.

Abstract:
Background: Increased body fatness, along with other conditions typical of the metabolic syndrome (MetS) such as insulin resistance, has become more prevalent in Qatar due to rapid transitions in the Qatari population's lifestyle in the last few decades. The government of Qatar is seeking to improve the public's health; however, epidemiological studies on Qataris are limited. Aims: This research aims to 1) describe the features of the MetS and its determinants among a sample of Qataris, 2) explore the difference between four obesity subgroups regarding selected factors of metabolic health and 3) investigate the association of total and central body fatness indices with C-peptide and glycated haemoglobin A1c (HbA1c) as insulin sensitivity biomarkers and with fibrinogen as a biomarker of inflammation. Methods: This is a cross-sectional study of 879 Qatari men and women from the Qatar Biobank pilot phase. The MetS prevalence was estimated using the National Cholesterol Education Programme Adult Panel III (NCEP ATPIII), International Diabetes Federation (IDF) and the harmonised criteria. Metabolic health status for the four obesity subgroups (metabolically-healthy normal weight, metabolically-abnormal normal weight, metabolically-healthy obese and metabolically-abnormal obese) was identified using the harmonised guidelines. Multiple linear regression analyses were conducted to test the relation of the obesity indices (including body mass index, body fat percentage, waist circumference (WC) and waist-hip ratio) to C-peptide, HbA1c and fibrinogen. Results: The prevalence of the MetS was 18.4% (NCEP ATPIII), 27.0% (IDF), and 28.9% (harmonised definition). Central obesity was the most prevalent determinant of the MetS. There were significant differences in multiple factors of metabolic health for each of the four obesity subgroups. There were strong positive associations between the examined obesity indices and C-peptide, HbA1c and fibrinogen. WC had the strongest positive association with C-peptide, HbA1c and fibrinogen compared to the other examined body fatness indices. Conclusions: The current findings suggest that future interventions should target reducing WC in Qataris. The four obesity subgroups differed significantly regarding multiple factors of metabolic health; this implies that they might need to be treated differently. More epidemiological studies are needed to aid the Qatari government in their decision making to improve the public's health.
12

Lu, Rong. "Statistical Methods for Functional Genomics Studies Using Observational Data." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1467830759.

13

Broto, Baptiste. "Sensitivity analysis with dependent random variables : Estimation of the Shapley effects for unknown input distribution and linear Gaussian models." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASS119.

Abstract:
L'analyse de sensibilité est un outil puissant qui permet d'analyser des modèles mathématiques et des codes de calculs. Elle révèle les variables d'entrées les plus influentes sur la variable de sortie, en leur affectant une valeur appelée "indice de sensibilité". Dans ce cadre, les effets de Shapley, récemment définis par Owen, permettent de gérer des variables d'entrées dépendantes. Cependant, l'estimation de ces indices ne peut se faire que dans deux cadres très particuliers : lorsque la loi du vecteur d'entrée est connue ou lorsque les entrées sont gaussiennes et le modèle est linéaire. Cette thèse se divise en deux parties. Dans la première partie, l'objectif est d'étendre les estimateurs des effets de Shapley lorsque seul un échantillon des entrées est disponible et leur loi est inconnue. Dans la deuxième partie porte sur le cas linéaire gaussien. Le problème de la grande dimension est abordé et des solutions sont proposées lorsque les variables forment des groupes indépendants. Enfin, l'étude montre comment les effets de Shapley du cadre linéaire gaussien peuvent estimer ceux d'un cadre plus général
Sensitivity analysis is a powerful tool to study mathematical models and computer codes. It reveals the input variables that have the most impact on the output variable, by assigning to the inputs values that we call "sensitivity indices". In this setting, the Shapley effects, recently defined by Owen, make it possible to handle dependent input variables. However, one can only estimate these indices in two particular cases: when the distribution of the input vector is known, or when the inputs are Gaussian and the model is linear. This thesis can be divided into two parts. In the first part, the aim is to extend the estimation of the Shapley effects when only a sample of the inputs is available and their distribution is unknown. The second part focuses on the linear Gaussian framework. The high-dimensional problem is emphasized and solutions are suggested when there are independent groups of variables. Finally, it is shown how the values of the Shapley effects in the linear Gaussian framework can estimate the Shapley effects in more general settings.
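
For reference, the Shapley effects mentioned above (introduced by Owen for sensitivity analysis) are usually written as follows, with c(u) the closed Sobol' index of a subset u of the d inputs; this is the standard definition from the literature, not a formula taken from the thesis.

```latex
% Closed Sobol' index of a subset u of inputs, and the Shapley effect of input X_i.
\[
  c(u) = \frac{\operatorname{Var}\bigl(\mathbb{E}[\,Y \mid X_u\,]\bigr)}{\operatorname{Var}(Y)},
  \qquad
  \mathrm{Sh}_i = \frac{1}{d} \sum_{u \subseteq \{1,\dots,d\} \setminus \{i\}}
      \binom{d-1}{|u|}^{-1} \bigl( c(u \cup \{i\}) - c(u) \bigr).
\]
```
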
14

Christian, Steve Clarence. "A sensitivity analysis of a heuristic model used for the placement allocation of utilities in transportation right-of-way corridors." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000501.

15

Gilquin, Laurent. "Échantillonnages Monte Carlo et quasi-Monte Carlo pour l'estimation des indices de Sobol' : application à un modèle transport-urbanisme." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM042/document.

Abstract:
Le développement et l'utilisation de modèles intégrés transport-urbanisme sont devenus une norme pour représenter les interactions entre l'usage des sols et le transport de biens et d'individus sur un territoire. Ces modèles sont souvent utilisés comme outils d'aide à la décision pour des politiques de planification urbaine.Les modèles transport-urbanisme, et plus généralement les modèles mathématiques, sont pour la majorité conçus à partir de codes numériques complexes. Ces codes impliquent très souvent des paramètres dont l'incertitude est peu connue et peut potentiellement avoir un impact important sur les variables de sortie du modèle.Les méthodes d'analyse de sensibilité globales sont des outils performants permettant d'étudier l'influence des paramètres d'un modèle sur ses sorties. En particulier, les méthodes basées sur le calcul des indices de sensibilité de Sobol' fournissent la possibilité de quantifier l'influence de chaque paramètre mais également d'identifier l'existence d'interactions entre ces paramètres.Dans cette thèse, nous privilégions la méthode dite à base de plans d'expériences répliqués encore appelée méthode répliquée. Cette méthode a l'avantage de ne requérir qu'un nombre relativement faible d'évaluations du modèle pour calculer les indices de Sobol' d'ordre un et deux.Cette thèse se focalise sur des extensions de la méthode répliquée pour faire face à des contraintes issues de notre application sur le modèle transport-urbanisme Tranus, comme la présence de corrélation entre paramètres et la prise en compte de sorties multivariées.Nos travaux proposent également une approche récursive pour l'estimation séquentielle des indices de Sobol'. L'approche récursive repose à la fois sur la construction itérative d'hypercubes latins et de tableaux orthogonaux stratifiés et sur la définition d'un nouveau critère d'arrêt. Cette approche offre une meilleure précision sur l'estimation des indices tout en permettant de recycler des premiers jeux d'évaluations du modèle. Nous proposons aussi de combiner une telle approche avec un échantillonnage quasi-Monte Carlo.Nous présentons également une application de nos contributions pour le calage du modèle de transport-urbanisme Tranus
Land Use and Transportation Integrated (LUTI) models have become a norm for representing the interactions between land use and the transportation of goods and people in a territory. These models are mainly used to evaluate alternative planning scenarios, simulating their impact on land cover and travel demand. LUTI models and other mathematical models used in various fields are most of the time based on complex computer codes. These codes often involve poorly known inputs whose uncertainty can have significant effects on the model outputs. Global sensitivity analysis methods are useful tools to study the influence of the model inputs on its outputs. Among the large number of available approaches, the variance-based method introduced by Sobol' allows one to calculate sensitivity indices called Sobol' indices. These indices quantify the influence of each model input on the outputs and can detect existing interactions between inputs. In this framework, we favor a particular method based on replicated designs of experiments, called the replication method. This method appears to be the most suitable for our application and is advantageous as it requires a relatively small number of model evaluations to estimate first-order and second-order Sobol' indices. This thesis focuses on extensions of the replication method to face constraints arising in our application on the LUTI model Tranus, such as the presence of dependency among the model inputs, as well as multivariate outputs. Aside from that, we propose a recursive approach to sequentially estimate Sobol' indices. The recursive approach is based on the iterative construction of stratified designs, Latin hypercubes and orthogonal arrays, and on the definition of a new stopping criterion. With this approach, more accurate Sobol' estimates are obtained while recycling previous sets of model evaluations. We also propose to combine such an approach with quasi-Monte Carlo sampling. An application of our contributions to the LUTI model Tranus is presented.
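
The replication idea can be illustrated for first-order indices with the short Python sketch below: two designs share the same values in every column but in different orders, so re-ordering the rows of the second design makes any chosen column match the first design and yields a pick-freeze pair from only two sets of model runs. This is a simplified sketch assuming independent uniform inputs; it does not reproduce the thesis's extensions to dependent inputs, multivariate outputs, second-order indices or the recursive designs.

```python
import numpy as np

def replicated_first_order(model, n, dim, rng=None):
    """First-order Sobol' indices from two replicated designs (2*n model runs in total)."""
    rng = np.random.default_rng(rng)
    X1 = rng.random((n, dim))
    # Second design: every column holds the same values as in X1, in a new random order.
    X2 = np.column_stack([rng.permutation(X1[:, j]) for j in range(dim)])
    y1, y2 = model(X1), model(X2)
    s = np.empty(dim)
    for i in range(dim):
        # Re-order the rows of the second design so that its column i matches column i of X1.
        order = np.argsort(X2[:, i])[np.argsort(np.argsort(X1[:, i]))]
        yi = y2[order]
        # (y1, yi) share only coordinate i, so their covariance estimates Var(E[Y | X_i]).
        s[i] = np.cov(y1, yi)[0, 1] / y1.var(ddof=1)
    return s
```
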
16

Derennes, Pierre. "Mesures de sensibilité de Borgonovo : estimation des indices d'ordre un et supérieur, et application à l'analyse de fiabilité." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30039.

Abstract:
Dans de nombreuses disciplines, un système complexe est modélisé par une fonction boîte noire dont le but est de simuler le comportement du système réel. Le système est donc représenté par un modèle entrée-sortie, i.e, une relation entre la sortie Y (ce que l'on observe sur le système) et un ensemble de paramètres extérieurs Xi (représentant typiquement des variables physiques). Ces paramètres sont usuellement supposés aléatoires pour prendre en compte les incertitudes phénoménologiques inhérentes au système. L'analyse de sensibilité globale joue alors un rôle majeur dans la gestion de ces incertitudes et dans la compréhension du comportement du système. Cette étude repose sur l'estimation de mesures d'importance dont le rôle est d'identifier et de classifier les différentes entrées en fonction de leur influence sur la sortie du modèle. Les indices de Sobol, dont l'objectif est de quantifier la contribution d'une variable d'entrée (ou d'un groupe de variables) à la variance de la sortie, figurent parmi les mesures d'importance les plus considérées. Néanmoins, la variance est une représentation potentiellement restrictive de la variabilité du modèle de sortie. Le sujet central de cette thèse porte sur une méthode alternative, introduite par Emanuele Borgonovo, et qui est basée sur l'analyse de l'ensemble de la distribution de sortie. Les mesures d'importance de Borgonovo admettent des propriétés très utiles en pratique qui justifient leur récent gain d'intérêt, mais leur estimation constitue un problème complexe. En effet, la définition initiale des indices de Borgonovo fait intervenir les densités inconditionnelles et conditionnelles de la sortie du modèle, malheureusement inconnues en pratique. Dès lors, les premières méthodes proposées menaient à un budget de simulation élevé, la fonction boite noire pouvant être très coûteuse à évaluer. La première contribution de cette thèse consiste à proposer de nouvelles méthodologies pour estimer les mesures d'importance de Borgonovo du premier ordre, i.e, les indices mesurant l'influence de la sortie Y relativement à une entrée Xi scalaire. Dans un premier temps, nous choisissons d'adopter la réinterprétation des indices de Borgonovo en terme de mesure de dépendance, i.e, comme une distance entre la densité jointe de Xi et Y et la distribution produit. En outre, nous développons une procédure d'estimation combinant échantillonnage préférentiel et approximation par noyau gaussien de la densité de sortie et de la densité jointe. Cette approche permet de calculer l'ensemble des indices de Borgonovo d'ordre 1, et ce, avec un faible budget de simulation indépendant de la dimension du modèle. Cependant, l'utilisation de l'estimation par noyau gaussien peut fournir des estimations imprécises dans le cas des distributions à queue lourde. Pour pallier ce problème, nous nous appuyons dans un second temps sur une autre définition des indices de Borgonovo reposant sur le formalisme des copules
In many disciplines, a complex system is modeled by a black box function whose purpose is to mimic the real system behavior. The system is then represented by an input-output model, i.e., a relationship between the output Y (the observation made on the system) and a set of external parameters Xi (typically representing physical variables). These parameters are usually assumed to be random in order to take phenomenological uncertainties into account. Global sensitivity analysis (GSA) then plays a crucial role in the handling of these uncertainties and in the understanding of the system behavior. This study is based on the estimation of importance measures, which aim at identifying and ranking the different inputs with respect to their influence on the model output. Variance-based sensitivity indices are among the most widely used GSA measures. They are based on Sobol's indices, which express the share of variance of the output that is due to a given input or input combination. However, by definition they only study the impact on the second-order moment of the output, which may be a restrictive representation of the whole output distribution. The central subject of this thesis is an alternative method, introduced by Emanuele Borgonovo, which is based on the analysis of the whole output distribution. Borgonovo's importance measures have very convenient properties that justify their recent gain of interest, but their estimation is a challenging task. Indeed, the initial definition of the Borgonovo indices involves the unconditional and conditional densities of the model output, which are unfortunately unknown in practice. Thus, the first proposed methods led to a high computational burden, especially since the black box function may be very costly to evaluate. The first contribution of this thesis consists in proposing new methodologies for estimating first-order Borgonovo importance measures, which quantify the influence of a scalar input Xi on the output Y. First, we choose to adopt the reinterpretation of the Borgonovo indices in terms of a measure of dependence, i.e., as a distance between the joint density of Xi and Y and the product distribution. In addition, we develop an estimation procedure combining importance sampling and Gaussian kernel approximation of the output density and the joint density. This approach allows the computation of all first-order Borgonovo indices with a low simulation budget, independent of the model dimension. However, the use of Gaussian kernel estimation may provide inaccurate estimates for heavy-tailed distributions. To overcome this problem, we consider an alternative definition of the Borgonovo indices based on the copula formalism.
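
For reference, Borgonovo's first-order importance measure discussed above, and its equivalent dependence-measure form (a distance between the joint density of (X_i, Y) and the product of the marginals) that motivates the estimation strategy, are usually written as:

```latex
% Borgonovo's delta importance measure of input X_i and its dependence-measure form.
\[
  \delta_i = \tfrac{1}{2}\, \mathbb{E}_{X_i}\!\left[ \int \bigl| f_Y(y) - f_{Y \mid X_i}(y) \bigr|\, dy \right]
           = \tfrac{1}{2} \iint \bigl| f_{X_i, Y}(x, y) - f_{X_i}(x)\, f_Y(y) \bigr|\, dx\, dy .
\]
```
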
17

Naimi, Foued. "Nouveaux indices de suppression de la lipolyse par l'insuline déterminés lors de l'hyperglycémie provoquée par voie orale : comparaisons avec le clamp euglycémique-hyperinsulinémique et les paramètres métaboliques chez les femmes." Mémoire, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/9546.

Abstract:
Résumé : Une dysrégulation de la lipolyse des tissus adipeux peut conduire à une surexposition des tissus non-adipeux aux acides gras non-estérifiés (AGNE), qui peut mener à un certain degré de lipotoxicité dans ces tissus. La lipotoxicité constitue, par ailleurs, l’une des causes majeures du développement de la résistance à l’insuline et du diabète de type 2. En plus de ses fonctions glucorégulatrices, l’insuline a pour fonction d’inhiber la lipolyse et donc de diminuer les niveaux d’AGNE en circulation, prévenant ainsi la lipotoxicité. Il n’y a pas d’étalon d’or pour mesurer la sensibilité de la lipolyse à l’insuline. Le clamp euglycémique hyperinsulinémique constitue la méthode étalon d’or pour évaluer la sensibilité du glucose à l’insuline mais il est aussi utilisé pour mesurer la suppression de la lipolyse par l’insuline. Par contre, cette méthode est couteuse et laborieuse, et ne peut pas s’appliquer à de grandes populations. Il existe aussi des indices pour estimer la fonction antilipolytique de l’insuline dérivés de l’hyperglycémie provoquée par voie orale (HGPO), un test moins dispendieux et plus simple à effectuer à grande échelle. Cette étude vise donc à : 1) Étudier la relation entre les indices de suppressibilité des AGNE par l’insuline dérivés du clamp et ceux dérivés de l’HGPO; et 2) Déterminer laquelle de ces mesures corrèle le mieux avec les facteurs connus comme étant reliés à la dysfonction adipeuse : paramètres anthropométriques et indices de dysfonction métabolique. Les résultats montrent que dans le groupe de sujets étudiés (n=29 femmes, 15 témoins saines et 14 femmes avec résistance à l’insuline car atteintes du syndrome des ovaires polykystiques), certains indices de sensibilité à l’insuline pour la lipolyse dérivés de l’HGPO corrèlent bien avec ceux dérivés du clamp euglycémique hyperinsulinémique. Parmi ces indices, celui qui corrèle le mieux avec les indices du clamp et les paramètres anthropométriques et de dysfonction adipeuse est le T50[indice inférieur AGNE] (temps nécessaire pour diminuer de 50% le taux de base – à jeun – des AGNE). Nos résultats suggèrent donc que l’HGPO, facile à réaliser, peut être utilisée pour évaluer la sensibilité de la lipolyse à l’insuline. Nous pensons que la lipo-résistance à l’insuline peut être facilement quantifiée en clinique humaine.
Abstract : It has been shown that a dysfunctional regulation of adipose-tissue lipolysis can lead to overexposure of non-adipose tissues to non-esterified fatty acids (NEFA), leading to lipotoxicity. Lipotoxicity is considered a key factor in the development of insulin resistance and type 2 diabetes. Insulin regulates glucose metabolism but also NEFA storage and release. To our knowledge, there is no gold standard for evaluating insulin sensitivity for lipolysis. The gold standard to measure insulin sensitivity for glucose is the euglycemic-hyperinsulinemic clamp. This method is simple to interpret because it achieves static levels of metabolic parameters at the end of each step of the clamp. The major limit of the clamp is that it is time-consuming, expensive and cannot be used on large populations. On the other hand, the oral glucose tolerance test (OGTT) is a dynamic test also used to estimate insulin-mediated glucose disappearance after ingestion of 75 g of glucose. Since the OGTT is easier to use, less expensive and can be suggested in large cohort studies, its potential use has been suggested to estimate insulin sensitivity for lipolysis as well. This work is the first to validate the use of simple indices derived from the OGTT to estimate insulin sensitivity for lipolysis against the euglycemic clamp and adipose-tissue dysfunction in women. The results of this study clearly show, in a group of 29 women (15 normal and 14 with polycystic ovary syndrome, who are used to increase the range of insulin resistance), that T50NEFA (time to suppress 50% of NEFA baseline levels) during the OGTT is the best index associated with glucose clamp indices and clinical markers related to adipose-tissue dysfunction and metabolic parameters. T50NEFA (OGTT) was also better associated with central adiposity and metabolic parameters than clamp-derived indices. Since the OGTT is much easier to perform and is less expensive than the clamp technique, the use of the OGTT to calculate T50NEFA seems to be a valid method to assess the antilipolytic action of insulin in large cohorts.
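
As a concrete reading of the T50NEFA index defined above (the time needed for NEFA to fall to 50% of the fasting baseline during the OGTT), the Python sketch below shows one way such a time could be computed from sampled concentrations by linear interpolation; the sampling times, units and values are hypothetical and this is not the study's actual computation protocol.

```python
import numpy as np

def t50_nefa(times_min, nefa, baseline=None):
    """Time at which NEFA first falls to 50% of baseline, by linear interpolation.
    Returns np.nan if the 50% threshold is never reached."""
    times_min = np.asarray(times_min, dtype=float)
    nefa = np.asarray(nefa, dtype=float)
    baseline = nefa[0] if baseline is None else baseline
    target = 0.5 * baseline
    below = np.nonzero(nefa <= target)[0]
    if below.size == 0:
        return np.nan
    k = below[0]
    if k == 0:
        return times_min[0]
    # Linear interpolation between the last point above and the first point at/below the target.
    t0, t1 = times_min[k - 1], times_min[k]
    c0, c1 = nefa[k - 1], nefa[k]
    return t0 + (c0 - target) / (c0 - c1) * (t1 - t0)

# Hypothetical OGTT sampling times (min) and NEFA concentrations (mmol/L)
print(t50_nefa([0, 30, 60, 90, 120], [0.62, 0.45, 0.28, 0.15, 0.10]))
```
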
18

Nzang, Essono Francine. "Approche géomatique de la variabilité spatio-temporelle de la contamination microbienne des eaux récréatives." Thèse, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/10211.

Abstract:
L’objectif général de cette thèse est de caractériser la dynamique des transferts des bactéries fécales à l’aide d’une modélisation spatio-temporelle, à l’échelle du bassin versant (BV) dans une région agricole et à l’échelle événementielle. Ce projet vise à mieux comprendre l'influence des processus hydrologiques, les facteurs environnementaux et temporels impliqués dans l’explication des épisodes de contamination microbienne des eaux récréatives. Premièrement, un modèle bayésien hiérarchique a été développé pour quantifier et cartographier les niveaux de probabilité des eaux à être contaminées par des effluents agricoles, sur la base des données spectrales et des variables géomorphologiques. Par cette méthode, nous avons pu calculer les relations pondérées entre les concentrations d’Escherichia coli et la distribution de l’ensemble des paramètres agro-pédo-climatiques qui régissent sa propagation. Les résultats ont montré que le modèle bayésien développé peut être utilisé en mode prédictif de la contamination microbienne des eaux récréatives. Ce modèle avec un taux de succès de 71 % a mis en évidence le rôle significatif joué par la pluie qui est la cause principale du transport des polluants. Deuxièmement, le modèle bayésien a fait l’objet d'une analyse de sensibilité liée aux paramètres spatiaux, en utilisant les indices de Sobol. Cette démarche a permis (i) la quantification des incertitudes sur les variables pédologiques, d’occupation du sol et de la distance et (2) la propagation de ces incertitudes dans le modèle probabiliste c'est-à-dire le calcul de l’erreur induite dans la sortie par les incertitudes des entrées spatiales. Enfin, une analyse de sensibilité des simulations aux différentes sources d’incertitude a été effectuée pour évaluer la contribution de chaque facteur sur l’incertitude globale en prenant en compte leurs interactions. Il apparaît que sur l’ensemble des scénarios, l’incertitude de la contamination microbienne dépend directement de la variabilité des sols argileux. Les indices de premier ordre de l’analyse de Sobol ont montré que parmi les facteurs les plus susceptibles d’influer la contamination microbienne, la superficie des zones agricoles est le premier facteur important dans l'évaluation du taux de coliformes. C’est donc sur ce paramètre que l’attention devra se porter dans le contexte de prévision d'une contamination microbienne. Ensuite, la deuxième variable la plus importante est la zone urbaine avec des parts de sensibilité d’environ 30 %. Par ailleurs, les estimations des indices totaux sont meilleures que celles des indices de premier ordre, ce qui signifie que l’impact des interactions paramétriques est nettement significatif pour la modélisation de la contamination microbienne Enfin, troisièmement, nous proposons de mettre en œuvre une modélisation de la variabilité temporelle de la contamination microbiologique du bassin versant du lac Massawippi, à partir du modèle AVSWAT. Il s'agit d'une modélisation couplant les composantes temporelles et spatiales qui caractérisent la dynamique des coliformes. La synthèse des principaux résultats démontrent que les concentrations de coliformes dans différents sous-bassins versants se révèlent influencées par l’intensité de pluie. La recherche a également permis de conclure que les meilleures performances en calage sont obtenues au niveau de l'optimisation multi-objective. 
Les résultats de ces travaux ouvrent des perspectives encourageantes sur le plan opérationnel en fournissant une compréhension globale de la dynamique de la contamination microbienne des eaux de surface.
Abstract : The aim of this study was to predict faecal contamination of water using a Bayesian probabilistic model, at the watershed scale in a farming area and at the event scale. This project aims to better understand the influence of hydrological, environmental and temporal factors involved in the explanation of microbial contamination episodes of recreational waters. First, a Bayesian probabilistic model, Weight of Evidence, was developed to identify and map the probability levels of waters being contaminated by agricultural effluents, on the basis of spectral data and geomorphologic variables. By this method, we were able to calculate weighted relationships between concentrations of Escherichia coli and the distribution of key agronomic, pedologic and climatic parameters that influence the spread of these microorganisms. The results showed that the Bayesian model that was developed can be used to predict microbial contamination of recreational waters. This model, with a success rate of 71%, highlighted the significant role played by rain, which is the main cause of pollutant transport. Secondly, the Bayesian probabilistic model was the subject of a sensitivity analysis related to the spatial parameters, using Sobol indices. This allowed (1) the quantification of uncertainties on the soil variables, land use and distance, and (2) the propagation of these uncertainties through the probabilistic model, that is to say, the calculation of the error induced in the output by the uncertainties of the spatial inputs. Lastly, a sensitivity analysis of the simulations to the various sources of uncertainty was performed to assess the contribution of each factor to the overall uncertainty, taking their interactions into account. It appears that across all the scenarios, the uncertainty of the microbial contamination is directly dependent on the variability of clay soils. The first-order Sobol indices showed that, among the factors most likely to influence microbial contamination, the area of farmland is the most important factor in assessing coliform levels. Attention should therefore focus on this parameter in the context of forecasting microbial contamination. The second most important variable is the urban area, with a sensitivity share of approximately 30%. Furthermore, the estimates of the total indices are better than those of the first-order indices, which means that the impact of parametric interactions is clearly significant for the modeling of microbial contamination. Thirdly, we propose to implement a temporal variability model of the microbiological contamination of the watershed of Lake Massawippi, based on the AVSWAT model. This is a model that couples the temporal and spatial components that characterize the dynamics of coliforms. The synthesis of the main results shows that concentrations of Escherichia coli in the different sub-watersheds are influenced by rain intensity. The research also concluded that the best calibration performance is obtained with multi-objective optimization. The results of this work open encouraging operational prospects by providing a comprehensive understanding of the dynamics of microbial contamination of surface water.
19

Solís, Maikol. "Conditional covariance estimation for dimension reduction and sensitivity analysis." Toulouse 3, 2014. http://thesesups.ups-tlse.fr/2354/.

Abstract:
Cette thèse se concentre autour du problème de l'estimation de matrices de covariance conditionnelles et ses applications, en particulier sur la réduction de dimension et l'analyse de sensibilités. Dans le Chapitre 2 nous plaçons dans un modèle d'observation de type régression en grande dimension pour lequel nous souhaitons utiliser une méthodologie de type régression inverse par tranches. L'utilisation d'un opérateur fonctionnel, nous permettra d'appliquer une décomposition de Taylor autour d'un estimateur préliminaire de la densité jointe. Nous prouverons deux choses : notre estimateur est asymptoticalement normale avec une variance que dépend de la partie linéaire, et cette variance est efficace selon le point de vue de Cramér-Rao. Dans le Chapitre 3, nous étudions l'estimation de matrices de covariance conditionnelle dans un premier temps coordonnée par coordonnée, lesquelles dépendent de la densité jointe inconnue que nous remplacerons par un estimateur à noyaux. Nous trouverons que l'erreur quadratique moyenne de l'estimateur converge à une vitesse paramétrique si la distribution jointe appartient à une classe de fonctions lisses. Sinon, nous aurons une vitesse plus lent en fonction de la régularité de la densité de la densité jointe. Pour l'estimateur de la matrice complète, nous allons appliquer une transformation de régularisation de type "banding". Finalement, dans le Chapitre 4, nous allons utiliser nos résultats pour estimer des indices de Sobol utilisés en analyses de sensibilité. Ces indices mesurent l'influence des entrées par rapport a la sortie dans modèles complexes. L'avantage de notre implémentation est d'estimer les indices de Sobol sans l'utilisation de coûteuses méthodes de type Monte-Carlo. Certaines illustrations sont présentées dans le chapitre pour montrer les capacités de notre estimateur
This thesis focuses on the estimation of conditional covariance matrices and their applications, in particular in dimension reduction and sensitivity analysis. In Chapter 2, we are in a context of high-dimensional nonlinear regression. The main objective is to use the sliced inverse regression methodology. Using a functional operator depending on the joint density, we apply a Taylor decomposition around a preliminary estimator. We prove two things: our estimator is asymptotically normal with a variance depending only on the linear part, and this variance is efficient from the Cramér-Rao point of view. In Chapter 3, we study the estimation of conditional covariance matrices, first coordinate-wise, where those parameters depend on the unknown joint density, which we replace by a kernel estimator. We prove that the mean squared error of the nonparametric estimator has a parametric rate of convergence if the joint distribution belongs to some class of smooth functions. Otherwise, we get a slower rate depending on the regularity of the model. For the estimator of the whole matrix, we apply a "banding"-type regularization. Finally, in Chapter 4, we apply our results to estimate the Sobol or sensitivity indices. These indices measure the influence of the inputs with respect to the output in complex models. The advantage of our implementation is that we can estimate the Sobol indices without using computationally expensive Monte Carlo methods. Some illustrations presented in the chapter show the capabilities of our estimator.
20

Abily, Morgan. "Modélisation hydraulique à surface libre haute-résolution : utilisation de données topographiques haute-résolution pour la caractérisation du risque inondation en milieux urbains et industriels." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4121/document.

Abstract:
Pour l'évaluation du risque inondation, l’emploi de modèles numériques 2D d’hydraulique à surface libre reposant sur la résolution des équations de Saint-Venant est courant. Ces modèles nécessitent entre autre la description de la topographie de la zone d’étude. Sur des secteurs urbains denses ou des sites industriels, cette topographie complexe peut être appréhendée de plus en plus finement via des technologies dédiées telles que le LiDAR et la photogrammétrie. Les Modèles Numériques d'Elévation Haute Résolution (HR MNE) générés à partir de ces technologies, deviennent employés dans les études d’évaluation du risque inondation. Cette thèse étudie les possibilités, les avantages et les limites, liées à l'intégration des données topographiques HR en modélisation 2D du risque inondation en milieux urbains et industriels. Des modélisations HR de scénarios d'inondation d'origines pluviale ou fluviale sont testés en utilisant des HR MNE crées à partir de données LiDAR et photo-interprétées. Des codes de calculs (Mike 21, Mike 21 FM, TELEMAC-2D, FullSWOF_2D) offrant des moyens différent d'intégration de la donnée HR et basés sur des méthodes numériques variées sont utilisés. La valeur ajoutée de l'intégration des éléments fins du sur-sol impactant les écoulements est démontrée. Des outils pour appréhender les incertitudes liées à l'emploi de ces données HR sont développés et une analyse globale de sensibilité est effectuée. Les cartes d'indices de sensibilité (Sobol) produites soulignent et quantifient l'importance des choix du modélisateur dans la variance des résultats des modèles d'inondation HR ainsi que la variabilité spatiale de l'impact des paramètres incertains testés
High-resolution (infra-metric) topographic data, including LiDAR and photo-interpreted datasets, are becoming commonly available over a large range of spatial extents, such as the municipality or industrial-site scale. These datasets are promising for high-resolution (HR) Digital Elevation Model (DEM) generation, allowing the inclusion of fine above-ground structures that influence overland flow hydrodynamics in urban environments. DEMs are a key input in hydroinformatics for free-surface hydraulic modelling using standard numerical codes based on the 2D Shallow Water Equations (SWEs). Nonetheless, several categories of technical and numerical challenges arise from using this type of data with standard 2D SWE numerical codes. The objective of this thesis is to assess the possibilities, advantages and limits of HR topographic data use within standard categories of 2D hydraulic numerical modelling tools for flood hazard assessment. Concepts of HR topographic data and 2D SWE-based numerical modelling are recalled. HR modelling is performed for (i) intense runoff and (ii) river flood events using LiDAR and photo-interpreted datasets. Tests encompassing HR surface elevation data in standard modelling tools range from the industrial-site scale to a megacity district scale (Nice, France). Several standard 2D SWE-based codes are tested (Mike 21, Mike 21 FM, TELEMAC-2D, FullSWOF_2D). Tools and methods for assessing uncertainty aspects of 2D SWE-based models are developed to perform a spatial global sensitivity analysis related to HR topographic data use. Results show the importance of modeller choices regarding the ways the HR topographic information is integrated in the models.
21

Novák, Lukáš. "Pravděpodobnostní modelování smykové únosnosti předpjatých betonových nosníků: Citlivostní analýza a semi-pravděpodobnostní metody návrhu." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2018. http://www.nusl.cz/ntk/nusl-372051.

Abstract:
This diploma thesis focuses on advanced reliability analysis of structures solved by non-linear finite element analysis. Specifically, it describes semi-probabilistic methods for determining the design value of resistance, sensitivity analysis, and a surrogate model built by polynomial chaos expansion. The described methods are applied to a prestressed reinforced concrete roof girder.
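
Since the abstract mentions a polynomial chaos expansion (PCE) surrogate, the standard form of such an expansion and the way Sobol' indices follow directly from its coefficients are recalled below; this is the usual post-processing result from the PCE literature (orthonormal basis, independent inputs), not a detail taken from the thesis itself.

```latex
% Truncated polynomial chaos expansion of the response Y in orthonormal
% multivariate polynomials Psi_alpha of the standardized inputs xi, and the
% first-order Sobol' index of input i obtained from the PCE coefficients,
% where A_i is the set of multi-indices with alpha_i > 0 and alpha_j = 0 for j != i.
\[
  Y \;\approx\; \sum_{\alpha \in \mathcal{A}} c_\alpha\, \Psi_\alpha(\xi),
  \qquad
  \operatorname{Var}(Y) \;\approx\; \sum_{\alpha \in \mathcal{A},\, \alpha \neq 0} c_\alpha^2,
  \qquad
  S_i \;\approx\; \frac{\sum_{\alpha \in \mathcal{A}_i} c_\alpha^2}{\sum_{\alpha \neq 0} c_\alpha^2}.
\]
```
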
22

Wu, QiongLi. "Sensitivity Analysis for Functional Structural Plant Modelling." Phd thesis, Ecole Centrale Paris, 2012. http://tel.archives-ouvertes.fr/tel-00719935.

Abstract:
Global sensitivity analysis has a key role to play in the design and parameterization of functional-structural plant growth models (FSPM), which combine the description of plant structural development (organogenesis and geometry) and functional growth (biomass accumulation and allocation). Models of this type generally describe many interacting processes, involve a large number of parameters, and their computational cost can be significant. The general objective of this thesis is to develop a proper methodology for the sensitivity analysis of functional-structural plant models and to investigate how sensitivity analysis can help in the design and parameterization of such models, as well as provide insights for the understanding of the underlying biological processes. Our contribution can be summarized in two parts: from the methodology point of view, we first improved the performance of the existing Sobol's method for computing sensitivity indices in terms of computational efficiency, with a better control of the estimation error for Monte Carlo simulation, and we also designed a proper strategy of analysis for complex biophysical systems; from the application point of view, we implemented our strategy for 3 FSPMs with different levels of complexity, and analyzed the results from different perspectives (model parameterization, model diagnosis).
APA, Harvard, Vancouver, ISO, and other styles
23

Riahi, Hassen. "Analyse de structures à dimension stochastique élevée : application aux toitures bois sous sollicitation sismique." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2013. http://tel.archives-ouvertes.fr/tel-00881187.

Full text
Abstract:
The problem of high stochastic dimension is recurrent in probabilistic structural analyses. It corresponds to the exponential increase in the number of mechanical model evaluations when the number of uncertain parameters is large. To overcome this difficulty, this thesis proposes a two-step approach. The first step determines the effective stochastic dimension by ranking the uncertain parameters using screening methods. Once the parameters that dominate the variability of the model response are identified, they are modelled as random variables, while the remaining parameters are fixed at their respective mean values in the stochastic computation itself. This constitutes the second step of the proposed approach, in which the dimension decomposition method is used to characterise the randomness of the model response through the estimation of statistical moments and the construction of the probability density. This approach saves up to 90% of the computation time required by classical stochastic computation methods. It is then used to assess the integrity of the timber-framed roof of a single-family house located on a site of high seismic hazard. In this context, the analysis of the structural behaviour is based on a finite element model in which the timber joints are modelled by an anisotropic law with hysteresis, and the seismic action is represented by eight natural accelerograms provided by the BRGM. These accelerograms represent different soil types with reference to the Eurocode 8 classification. Failure of the roof is defined as the point at which the damage recorded in the joints located on the bracing and anti-buckling elements reaches a critical level set from test results. Deterministic analyses of the finite element model showed that the roof withstands the seismic hazard of the town of Le Moule in Guadeloupe. The probabilistic analyses showed that, among the 134 random variables representing the uncertainty in the non-linear behaviour of the joints, only 15 actually contribute to the variability of the mechanical response, which made it possible to reduce the stochastic dimension in the computation of the statistical moments. Based on the estimates of the mean and standard deviation, it was shown that the variability of the damage in the joints located on the bracing elements is greater than that of the damage in the joints located on the anti-buckling elements. Moreover, it is more significant for the signals that are most damaging to the structure.
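As an illustration of the screening idea mentioned above (ranking uncertain parameters before the stochastic computation proper), the sketch below implements a crude Morris-style elementary-effects screening for a generic model; the model, number of parameters and step size are assumptions for illustration, not the roof model or the accelerograms used in the thesis.

    import numpy as np

    def morris_screening(f, d, r=50, delta=0.5, rng=None):
        """Crude Morris-style screening: mean absolute elementary effect per factor.

        f     : model taking a 1-D array of length d (inputs scaled to [0, 1])
        d     : number of uncertain parameters
        r     : number of random base points
        delta : perturbation step in [0, 1]
        """
        rng = np.random.default_rng(rng)
        mu_star = np.zeros(d)
        for _ in range(r):
            x = rng.random(d) * (1.0 - delta)     # keep x + delta inside [0, 1]
            y0 = f(x)
            for i in range(d):
                xp = x.copy()
                xp[i] += delta
                mu_star[i] += abs(f(xp) - y0) / delta
        return mu_star / r   # larger value -> more influential parameter

    # Illustrative model: only two of six parameters really matter
    def model(x):
        return 5.0 * x[0] + np.sin(3.0 * x[3]) + 0.01 * x.sum()

    print(np.round(morris_screening(model, d=6, rng=1), 3))

Parameters with a small mean absolute effect would then be fixed at their mean values, which is the spirit of the dimension-reduction step described in the abstract.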
APA, Harvard, Vancouver, ISO, and other styles
24

Baidya, Suman K. "Trace gas and particulate matter emissions from road transportation in India quantification of current and future levels, uncertainties and sensitivity analysis." Berlin mbv, Mensch-und-Buch-Verl, 2008. http://d-nb.info/995878560/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Noto, Raffaella. "Flussi in mezzi porosi a saturazione variabile generati da canali superficiali disperdenti: analisi numerica bidimensionale." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find full text
Abstract:
In this study, the interaction of the flow generated by a leaking surface channel, through the subsoil, with the saturated zone was analysed, in order to determine the channel's distance of influence and to identify the quantities on which it depends. The analysis was carried out by means of a model built with the VS2DHI code, which allowed the degree of soil saturation to be determined once steady-state conditions were reached. A global sensitivity analysis was performed using the Polynomial Chaos Expansion (PCE) technique, with the aim of studying, through the computation of Sobol indices, how the variation of the water level H in the channel and the uncertainty associated with the soil porosity, the saturated hydraulic conductivity and the van Genuchten parameters affect the model prediction. The surrogate model defined with the PCE was obtained by applying the Probabilistic Collocation Method, so as to reduce the computational burden of the simulations. This thesis is part of a project aimed at estimating the level of the shallow water table and its variations resulting from the interaction with surface streams.
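To make the link between a Polynomial Chaos Expansion and Sobol indices concrete, the sketch below fits a small Legendre PCE by least squares (a simple stand-in for the Probabilistic Collocation Method) and reads the first-order Sobol indices off the squared coefficients; the analytical test function and the uniform inputs are assumptions for illustration, not the VS2DHI model of the thesis.

    import numpy as np
    from numpy.polynomial import legendre

    # Illustrative model with two uniform inputs on [-1, 1]
    def model(x1, x2):
        return x1 + 0.5 * x2 + 0.3 * x1 * x2

    rng = np.random.default_rng(0)
    n = 500
    x = rng.uniform(-1.0, 1.0, size=(n, 2))
    y = model(x[:, 0], x[:, 1])

    # Tensor-product Legendre basis up to total degree 2
    degs = [(a, b) for a in range(3) for b in range(3) if a + b <= 2]
    def basis(xi, deg):
        c = np.zeros(deg + 1); c[deg] = 1.0
        return legendre.legval(xi, c)

    A = np.column_stack([basis(x[:, 0], a) * basis(x[:, 1], b) for a, b in degs])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Variance contribution of each multi-index: coef^2 times the basis-term norm
    norm = np.array([1.0 / (2 * a + 1) / (2 * b + 1) for a, b in degs])
    var_terms = coef**2 * norm
    total_var = var_terms[1:].sum()            # exclude the constant term (0, 0)
    s1 = var_terms[[i for i, (a, b) in enumerate(degs) if a > 0 and b == 0]].sum() / total_var
    s2 = var_terms[[i for i, (a, b) in enumerate(degs) if b > 0 and a == 0]].sum() / total_var
    print(round(s1, 3), round(s2, 3))          # first-order Sobol indices of x1 and x2

Because the PCE coefficients carry the variance decomposition directly, the Sobol indices come essentially for free once the surrogate is built, which is why this route is attractive when each simulation is expensive.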
APA, Harvard, Vancouver, ISO, and other styles
26

Niang, Ibrahima. "Quantification et méthodes statistiques pour le risque de modèle." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1015/document.

Full text
Abstract:
In finance, model risk is the risk of financial loss resulting from the use of models. It is a complex risk which covers many different situations, in particular estimation risk (a model generally uses an estimated parameter) and the risk of model misspecification (using an inadequate model). This thesis focuses on model risk inherent in yield and credit curve construction methods and on the analysis of the consistency of Sobol indices with respect to stochastic ordering of model parameters. It is divided into three chapters. Chapter 1 focuses on model risk embedded in yield and credit curve construction methods. We analyse in particular the uncertainty associated with the construction of yield curves or credit curves. In this context, we derive arbitrage-free bounds for the discount factor and survival probability at the most liquid maturities, fully consistent with the quotes of the associated reference products. In Chapter 2, we quantify the impact of parameter risk through global sensitivity analysis and the theory of stochastic orders. We analyse in particular how Sobol indices are transformed following an increase of parameter uncertainty with respect to the dispersive or excess wealth orders. Chapter 3 focuses on the quantile contrast index. We first link this index with the risk measure CTE, and then analyse in which circumstances an increase of parameter uncertainty in the sense of the dispersive or excess wealth orders implies an increase of the quantile contrast index. Finally, we propose an estimation procedure for this index and prove, under suitable conditions, that the proposed estimator is consistent and asymptotically normal.
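The following small sketch illustrates, on a toy linear model, the kind of question studied in Chapter 2: how the first-order Sobol index of a parameter reacts when that parameter's uncertainty is increased (here simply by inflating its standard deviation, a crude proxy for the dispersive order); the model and numbers are assumptions for illustration, not results of the thesis.

    import numpy as np

    # Toy model Y = a*X1 + b*X2 with independent normal inputs: Sobol indices are analytic
    def sobol_first_order(sd1, sd2, a=1.0, b=1.0):
        v1, v2 = (a * sd1) ** 2, (b * sd2) ** 2
        total = v1 + v2
        return v1 / total, v2 / total

    base = sobol_first_order(sd1=1.0, sd2=1.0)
    inflated = sobol_first_order(sd1=2.0, sd2=1.0)   # X1 made 'more dispersed'
    print(np.round(base, 3), np.round(inflated, 3))
    # Increasing the dispersion of X1 raises S_1 and mechanically lowers S_2,
    # even though nothing about X2 or the model itself changed.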
APA, Harvard, Vancouver, ISO, and other styles
27

Ahmed, Anwar. "COST AND ACCURACY COMPARISONS IN MEDICAL TESTING USING SEQUENTIAL TESTING STRATEGIES." VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/103.

Full text
Abstract:
The practice of sequential testing is followed by the evaluation of accuracy, but often not by the evaluation of cost. This research described and compared three sequential testing strategies: believe the negative (BN), believe the positive (BP) and believe the extreme (BE), the latter being a less-examined strategy. All three strategies were used to combine the results of two medical tests to diagnose a disease or medical condition. Descriptions of these strategies were provided in terms of accuracy (using the maximum receiver operating curve, or MROC) and cost of testing (defined as the proportion of subjects who need two tests to diagnose disease), with the goal of minimizing the number of tests needed for each subject while maintaining test accuracy. It was shown that the cost of the test sequence could be reduced without sacrificing accuracy beyond an acceptable range by setting an acceptable tolerance (q) on maximum test sensitivity. This research introduced a newly developed ROC curve reflecting this reduced sensitivity and cost of testing, called the Minimum Cost Maximum Receiver Operating Characteristic (MCMROC) curve. Within these strategies, four different parameters that could influence the performance of the combined tests were examined: the area under the curve (AUC) of each individual test, the ratio of standard deviations (b) from assumed underlying disease and non-disease populations, the correlation (rho) between underlying disease populations, and disease prevalence. The following patterns were noted: under all parameter settings, the MROC curve of the BE strategy never performed worse than those of the BN and BP strategies, and it most frequently had the lowest cost. The parameters tended to have less of an effect on the MROC and MCMROC curves than they had on the cost curves, which were affected greatly. The AUC values and the ratio of standard deviations both had a greater effect on cost curves, MROC curves, and MCMROC curves than prevalence and correlation. The use of BMI and plasma glucose concentration to diagnose diabetes in Pima Indians was presented as an example of a real-world application of these strategies. It was found that the BN and BE strategies were the most consistently accurate and least expensive choices.
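As a hedged illustration of how the believe-the-negative (BN) and believe-the-positive (BP) rules combine two tests, the sketch below computes the sensitivity and specificity of each combination, plus the expected proportion of subjects needing the second test, from assumed characteristics of two conditionally independent tests; the numbers are hypothetical and this is not the model or data used in the dissertation.

    def bn_bp_combination(se1, sp1, se2, sp2, prev):
        """Combine two conditionally independent tests under BN and BP rules.

        BN ('believe the negative'): a negative on test 1 is final; positives get test 2
            and the sequence is positive only if both tests are positive.
        BP ('believe the positive'): a positive on test 1 is final; negatives get test 2
            and the sequence is positive if either test is positive.
        Returns (sensitivity, specificity, proportion needing test 2) for each rule.
        """
        # BN: positive iff T1+ and T2+
        se_bn = se1 * se2
        sp_bn = 1 - (1 - sp1) * (1 - sp2)
        cost_bn = prev * se1 + (1 - prev) * (1 - sp1)          # only T1-positives are retested
        # BP: positive iff T1+ or (T1- and T2+)
        se_bp = 1 - (1 - se1) * (1 - se2)
        sp_bp = sp1 * sp2
        cost_bp = prev * (1 - se1) + (1 - prev) * sp1          # only T1-negatives are retested
        return (se_bn, sp_bn, cost_bn), (se_bp, sp_bp, cost_bp)

    # Hypothetical tests: Se/Sp of 0.85/0.80 and 0.90/0.75, disease prevalence 10%
    bn, bp = bn_bp_combination(0.85, 0.80, 0.90, 0.75, prev=0.10)
    print("BN (Se, Sp, cost):", [round(v, 3) for v in bn])
    print("BP (Se, Sp, cost):", [round(v, 3) for v in bp])

BN trades sensitivity for specificity, BP does the opposite, and the retesting cost depends on how many subjects the first test sends on to the second, which is the trade-off the MCMROC curve is designed to expose.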
APA, Harvard, Vancouver, ISO, and other styles
28

Mannschatz, Theresa. "Site evaluation approach for reforestations based on SVAT water balance modeling considering data scarcity and uncertainty analysis of model input parameters from geophysical data." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-175309.

Full text
Abstract:
Extensive deforestations, particularly in the (sub)tropics, have led to intense soil degradation and erosion with a concomitant reduction in soil fertility. Reforestations or plantations on those degraded sites may provide effective measures to mitigate further soil degradation and erosion, and can lead to improved soil quality. However, a change in land use from, e.g., grassland to forest may have a crucial impact on the water balance. This may affect water availability even under humid tropical climate conditions where water is normally not a limiting factor. In this context, it should also be considered that, according to climate change projections, rainfall may decrease in some of these regions. To mitigate climate-change-related problems (e.g. increases in erosion and drought), reforestations are often carried out. Unfortunately, those measures are seldom completely successful, because the environmental conditions and the plant-specific requirements are not appropriately taken into account. This is often due to data scarcity and limited financial resources in tropical regions. For this reason, innovative approaches are required that are able to measure environmental conditions quasi-continuously in a cost-effective manner. Simultaneously, reforestation measures should be accompanied by monitoring in order to evaluate reforestation success and to mitigate, or at least reduce, potential problems associated with reforestation (e.g. water scarcity). To avoid reforestation failure and negative implications for ecosystem services, it is crucial to gain insights into the water balance of the actual ecosystem and the potential changes resulting from reforestation. The identification and prediction of water balance changes as a result of reforestation under climate change requires the consideration of the complex feedback system of processes in the soil-vegetation-atmosphere continuum. Models that account for this feedback system are Soil-Vegetation-Atmosphere-Transfer (SVAT) models. For the aforementioned reasons, this study targeted two main objectives: (i) to develop and test a method combination for site evaluation under data scarcity (i.e. study requirements) (Part I) and (ii) to investigate the consequences of prediction uncertainty of the SVAT model input parameters, which were derived using geophysical methods, on SVAT modeling (Part II). A water balance modeling approach was set at the center of the site evaluation approach. This study used the one-dimensional CoupModel, which is a SVAT model. CoupModel requires detailed spatial soil information for (i) model parameterization, (ii) upscaling of model results and accounting for local- to regional-scale soil heterogeneity, and (iii) monitoring of changes in soil properties and plant characteristics over time. Since traditional approaches to soil and vegetation sampling and monitoring are time-consuming and expensive (and therefore often limited to point information), geophysical methods were used to overcome this spatial limitation. For this reason, vis-NIR spectroscopy (visible to near-infrared wavelength range) was applied for the measurement of soil properties (physical and chemical), and remote sensing to derive vegetation characteristics (i.e. leaf area index (LAI)). Since the estimated soil properties (mainly texture) could be used to parameterize a SVAT model, this study investigated the whole processing chain and the related prediction uncertainty of soil texture and LAI, and their impact on CoupModel water balance prediction uncertainty.
A greenhouse experiment with bamboo plants was carried out to determine the plant-physiological characteristics needed for CoupModel parameterization. Geoelectrics was used to investigate soil layering, with the intent of determining site-representative soil profiles for model parameterization. Soil structure was investigated using image analysis techniques that allow the quantitative assessment and comparability of structural features. In order to meet the requirements of the selected study approach, the developed methodology was applied and tested for a site in NE Brazil (which has low data availability), with a bamboo plantation as the test site and a secondary forest as the reference site. Nevertheless, the objective of the thesis was not the concrete modeling of the case study site, but rather the evaluation of the suitability of the selected methods for assessing sites for reforestation and for monitoring their influence on the water balance as well as on soil properties. The results (Part III) highlight that one needs to be aware of the measurement uncertainty related to SVAT model input parameters; for instance, the uncertainty of input parameters such as soil texture and leaf area index meaningfully influences the simulated water balance output. Furthermore, this work indicates that vis-NIR spectroscopy is a fast and cost-efficient method for soil measurement, mapping, and monitoring of soil physical (texture) and chemical (N, TOC, TIC, TC) properties, where the quality of soil prediction depends on the instrument (e.g. sensor resolution), the sample properties (i.e. chemistry), and the site characteristics (i.e. climate). Additionally, the sensitivity of the CoupModel to texture prediction uncertainty, in terms of surface runoff, transpiration, evaporation, evapotranspiration, and soil water content, depends on site conditions (i.e. climate and soil type). For this reason, it is recommended that an SVAT model sensitivity analysis be carried out prior to field spectroscopic measurements, to account for site-specific climate and soil conditions. Nevertheless, mapping of the soil properties estimated via spectroscopy using kriging resulted in poor interpolation results (i.e. weak variograms) as a consequence of the accumulation of uncertainty from field measurement to mapping (i.e. spectroscopic soil prediction, kriging error) and of site-specific 'small-scale' heterogeneity. The selected soil evaluation method (vis-NIR spectroscopy, structure comparison using image analysis, traditional laboratory analysis) showed that there are significant differences between the bamboo soil and the adjacent secondary forest soil established on the same soil type (Vertisol). Reflecting on the major study results, it can be stated that the selected method combination is a step towards a more detailed and efficient way to evaluate the suitability of a specific site for reforestation. The results of this study provide insights into where and when, during soil and vegetation measurements, a high measurement accuracy is required to minimize uncertainties in SVAT modeling.
APA, Harvard, Vancouver, ISO, and other styles
29

HU, ZHENG-TAO, and 胡正濤. "Comparison of the sensitivity and specificity of isovolumic phase and ejection phase indices of myocardial intrinsic contractility." Thesis, 1990. http://ndltd.ncl.edu.tw/handle/47859666399414815245.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

"The Sensitivity of Confirmatory Factor Analytic Fit Indices to Violations of Factorial Invariance across Latent Classes: A Simulation Study." Doctoral diss., 2011. http://hdl.handle.net/2286/R.I.9267.

Full text
Abstract:
Although the issue of factorial invariance has received increasing attention in the literature, the focus is typically on differences in factor structure across groups that are directly observed, such as those denoted by sex or ethnicity. While establishing factorial invariance across observed groups is a requisite step in making meaningful cross-group comparisons, failure to attend to possible sources of latent class heterogeneity in the form of class-based differences in factor structure has the potential to compromise conclusions with respect to observed groups and may result in misguided attempts at instrument development and theory refinement. The present studies examined the sensitivity of two widely used confirmatory factor analytic model fit indices, the chi-square test of model fit and RMSEA, to latent class differences in factor structure. Two primary questions were addressed. The first of these concerned the impact of latent class differences in factor loadings with respect to model fit in a single sample reflecting a mixture of classes. The second question concerned the impact of latent class differences in configural structure on tests of factorial invariance across observed groups. The results suggest that both indices are highly insensitive to class-based differences in factor loadings. Across sample size conditions, models with medium (0.2) sized loading differences were rejected by the chi-square test of model fit at rates just slightly higher than the nominal .05 rate of rejection that would be expected under a true null hypothesis. While rates of rejection increased somewhat when the magnitude of loading difference increased, even the largest sample size with equal class representation and the most extreme violations of loading invariance only had rejection rates of approximately 60%. RMSEA was also insensitive to class-based differences in factor loadings, with mean values across conditions suggesting a degree of fit that would generally be regarded as exceptionally good in practice. In contrast, both indices were sensitive to class-based differences in configural structure in the context of a multiple group analysis in which each observed group was a mixture of classes. However, preliminary evidence suggests that this sensitivity may be contingent on the form of the cross-group model misspecification.
Dissertation/Thesis
Ph.D. Psychology 2011
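For reference, the population-corrected fit index discussed above is usually computed from the chi-square statistic as follows (a standard textbook formula, not one specific to this dissertation):

    \mathrm{RMSEA} = \sqrt{\max\left( \frac{\chi^2 - df}{df\,(N - 1)},\; 0 \right)}

where χ² is the model test statistic, df the model degrees of freedom and N the sample size; values near zero indicate close fit, which is why near-zero mean RMSEA values across conditions signal insensitivity to the loading differences studied.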
APA, Harvard, Vancouver, ISO, and other styles
31

Rodrigues, Diogo Castanhas. "Técnicas de análise de sensibilidade aplicadas ao processo de estampagem de uma taça quadrada." Master's thesis, 2021. http://hdl.handle.net/10316/96098.

Full text
Abstract:
Integrated Master's dissertation in Mechanical Engineering presented to the Faculdade de Ciências e Tecnologia
With industrial competitiveness increasing, it is crucial to have a good knowledge of sheet metal forming processes, so that they can be optimized and production times and costs reduced. In this dissertation, a sensitivity analysis is applied to the stamping process of a square cup, with the aim of understanding how the material properties and the process conditions can influence the stamping. The influence of the variability of the Young's modulus, the Poisson's ratio, the anisotropy coefficients, the constitutive parameters of Swift's law, the initial sheet thickness, the friction coefficient and the blank-holder force is studied. The objective is to evaluate the influence of the variability of these input parameters on the equivalent plastic strain, the geometry change, the thickness reduction, the punch force and the springback. The sensitivity analysis is performed with two distinct techniques: Sobol indices and PAWN indices. Before applying the sensitivity analysis, it is important to understand which zones of the cup are most affected by the variability of the input parameters. It was concluded that the cup flange and the zone near the curvature radius of the die are the most affected zones, the cup base being the zone least affected by the variability in the input parameters. Afterwards, the stabilization of the sensitivity indices was evaluated, and it was concluded that, for the same precision of results, the PAWN indices require only 7.7% to 12.8% of the simulations used to evaluate the Sobol indices. The sensitivity analysis showed that the input parameters with the most influence on the variability of the output parameters are the hardening coefficient, the parameter C of Swift's law and the anisotropy coefficient at 90º. Both sensitivity indices provide similar results for all output parameters, except the springback, for which it was shown that the PAWN indices are more accurate when applied to data that follow a multimodal distribution.
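As a rough sketch of how a PAWN-type index differs from a variance-based one, the code below estimates, for one input factor, the median Kolmogorov–Smirnov distance between the unconditional output distribution and output distributions conditioned on fixed values of that factor; the toy model, sample sizes and number of conditioning points are assumptions for illustration, not the stamping simulations of the dissertation.

    import numpy as np
    from scipy.stats import ks_2samp

    def pawn_index(f, d, i, n_uncond=2000, n_cond=500, n_points=10, rng=None):
        """PAWN-style sensitivity of factor i: median KS distance between the
        unconditional output CDF and CDFs obtained with X_i fixed at several values."""
        rng = np.random.default_rng(rng)
        y_uncond = f(rng.random((n_uncond, d)))
        ks = []
        for xi in np.linspace(0.05, 0.95, n_points):     # conditioning values for X_i
            x = rng.random((n_cond, d))
            x[:, i] = xi
            ks.append(ks_2samp(y_uncond, f(x)).statistic)
        return np.median(ks)

    # Toy model: X1 matters a lot, X2 a little, X3 not at all
    def f(x):
        return np.sin(3.0 * x[:, 0]) + 0.3 * x[:, 1]

    print([round(pawn_index(f, 3, i, rng=0), 3) for i in range(3)])

Because it works on whole output distributions rather than on the variance alone, a distribution-based index of this kind remains meaningful when the output is multimodal, which matches the observation made above about the springback.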
APA, Harvard, Vancouver, ISO, and other styles
32

Ruivo, Miguel António Fernandes Pereira. "Estudo numérico do processo de estampagem de uma taça quadrada: uma análise estocástica." Master's thesis, 2020. http://hdl.handle.net/10316/92101.

Full text
Abstract:
Integrated Master's dissertation in Mechanical Engineering presented to the Faculdade de Ciências e Tecnologia
In industry, it is becoming more and more important to guarantee the quality of the produced components. To ensure this quality, it is important to define which parameters should be considered important and whose control must be prioritized. With this goal, this dissertation presents a numerical study of the influence of the variability of the parameters associated with the mechanical behavior of the material and with the process conditions on the stamping results of a square cup. In this analysis, variability is assumed in the elastic properties, the constitutive parameters of Swift's law, the anisotropy coefficients, the sheet thickness, the friction coefficient and the blank-holder force. The effect of the variability of these parameters is evaluated on the punch force, the equivalent plastic strain, the thickness reduction, the geometry change and the springback. Firstly, the quasi-Monte Carlo method was used to evaluate the mean and standard deviation of the simulation outputs, considering the variability of the input parameters. With this analysis, it was possible to conclude that the geometry change and the springback are the outputs most sensitive to the variability of the input parameters, and that the cup flange is the zone where the effect of the variability is most significant. Afterwards, a variance-based sensitivity analysis was performed. In this analysis, first-order Sobol indices were used to identify the input parameters with the greatest effect on the variability of the outputs, and total Sobol indices were used to estimate the effect of the interactions between the different input parameters on the variability of the outputs. It was concluded that the input parameters with the greatest effect are the Swift law coefficients n and C and the anisotropy coefficient r_90, and that the effect of the interactions is only relevant for the geometry change. Furthermore, it was verified that the Sobol indices are not suitable for evaluating the influence of the input parameters on the springback, since this output has a multimodal distribution, so different sensitivity indices must be used. With the goal of reducing the number of simulations needed to compute the Sobol indices, a Polynomial Chaos Expansion metamodel is used. The Sobol indices obtained with this metamodel were compared with those obtained with the quasi-Monte Carlo method. It was concluded that the use of the metamodel allowed the computation time associated with the sensitivity analysis to be reduced significantly, without compromising the results.
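The sketch below shows one common way to estimate first-order and total Sobol indices from a quasi-Monte Carlo design (Saltelli-type sample matrices with Jansen's estimators); the test function is an arbitrary assumption and the code is generic, not the finite element stamping model of the dissertation.

    import numpy as np
    from scipy.stats import qmc

    def sobol_indices(f, d, m=11, seed=0):
        """First-order (S) and total (ST) Sobol indices with Jansen's estimators,
        using a scrambled Sobol sequence of 2**m points for the A and B matrices."""
        sampler = qmc.Sobol(d=2 * d, scramble=True, seed=seed)
        ab = sampler.random_base2(m)            # joint design in [0, 1]^(2d)
        a, b = ab[:, :d], ab[:, d:]
        ya, yb = f(a), f(b)
        var_y = np.var(np.concatenate([ya, yb]), ddof=1)
        s, st = np.zeros(d), np.zeros(d)
        for i in range(d):
            abi = a.copy()
            abi[:, i] = b[:, i]                 # A with column i taken from B
            yabi = f(abi)
            s[i] = (var_y - 0.5 * np.mean((yb - yabi) ** 2)) / var_y    # Jansen first-order
            st[i] = 0.5 * np.mean((ya - yabi) ** 2) / var_y             # Jansen total
        return s, st

    # Ishigami-type toy function on [0, 1]^3 (inputs rescaled to [-pi, pi])
    def f(x):
        z = (x - 0.5) * 2.0 * np.pi
        return np.sin(z[:, 0]) + 7.0 * np.sin(z[:, 1]) ** 2 + 0.1 * z[:, 2] ** 4 * np.sin(z[:, 0])

    s, st = sobol_indices(f, d=3)
    print(np.round(s, 2), np.round(st, 2))

A gap between the total and first-order index of a factor signals interaction effects, which is the kind of evidence used above to conclude that interactions matter only for the geometry change.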
APA, Harvard, Vancouver, ISO, and other styles
33

Brito, Miguel Abranches e. Menezes Peixoto de. "Análise de variabilidade na simulação numérica do processo de estampagem de um perfil em U." Master's thesis, 2020. http://hdl.handle.net/10316/92244.

Full text
Abstract:
Integrated Master's dissertation in Mechanical Engineering presented to the Faculdade de Ciências e Tecnologia
Forming processes are among the most widely used in the automotive, aeronautic and metalworking industries. The demand for higher-quality products and lower production costs has encouraged growing interest in the robust development and optimization of these processes, taking into account their inherent variability. In order to understand which are the most important sources of variability in the stamping of a U-rail, this thesis presents a numerical study that analyses the influence of the variability of eleven different input parameters (Young's modulus, Poisson's ratio, anisotropy coefficients, parameters of Swift's hardening law, initial sheet thickness, friction coefficient and blank-holder force) on the forming results (equivalent plastic strain, thickness reduction, springback, geometry change and punch force). Initially, the quasi-Monte Carlo method with a Sobol sequence was used to analyse the influence of the variability of the input parameters on the variability of the process results. In this phase, it was concluded that the variability of the input parameters affects all the results of the U-rail, mainly the geometry change. Then, the influence of each input parameter on the maximum values of the output variables was estimated by computing the Sobol indices. With this analysis, it was possible to conclude that the friction coefficient, the initial sheet thickness, the hardening coefficient and the constant C of Swift's law are the input parameters that most influence the results under study, and that the interactions between input parameters are only relevant for the geometry change. Lastly, the distributions of the Sobol indices were computed for all nodes, in order to analyse the influence of the input parameters along the U-rail geometry. This analysis showed that the friction coefficient, the constant C of Swift's law and the hardening coefficient are the parameters with the most influence in the flange and upper curvature zones, that the friction coefficient is the parameter with the most influence in the lower curvature zone, and that the initial thickness and the friction coefficient are the parameters with the most influence in the wall zone of the U-rail.
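A minimal sketch of the input-sampling step described above: drawing a Sobol low-discrepancy sequence and scaling it to assumed variability ranges of a few process parameters before running the forming simulations (the parameter names and bounds below are illustrative assumptions, not the values used in the thesis).

    import numpy as np
    from scipy.stats import qmc

    # Hypothetical variability ranges for three of the input parameters
    params = {
        "friction_coefficient": (0.05, 0.15),
        "initial_thickness_mm": (0.95, 1.05),
        "hardening_coefficient_n": (0.20, 0.30),
    }
    lower = [lo for lo, hi in params.values()]
    upper = [hi for lo, hi in params.values()]

    sampler = qmc.Sobol(d=len(params), scramble=True, seed=42)
    unit_sample = sampler.random_base2(m=7)               # 2**7 = 128 design points
    design = qmc.scale(unit_sample, lower, upper)         # map [0, 1]^d to physical ranges

    # Each row would be one input set for a forming simulation (e.g. a U-rail model)
    for j, name in enumerate(params):
        print(f"{name}: first 3 values -> {np.round(design[:3, j], 4)}")

Low-discrepancy sequences cover the input space more evenly than plain random sampling, which is why quasi-Monte Carlo designs tend to stabilise sensitivity estimates with fewer simulations.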
APA, Harvard, Vancouver, ISO, and other styles
34

Câmara, Bernardo Monteiro dos Santos de Aguiar da. "Análise de sensibilidade do ensaio biaxial em provete cruciforme." Master's thesis, 2021. http://hdl.handle.net/10316/94300.

Full text
Abstract:
Integrated Master's dissertation in Mechanical Engineering presented to the Faculdade de Ciências e Tecnologia
In this work, a sensitivity analysis is performed to evaluate the influence of the material parameters on the results of the biaxial test on a cruciform specimen. This analysis relies on first-order Sobol indices, which quantify the sensitivity to each material parameter, and on total Sobol indices, which allow the influence of the interactions between these parameters to be evaluated. The parameters studied are s0, K and n of the Swift hardening law, and F, G and N of the Hill'48 anisotropic yield criterion. The influence of these parameters is evaluated on the results of the biaxial test (output variables), namely the maximum forces along 0x and 0y, the principal strains e1 and e2, the equivalent plastic strain eeq, the thickness reduction and the strain paths. All these results of the biaxial test were obtained numerically with the finite element software DD3IMP. Initially, the sensitivity analysis was used to assess the influence of the material parameters on the maximum values of the output variables. From this analysis, it was concluded that: the parameters K and n of Swift's law are the most influential on the maximum value of the forces and of the principal strain e1; the maximum value of the equivalent plastic strain is mainly affected by the parameters G and n, although the remaining parameters also have a relevant influence; and the maximum values of e2 and of the thickness reduction are essentially affected by the parameter G of the Hill'48 criterion. Afterwards, the parameters that most influence the test results in each region of the specimen were analysed. From this analysis, it was concluded that, in general: G is the parameter that most influences the output variables in the center of the specimen and in the arm along 0x; F is the parameter that most influences the output variables in the arm along 0y; and the output variables in the zone of the curvature radius of the specimen are most sensitive to the parameters n, N and G. Overall, the parameters of the Hill'48 yield criterion are the ones that most influence the results along the cruciform specimen.
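For context, the two constitutive ingredients whose parameters are ranked above can be sketched as follows; the numerical parameter values are placeholders for illustration, not the calibrated values of the thesis.

    import numpy as np

    def swift_yield_stress(eps_p, sigma0=150.0, K=500.0, n=0.25):
        """Swift hardening law: Y = K * (eps0 + eps_p)^n, with eps0 chosen so that
        Y(0) = sigma0 (MPa values are placeholders)."""
        eps0 = (sigma0 / K) ** (1.0 / n)
        return K * (eps0 + eps_p) ** n

    def hill48_equivalent_stress(sxx, syy, sxy, F=0.5, G=0.5, H=0.5, N=1.5):
        """Hill'48 equivalent stress under plane stress (szz = 0):
        sigma_eq^2 = F*syy^2 + G*sxx^2 + H*(sxx - syy)^2 + 2*N*sxy^2."""
        return np.sqrt(F * syy**2 + G * sxx**2 + H * (sxx - syy) ** 2 + 2.0 * N * sxy**2)

    print(round(swift_yield_stress(0.05), 1))                  # hardened flow stress
    print(round(hill48_equivalent_stress(200.0, 50.0, 30.0), 1))

Because K and n set how fast the flow stress rises with plastic strain, while F, G and N shape the yield surface, it is plausible that the former dominate the force levels and the latter dominate the strain distribution over the specimen, as the analysis above reports.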
APA, Harvard, Vancouver, ISO, and other styles
35

Gouveia, Ana Margarida Lopes. "Inditex croup equity research - multiples and sensitivity analysis." Master's thesis, 2021. http://hdl.handle.net/10362/122793.

Full text
Abstract:
The aim of this Master's thesis is to present a fair valuation of Inditex, a global fashion retail group headquartered in Spain and operating in a completely new era of fast fashion, where technology keeps the sector ahead of its time. The Group is a recognized giant of the apparel industry, performing its activity in 92 markets with more than 7300 stores, of which almost 4900 are located in Europe, not to mention the diversified concept inherent to each of the eight brands owned by Inditex, from more accessible to exclusive high-quality products: Zara, Bershka, Pull & Bear, Stradivarius, Zara Home, Oysho, Massimo Dutti and Uterqüe. Notably, Zara is responsible for almost 70% of the total revenues generated. Following this purpose, an extensive investigation of Inditex was performed, analysing the current macroeconomic situation together with the industry's main risks and future growth perspectives, without disregarding the Group's key competitors and their influence on the market. All of this serves to ground the two valuation approaches, Discounted Cash Flow and Multiples valuation, although only the latter is developed in this report. This led to the choice of a top 6 of comparable companies based on EBITDA margin, revenue growth and debt-to-equity factors, which highlighted that the market is undervaluing the stock and that the comparables provide biased results, influenced by the current pandemic crisis, with a projected target price varying from 15.33 € to 23.48 € according to the EV/EBIT multiple, well below the prospective price. The results were then challenged through a sensitivity analysis, in which the major risks were considered, culminating in the conclusion that only another wave of the COVID-19 virus, the prolongation of the economic recovery until 2023, or an increase in cost of goods sold (mainly due to cost inefficiency and sustainability expenses) could lead the recommendation to change, to Sell and Hold, respectively. In addition, a delay in vaccine efficacy was considered, leading to the conclusion that the delay cannot be longer than 6 months for the investor to profit in terms of capital and dividend gains. This leads to the affirmation that Inditex will be able to overcome these uncertainties and deliver a 17.40% return to investors at the end of 2021, according to our main valuation approach, analysed in the principal report (annexed). Hence, we reinforce our Buy recommendation.
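As a simple illustration of the relative-valuation mechanics referred to above, the sketch below derives an implied share price from a peer EV/EBIT multiple; every number in it is a made-up placeholder, not a figure from the equity research report.

    def implied_price_from_ev_ebit(peer_ev_ebit, ebit, net_debt, shares_outstanding):
        """Implied share price from a peer-median EV/EBIT multiple.

        enterprise value = multiple * EBIT; equity value = EV - net debt."""
        enterprise_value = peer_ev_ebit * ebit
        equity_value = enterprise_value - net_debt
        return equity_value / shares_outstanding

    # Placeholder inputs (billions EUR and billions of shares), purely illustrative
    price = implied_price_from_ev_ebit(peer_ev_ebit=14.0, ebit=4.0,
                                       net_debt=-7.0, shares_outstanding=3.1)
    print(round(price, 2))   # negative net debt (net cash) raises the implied price

Sensitivity of the implied price to the chosen multiple and to EBIT assumptions can then be tabulated by re-running this computation over a grid of values, which is the basic idea behind the sensitivity analysis mentioned above.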
APA, Harvard, Vancouver, ISO, and other styles
36

Lima, Andréa Romero Esteves. "Laser para clarear dentes vitais: justifica-se indicar esta técnica?" Master's thesis, 2020. http://hdl.handle.net/10284/9412.

Full text
Abstract:
O clareamento dental é um método conservador para se "levar os dentes a uma tonalidade mais clara do que a cor natural dos mesmos" (FDA - USA). O objetivo deste trabalho é verificar evidências das duas principais vantagens desta técnica em dentes vitais, tendo o uso do laser como auxiliar: maior eficácia no resultado final do tratamento e menor sensibilidade pós-operatória. Neste trabalho, foi realizada uma revisão da literatura utilizando as bases de dados Pubmed/Medline. Concluiu-se que o laser (dependendo da dosimetria e do tipo de aparelho utilizado) tem mais eficácia do que as técnicas sem auxílio de fontes de luz. Porém, as evidências não confirmam menor sensibilidade no pós-operatório.
Dental bleaching is a conservative method to "bring teeth to a lighter tone than their natural color" (FDA/USA). The objective of this work is to verify the evidence for the two main advantages of this technique in vital teeth, using the laser as an auxiliary: greater efficacy in the final result of the treatment and less postoperative sensitivity. In this work, a literature review was performed using the Pubmed/Medline databases. It was concluded that the laser (depending on the dosimetry and the type of device used) is more effective than techniques without the aid of light sources. However, the evidence does not confirm less sensitivity in the postoperative period.
APA, Harvard, Vancouver, ISO, and other styles
37

Pelto, Joan McAlmond. "Field sensitivity of Native American students at Oregon State University, as determined by the group embedded figures test." Thesis, 1991. http://hdl.handle.net/1957/37426.

Full text
Abstract:
Historically, Native American students have not achieved academic success; ethnic and racial stereotypes are common explanations for the problem. Many perceive the Native American student to be lacking either academic preparation or socio-cultural support for success. A review of the literature showed emerging research which indicates that significant differences can be shown between the learning styles of Native American students and their non-Native counterparts. It has been claimed that these differences may account for some of the differences in academic achievement. The purpose of this study was to document more thoroughly the differences between the learning styles of Native American and non-Native university students, employing the Group Embedded Figures Test (GEFT). The GEFT measures the degree of field sensitivity, that is, the degree to which an individual is affected by the surrounding environment or situation within which learning is to take place. It has been postulated that Native American children tend to be reared in a culture which promotes field dependent learning styles. Conversely, children reared in families promoting strong individual identity tend to be more field independent. The results of administering the GEFT to a group of Native American university students and to a comparison group of non-Native students supported the theory. A numerical difference of 2.1, on a scale of 1 to 18, was found between the mean scores of the two study groups, with the Native American students scoring in the more field dependent domain. The mean score for the Native American student study group was 9.7, while that for the comparison group was 11.8. In addition to ethnic differences, the data from this study showed differences from previously established norms both by age and gender. Based on the results of this study, educators may be urged to consider the style in which a student learns before categorizing him or her as academically deficient. Further study of the learning styles of Native American students, and concomitantly of the teaching styles best suited to Native American students, is recommended.
Graduation date: 1991
APA, Harvard, Vancouver, ISO, and other styles
38

Gohore, Bi Goue D. "Évaluation et contrôle de l'irrégularité de la prise médicamenteuse : proposition et développement de stratégies rationnelles fondées sur une démarche de modélisations pharmacocinétiques et pharmacodynamiques." Thèse, 2010. http://hdl.handle.net/1866/4535.

Full text
Abstract:
L'hétérogénéité de réponses dans un groupe de patients soumis à un même régime thérapeutique doit être réduite au cours d'un traitement ou d'un essai clinique. Deux approches sont habituellement utilisées pour atteindre cet objectif. L'une vise essentiellement à construire une observance active. Cette approche se veut interactive et fondée sur l'échange « médecin-patient », « pharmacien-patient » ou « vétérinaire-éleveurs ». L'autre, plutôt passive et basée sur les caractéristiques du médicament, vise à contrôler en amont cette irrégularité. L'objectif principal de cette thèse était de développer de nouvelles stratégies d'évaluation et de contrôle de l'impact de l'irrégularité de la prise du médicament sur l'issue thérapeutique. Plus spécifiquement, le premier volet de cette recherche consistait à proposer des algorithmes mathématiques permettant d'estimer efficacement l'effet des médicaments dans un contexte de variabilité interindividuelle de profils pharmacocinétiques (PK). Cette nouvelle méthode est fondée sur l'utilisation concomitante de données in vitro et in vivo. Il s'agit de quantifier l'efficience (c'est-à-dire l'efficacité plus la fluctuation des concentrations in vivo) de chaque profil PK en incorporant, dans les modèles actuels d'estimation de l'efficacité in vivo, la fonction qui relie la concentration du médicament in vitro à l'effet pharmacodynamique. Comparativement aux approches traditionnelles, cette combinaison de fonctions capte de manière explicite la fluctuation des concentrations plasmatiques in vivo due à la fonction dynamique de prise médicamenteuse. De plus, elle soulève, à travers quelques exemples, des questions sur la pertinence de l'utilisation des indices statiques traditionnels d'efficacité (Cmax, AUC, etc.) comme outil de contrôle de l'antibiorésistance. Le deuxième volet de ce travail de doctorat était d'estimer les meilleurs temps d'échantillonnage sanguin dans une thérapie collective initiée chez les porcs. Pour ce faire, nous avons développé un modèle du comportement alimentaire collectif qui a été par la suite couplé à un modèle classique PK. À l'aide de ce modèle combiné, il a été possible de générer un profil PK typique de chaque stratégie alimentaire particulière. Les données ainsi générées ont été utilisées pour estimer les temps d'échantillonnage appropriés afin de réduire les incertitudes dues à l'irrégularité de la prise médicamenteuse dans l'estimation des paramètres PK et PD. Parmi les algorithmes proposés à cet effet, la méthode des médianes semble donner des temps d'échantillonnage convenables à la fois pour l'employé et pour les animaux. Enfin, le dernier volet du projet de recherche a consisté à proposer une approche rationnelle de caractérisation et de classification des médicaments selon leur capacité à tolérer des oublis sporadiques. Méthodologiquement, nous avons, à travers une analyse globale de sensibilité, quantifié la corrélation entre les paramètres PK/PD d'un médicament et l'effet d'irrégularité de la prise médicamenteuse. Cette approche a consisté à évaluer de façon concomitante l'influence de tous les paramètres PK/PD et à prendre en compte, par la même occasion, les relations complexes pouvant exister entre ces différents paramètres. Cette étude a été réalisée pour les inhibiteurs calciques, qui sont des antihypertenseurs agissant selon un modèle indirect d'effet.
En prenant en compte les valeurs des corrélations ainsi calculées, nous avons estimé et proposé un indice comparatif propre à chaque médicament. Cet indice est apte à caractériser et à classer les médicaments agissant par un même mécanisme pharmacodynamique en termes d'indulgence à des oublis de prises médicamenteuses. Il a été appliqué à quatre inhibiteurs calciques. Les résultats obtenus étaient en accord avec les données expérimentales, traduisant ainsi la pertinence et la robustesse de cette nouvelle approche. Les stratégies développées dans ce projet de doctorat sont essentiellement fondées sur l'analyse des relations complexes entre l'histoire de la prise médicamenteuse, la pharmacocinétique et la pharmacodynamique. De cette analyse, elles sont capables d'évaluer et de contrôler l'impact de l'irrégularité de la prise médicamenteuse avec une précision acceptable. De façon générale, les algorithmes qui sous-tendent ces démarches constitueront sans aucun doute des outils efficients dans le suivi et le traitement des patients. En outre, ils contribueront à contrôler les effets néfastes de la non-observance au traitement par la mise au point de médicaments indulgents aux oublis.
The heterogeneity of PK and/or PD profiles in patients undergoing the same treatment regimen should be avoided during treatment or clinical trials. Two traditional approaches are commonly used to achieve this purpose. One builds on the interactive synergy between the health caregiver and the patient, encouraging the patient to take an active part in his or her own compliance. The other is to develop drugs or dosing regimens that forgive poor compliance. The main objective of this thesis was to develop new methodologies for assessing and monitoring the impact of irregular drug intake on the therapeutic outcome. Specifically, the first phase of this research was to develop algorithms for evaluating the efficacy of a treatment by extending classical breakpoint estimation methods to the situation of variable drug disposition. This method introduces the "efficiency" of a PK profile by using the efficacy function as a weight in the area under the curve (AUC) formula. It gives a more powerful PK/PD link and reveals, through some examples, interesting issues about the uniqueness of therapeutic outcome indices and antibiotic resistance problems. The second part of this thesis was to determine the optimal sampling times by accounting for the intervariability in drug disposition in collectively treated pigs. For this, we developed an advanced mathematical model able to generate different PK profiles for various feed strategies. Three algorithms were applied to identify the optimal sampling times with the criterion of minimizing the PK intervariability. The median-based method yielded suitable sampling periods in terms of convenience for farm staff and animal welfare. The last part of our research was to establish a rational way to delineate drugs in terms of their "forgiveness", based on their PK/PD properties. For this, a global sensitivity analysis (GSA) was performed to identify the parameters most sensitive to dose omissions. We then proposed a comparative drug forgiveness index to rank drugs in terms of their tolerability to non-compliance, with application to four calcium channel blockers. The classification of these molecules in terms of drug forgiveness is in concordance with what has been reported in experimental studies. The strategies developed in this Ph.D. project, essentially based on the analysis of the complex relationships between drug intake history, pharmacokinetic and pharmacodynamic properties, are able to assess and control the impact of non-compliance with acceptable uncertainty. In general, the algorithms that underlie these approaches will undoubtedly be efficient tools for patient monitoring during a dosing regimen. Moreover, they will contribute to controlling the harmful impact of non-compliance through the development of new drugs able to tolerate sporadic dose omissions.
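The "efficiency" index described above, i.e. an in vitro concentration-effect relationship used as a weight inside the AUC integral, can be sketched as follows. The one-compartment PK model, the Emax parameters, and the dosing histories below are assumptions chosen for illustration only; they are not the models or values used in the thesis.

```python
import numpy as np

def concentration(t, dose_times, dose=100.0, ka=1.0, ke=0.15, v=50.0):
    """Superposition of one-compartment, first-order-absorption doses (Bateman function)."""
    c = np.zeros_like(t)
    for td in dose_times:
        dt = np.clip(t - td, 0.0, None)        # zero contribution before the dose is given
        c += dose * ka / (v * (ka - ke)) * (np.exp(-ke * dt) - np.exp(-ka * dt))
    return c

def emax_effect(c, emax=1.0, ec50=1.0):
    """In vitro concentration-effect function used as the weighting term."""
    return emax * c / (ec50 + c)

def trapezoid(y, x):
    """Composite trapezoidal rule for the time integral."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

t = np.linspace(0.0, 240.0, 4801)              # ten days, time in hours
full = list(range(0, 240, 24))                 # perfect compliance: one dose every 24 h
missed = [td for td in full if td != 96]       # same regimen with the day-5 dose omitted

for label, schedule in [("full compliance", full), ("one missed dose", missed)]:
    c = concentration(t, schedule)
    auc = trapezoid(c, t)                      # classical static exposure index
    efficiency = trapezoid(emax_effect(c), t)  # effect-weighted integral ("efficiency")
    print(f"{label:>16}: AUC = {auc:8.1f}, efficiency index = {efficiency:7.1f}")
```

Comparing the two schedules shows how the effect-weighted index responds to a dose omission differently from the raw AUC, which is what makes this kind of index useful when drug intake is irregular.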
APA, Harvard, Vancouver, ISO, and other styles
40

Mannschatz, Theresa. "Site evaluation approach for reforestations based on SVAT water balance modeling considering data scarcity and uncertainty analysis of model input parameters from geophysical data." Doctoral thesis, 2014. https://tud.qucosa.de/id/qucosa%3A28829.

Full text
Abstract:
Extensive deforestations, particularly in the (sub)tropics, have led to intense soil degradation and erosion with concomitant reduction in soil fertility. Reforestations or plantations on those degraded sites may provide effective measures to mitigate further soil degradation and erosion, and can lead to improved soil quality. However, a change in land use from, e.g., grassland to forest may have a crucial impact on water balance. This may affect water availability even under humid tropical climate conditions where water is normally not a limiting factor. In this context, it should also be considered that according to climate change projections rainfall may decrease in some of these regions. To mitigate climate change related problems (e.g. increases in erosion and drought), reforestations are often carried out. Unfortunately, those measures are seldom completely successful, because the environmental conditions and the plant specific requirements are not appropriately taken into account. This is often due to data-scarcity and limited financial resources in tropical regions. For this reason, innovative approaches are required that are able to measure environmental conditions quasi-continuously in a cost-effective manner. Simultaneously, reforestation measures should be accompanied by monitoring in order to evaluate reforestation success and to mitigate, or at least to reduce, potential problems associated with reforestation (e.g. water scarcity). To avoid reforestation failure and negative implications on ecosystem services, it is crucial to get insights into the water balance of the actual ecosystem, and potential changes resulting from reforestation. The identification and prediction of water balance changes as a result of reforestation under climate change requires the consideration of the complex feedback system of processes in the soil-vegetation-atmosphere continuum. Models that account for those feedback system are Soil-Vegetation-Atmosphere-Transfer (SVAT) models. For the before-mentioned reasons, this study targeted two main objectives: (i) to develop and test a method combination for site evaluation under data scarcity (i.e. study requirements) (Part I) and (ii) to investigate the consequences of prediction uncertainty of the SVAT model input parameters, which were derived using geophysical methods, on SVAT modeling (Part II). A water balance modeling approach was set at the center of the site evaluation approach. This study used the one-dimensional CoupModel, which is a SVAT model. CoupModel requires detailed spatial soil information for (i) model parameterization, (ii) upscaling of model results and accounting for local to regional-scale soil heterogeneity, and (iii) monitoring of changes in soil properties and plant characteristics over time. Since traditional approaches to soil and vegetation sampling and monitoring are time consuming and expensive (and therefore often limited to point information), geophysical methods were used to overcome this spatial limitation. For this reason, vis-NIR spectroscopy (visible to near-infrared wavelength range) was applied for the measurement of soil properties (physical and chemical), and remote sensing to derive vegetation characteristics (i.e. leaf area index (LAI)). Since the estimated soil properties (mainly texture) could be used to parameterize a SVAT model, this study investigated the whole processing chain and related prediction uncertainty of soil texture and LAI, and their impact on CoupModel water balance prediction uncertainty. 
A greenhouse experiment with bamboo plants was carried out to determine plant-physiological characteristics needed for CoupModel parameterization. Geoelectrics was used to investigate soil layering, with the intent of determining site-representative soil profiles for model parameterization. Soil structure was investigated using image analysis techniques that allow the quantitative assessment and comparability of structural features. In order to meet the requirements of the selected study approach, the developed methodology was applied and tested for a site in NE Brazil (which has low data availability), with a bamboo plantation as the test site and a secondary forest as the reference site. Nevertheless, the objective of the thesis was not the concrete modeling of the case study site, but rather the evaluation of the suitability of the selected methods to evaluate sites for reforestations and to monitor their influence on the water balance as well as on soil properties. The results (Part III) highlight that one needs to be aware of the measurement uncertainty related to SVAT model input parameters; for instance, the uncertainty of input parameters such as soil texture and leaf area index meaningfully influences the simulated water balance output. Furthermore, this work indicates that vis-NIR spectroscopy is a fast and cost-efficient method for soil measurement, mapping, and monitoring of soil physical (texture) and chemical (N, TOC, TIC, TC) properties, where the quality of soil prediction depends on the instrument (e.g. sensor resolution), the sample properties (i.e. chemistry), and the site characteristics (i.e. climate). In addition, the sensitivity of CoupModel outputs (surface runoff, transpiration, evaporation, evapotranspiration, and soil water content) to texture prediction uncertainty depends on site conditions (i.e. climate and soil type). For this reason, it is recommended that SVAT model sensitivity analysis be carried out prior to field spectroscopic measurements to account for site-specific climate and soil conditions. Nevertheless, mapping the soil properties estimated via spectroscopy using kriging resulted in poor interpolation (i.e. weak variograms), as a consequence of the summation of uncertainty from field measurement to mapping (i.e. spectroscopic soil prediction, kriging error) and of site-specific 'small-scale' heterogeneity. The selected soil evaluation methods (vis-NIR spectroscopy, structure comparison using image analysis, traditional laboratory analysis) showed that there are significant differences between the bamboo soil and the adjacent secondary forest soil established on the same soil type (Vertisol). Reflecting on the major study results, it can be stated that the selected method combination is a way forward to a more detailed and efficient evaluation of the suitability of a specific site for reforestation. The results of this study provide insights into where and when, during soil and vegetation measurements, a high measurement accuracy is required to minimize uncertainties in SVAT modeling.
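One way to make the reported link between input-parameter uncertainty and water-balance uncertainty concrete is straightforward Monte Carlo propagation. The sketch below is a deliberately simple stand-in: it propagates an assumed error in a texture-derived parameter (plant-available water capacity) through a daily bucket model, not through CoupModel, and the forcing, the distribution and the parameter values are all placeholders rather than data from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

def bucket_water_balance(rain, pet, awc):
    """Minimal daily bucket model: storage capped at the available water
    capacity awc (mm); actual ET scales with relative soil moisture."""
    storage, aet_total, runoff_total = 0.5 * awc, 0.0, 0.0
    for p, e in zip(rain, pet):
        storage += p
        runoff = max(storage - awc, 0.0)       # excess leaves as runoff/drainage
        storage -= runoff
        aet = e * storage / awc                # ET limited by relative storage
        storage -= aet
        aet_total += aet
        runoff_total += runoff
    return aet_total, runoff_total

# Synthetic one-year forcing (mm/day); placeholders, not site data.
days = 365
rain = rng.gamma(shape=0.3, scale=12.0, size=days)
pet = np.full(days, 4.0)

# Assumed uncertainty of the texture-derived parameter (e.g. a vis-NIR prediction
# error translated into available water capacity): N(120 mm, 15 mm), truncated.
awc_samples = rng.normal(loc=120.0, scale=15.0, size=2000).clip(min=40.0)

results = np.array([bucket_water_balance(rain, pet, awc) for awc in awc_samples])
aet, runoff = results[:, 0], results[:, 1]
print(f"annual actual ET: {aet.mean():6.1f} +/- {aet.std():5.1f} mm")
print(f"annual runoff   : {runoff.mean():6.1f} +/- {runoff.std():5.1f} mm")
```

The spread of the simulated outputs relative to the spread of the input is a simple measure of how strongly a given measurement uncertainty matters for the simulated water balance, which is the kind of question the thesis addresses with CoupModel under different climate and soil conditions.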
Umfangreiche Abholzungen, besonders in den (Sub-)Tropen, habe zu intensiver Bodendegradierung und Erosion mit einhergehendem Verlust der Bodenfruchtbarkeit geführt. Eine wirksame Maßnahme zur Vermeidung fortschreitender Bodendegradierung und Erosion sind Aufforstungen auf diesen Flächen, die bisweilen zu einer verbesserten Bodenqualität führen können. Eine Umwandlung von Grünland zu Wald kann jedoch einen entscheidenden Einfluss auf den Wasserhaushalt haben. Selbst unter humid-tropischen Klimabedingungen, wo Wasser in der Regel kein begrenzender Faktor ist, können sich Aufforstungen negativ auf die Wasserverfügbarkeit auswirken. In diesem Zusammenhang muss auch berücksichtigt werden, dass Klimamodelle eine Abnahme der Niederschläge in einigen dieser Regionen prognostizieren. Um die Probleme, die mit dem Klimawandel in Verbindung stehen zu mildern (z.B. Zunahme von Erosion und Dürreperioden), wurden und werden bereits umfangreiche Aufforstungsmaßnahmen durchgeführt. Viele dieser Maßnahmen waren nicht immer umfassend erfolgreich, weil die Umgebungsbedingungen sowie die pflanzenspezifischen Anforderungen nicht angemessen berücksichtigt wurden. Dies liegt häufig an der schlechten Datengrundlage sowie an den in vielen Entwicklungs- und Schwellenländern begrenzter verfügbarer finanzieller Mittel. Aus diesem Grund werden innovative Ansätze benötigt, die in der Lage sind quasi-kontinuierlich und kostengünstig die Standortbedingungen zu erfassen und zu bewerten. Gleichzeitig sollte eine Überwachung der Wiederaufforstungsmaßnahme erfolgen, um deren Erfolg zu bewerten und potentielle negative Effekte (z.B. Wasserknappheit) zu erkennen und diesen entgegenzuwirken bzw. reduzieren zu können. Um zu vermeiden, dass Wiederaufforstungen fehlschlagen oder negative Auswirkungen auf die Ökosystemdienstleistungen haben, ist es entscheidend, Kenntnisse vom tatsächlichen Wasserhaushalt des Ökosystems zu erhalten und Änderungen des Wasserhaushalts durch Wiederaufforstungen vorhersagen zu können. Die Ermittlung und Vorhersage von Wasserhaushaltsänderungen infolge einer Aufforstung unter Berücksichtigung des Klimawandels erfordert die Berücksichtigung komplex-verzahnter Rückkopplungsprozesse im Boden-Vegetations-Atmosphären Kontinuum. Hydrologische Modelle, die explizit den Einfluss der Vegetation auf den Wasserhaushalt untersuchen sind Soil-Vegetation-Atmosphere-Transfer (SVAT) Modelle. Die vorliegende Studie verfolgte zwei Hauptziele: (i) die Entwicklung und Erprobung einer Methodenkombination zur Standortbewertung unter Datenknappheit (d.h. Grundanforderung des Ansatzes) (Teil I) und (ii) die Untersuchung des Einflusses der mit geophysikalischen Methoden vorhergesagten SVAT-Modeleingangsparameter (d.h. Vorhersageunsicherheiten) auf die Modellierung (Teil II). Eine Wasserhaushaltsmodellierung wurde in den Mittelpunkt der Methodenkombination gesetzt. In dieser Studie wurde das 1D SVAT Model CoupModel verwendet. CoupModel benötigen detaillierte räumliche Bodeninformationen (i) zur Modellparametrisierung, (ii) zum Hochskalierung von Modellergebnissen unter Berücksichtigung lokaler und regionaler Bodenheterogenität, und (iii) zur Beobachtung (Monitoring) der zeitlichen Veränderungen des Bodens und der Vegetation. Traditionelle Ansätze zur Messung von Boden- und Vegetationseigenschaften und deren Monitoring sind jedoch zeitaufwendig, teuer und beschränken sich daher oft auf Punktinformationen. Ein vielversprechender Ansatz zur Überwindung der räumlichen Einschränkung sind die Nutzung geophysikalischer Methoden. 
Aus diesem Grund wurden vis-NIR Spektroskopie (sichtbarer bis nah-infraroter Wellenlängenbereich) zur quasi-kontinuierlichen Messung von physikalischer und chemischer Bodeneigenschaften und Satelliten-basierte Fernerkundung zur Ableitung von Vegetationscharakteristika (d.h. Blattflächenindex (BFI)) eingesetzt. Da die mit geophysikalisch hergeleiteten Bodenparameter (hier Bodenart) und Pflanzenparameter zur Parametrisierung eines SVAT Models verwendet werden können, wurde die gesamte Prozessierungskette und die damit verbundenen Unsicherheiten und deren potentiellen Auswirkungen auf die Wasserhaushaltsmodellierung mit CoupModel untersucht. Ein Gewächshausexperiment mit Bambuspflanzen wurde durchgeführt, um die zur CoupModel Parametrisierung notwendigen pflanzenphysio- logischen Parameter zu bestimmen. Geoelektrik wurde eingesetzt, um die Bodenschichtung der Untersuchungsfläche zu untersuchen und ein repräsentatives Bodenprofil zur Modellierung zu definieren. Die Bodenstruktur wurde unter Verwendung einer Bildanalysetechnik ausgewertet, die die qualitativen Bewertung und Vergleichbarkeit struktureller Merkmale ermöglicht. Um den Anforderungen des gewählten Standortbewertungsansatzes gerecht zu werden, wurde die Methodik auf einem Standort mit einer Bambusplantage und einem Sekundärregenwald (als Referenzfläche) in NO-Brasilien (d.h. geringe Datenverfügbarkeit) entwickelt und getestet. Das Ziel dieser Arbeit war jedoch nicht die Modellierung dieses konkreten Standortes, sondern die Bewertung der Eignung des gewählten Methodenansatzes zur Standortbewertung für Aufforstungen und deren zeitliche Beobachtung, als auch die Bewertung des Einfluss von Aufforstungen auf den Wasserhaushalt und die Bodenqualität. Die Ergebnisse (Teil III) verdeutlichen, dass es notwendig ist, sich den potentiellen Einfluss der Messunsicherheiten der SVAT Modelleingangsparameter auf die Modellierung bewusst zu sein. Beispielsweise zeigte sich, dass die Vorhersageunsicherheiten der Bodentextur und des BFI einen bedeutenden Einfluss auf die Wasserhaushaltsmodellierung mit CoupModel hatte. Die Arbeit zeigt weiterhin, dass vis-NIR Spektroskopie zur schnellen und kostengünstigen Messung, Kartierung und Überwachung boden-physikalischer (Bodenart) und -chemischer (N, TOC, TIC, TC) Eigenschaften geeignet ist. Die Qualität der Bodenvorhersage hängt vom Instrument (z.B. Sensorauflösung), den Probeneigenschaften (z.B. chemische Zusammensetzung) und den Standortmerkmalen (z.B. Klima) ab. Die Sensitivitätsanalyse mit CoupModel zeigte, dass der Einfluss der spektralen Bodenartvorhersageunsicherheiten auf den mit CoupModel simulierten Oberflächenabfluss, Evaporation, Transpiration und Evapotranspiration ebenfalls von den Standortbedingungen (z.B. Klima, Bodentyp) abhängt. Aus diesem Grund wird empfohlen eine SVAT Model Sensitivitätsanalyse vor der spektroskopischen Feldmessung von Bodenparametern durchzuführen, um die Standort-spezifischen Boden- und Klimabedingungen angemessen zu berücksichtigen. Die Anfertigung einer Bodenkarte unter Verwendung von Kriging führte zu schlechten Interpolationsergebnissen in Folge der Aufsummierung von Mess- und Schätzunsicherheiten (d.h. bei spektroskopischer Feldmessung, Kriging-Fehler) und der kleinskaligen Bodenheterogenität. 
Anhand des gewählten Bodenbewertungsansatzes (vis-NIR Spektroskopie, Strukturvergleich mit Bildanalysetechnik, traditionelle Laboranalysen) konnte gezeigt werden, dass es bei gleichem Bodentyp (Vertisol) signifikante Unterschiede zwischen den Böden unter Bambus und Sekundärwald gibt. Anhand der wichtigsten Ergebnisse kann festgehalten werden, dass die gewählte Methodenkombination zur detailreicheren und effizienteren Standortuntersuchung und -bewertung für Aufforstungen beitragen kann. Die Ergebnisse dieser Studie geben einen Einblick darauf, wo und wann bei Boden- und Vegetationsmessungen eine besonders hohe Messgenauigkeit erforderlich ist, um Unsicherheiten bei der SVAT Modellierung zu minimieren.
Extensos desmatamentos que estão sendo feitos especialmente nos trópicos e sub-trópicos resultam em uma intensa degradação do solo e num aumento da erosão gerando assim uma redução na sua fertilidade. Reflorestamentos ou plantações nestas áreas degradadas podem ser medidas eficazes para atenuar esses problemas e levar a uma melhoria da qualidade do mesmo. No entanto, uma mudança no uso da terra, por exemplo de pastagem para floresta pode ter um impacto crucial no balanço hídrico e isso pode afetar a disponibilidade de água, mesmo sob condições de clima tropical úmido, onde a água normalmente não é um fator limitante. Devemos levar também em consideração que de acordo com projeções de mudanças climáticas, as precipitações em algumas dessas regiões também diminuirão agravando assim, ainda mais o quadro apresentado. Para mitigar esses problemas relacionados com as alterações climáticas, reflorestamentos são frequentemente realizados mas raramente são bem-sucedidos, pois condições ambientais como os requisitos específicos de cada espécie de planta, não são devidamente levados em consideração. Isso é muitas vezes devido, não só pela falta de dados, como também por recursos financeiros limitados, que são problemas comuns em regiões tropicais. Por esses motivos, são necessárias abordagens inovadoras que devam ser capazes de medir as condições ambientais quase continuamente e de maneira rentável. Simultaneamente com o reflorestamento, deve ser feita uma monitoração a fim de avaliar o sucesso da atividade e para prevenir, ou pelo menos, reduzir os problemas potenciais associados com o mesmo (por exemplo, a escassez de água). Para se evitar falhas e reduzir implicações negativas sobre os ecossistemas, é crucial obter percepções sobre o real balanço hídrico e as mudanças que seriam geradas por esse reflorestamento. Por este motivo, esta tese teve como objetivo desenvolver e testar uma combinação de métodos para avaliação de áreas adequadas para reflorestamento. Com esse intuito, foi colocada no centro da abordagem de avaliação a modelagem do balanço hídrico local, que permite a identificação e estimação de possíveis alterações causadas pelo reflorestamento sob mudança climática considerando o sistema complexo de realimentação e a interação de processos do continuum solo-vegetação-atmosfera. Esses modelos hidrológicos que investigam explicitamente a influência da vegetação no equilíbrio da água são conhecidos como modelos Solo-Vegetação-Atmosfera (SVAT). Esta pesquisa focou em dois objetivos principais: (i) desenvolvimento e teste de uma combinação de métodos para avaliação de áreas que sofrem com a escassez de dados (pré-requisito do estudo) (Parte I), e (ii) a investigação das consequências da incerteza nos parâmetros de entrada do modelo SVAT, provenientes de dados geofísicos, para modelagem hídrica (Parte II). A fim de satisfazer esses objetivos, o estudo foi feito no nordeste brasileiro,por representar uma área de grande escassez de dados, utilizando como base uma plantação de bambu e uma área de floresta secundária. Uma modelagem do balanço hídrico foi disposta no centro da metodologia para a avaliação de áreas. 
Este estudo utilizou o CoupModel que é um modelo SVAT unidimensional e que requer informações espaciais detalhadas do solo para (i) a parametrização do modelo, (ii) aumento da escala dos resultados da modelagem, considerando a heterogeneidade do solo de escala local para regional e (iii) o monitoramento de mudanças nas propriedades do solo e características da vegetação ao longo do tempo. Entretanto, as abordagens tradicionais para amostragem de solo e de vegetação e o monitoramento são demorados e caros e portanto muitas vezes limitadas a informações pontuais. Por esta razão, métodos geofísicos como a espectroscopia visível e infravermelho próximo (vis-NIR) e sensoriamento remoto foram utilizados respectivamente para a medição de propriedades físicas e químicas do solo e para derivar as características da vegetação baseado no índice da área foliar (IAF). Como as propriedades estimadas de solo (principalmente a textura) poderiam ser usadas para parametrizar um modelo SVAT, este estudo investigou toda a cadeia de processamento e as incertezas de previsão relacionadas à textura de solo e ao IAF. Além disso explorou o impacto destas incertezas criadas sobre a previsão do balanço hídrico simulado por CoupModel. O método geoelétrico foi aplicado para investigar a estratificação do solo visando a determinação de um perfil representante. Já a sua estrutura foi explorada usando uma técnica de análise de imagens que permitiu a avaliação quantitativa e a comparabilidade dos aspectos estruturais. Um experimento realizado em uma estufa com plantas de bambu (Bambusa vulgaris) foi criado a fim de determinar as caraterísticas fisiológicas desta espécie que posteriormente seriam utilizadas como parâmetros para o CoupModel. Os resultados do estudo (Parte III) destacam que é preciso estar consciente das incertezas relacionadas à medição de parâmetros de entrada do modelo SVAT. A incerteza presente em alguns parâmetros de entrada como por exemplo, textura de solo e o IAF influencia significantemente a modelagem do balanço hídrico. Mesmo assim, esta pesquisa indica que vis-NIR espectroscopia é um método rápido e economicamente viável para medir, mapear e monitorar as propriedades físicas (textura) e químicas (N, TOC, TIC, TC) do solo. A precisão da previsão dessas propriedades depende do tipo de instrumento (por exemplo da resolução do sensor), da propriedade da amostra (a composição química por exemplo) e das características das condições climáticas da área. Os resultados apontam também que a sensitividade do CoupModel à incerteza da previsão da textura de solo em respeito ao escoamento superficial, transpiração, evaporação, evapotranspiração e ao conteúdo de água no solo depende das condições gerais da área (por exemplo condições climáticas e tipo de solo). Por isso, é recomendado realizar uma análise de sensitividade do modelo SVAT prior a medição espectral do solo no campo, para poder considerar adequadamente as condições especificas do área em relação ao clima e ao solo. Além disso, o mapeamento de propriedades de solo previstas pela espectroscopia usando o kriging, resultou em interpolações de baixa qualidade (variogramas fracos) como consequência da acumulação de incertezas surgidas desde a medição no campo até o seu mapeamento (ou seja, previsão do solo via espectroscopia, erro do kriging) e heterogeneidade especifica de uma pequena escala. 
Os métodos selecionados para avaliação das áreas (vis-NIR espectroscopia, comparação da estrutura de solo por meio de análise de imagens, análises de laboratório tradicionais) revelaram a existência de diferenças significativas entre o solo sob bambu e o sob floresta secundária, apesar de ambas terem sido estabelecidas no mesmo tipo de solo (vertissolo). Refletindo sobre os principais resultados do estudo, pode-se afirmar que a combinação dos métodos escolhidos e aplicados representa uma forma mais detalhada e eficaz de avaliar se uma determinada área é adequada para ser reflorestada. Os resultados apresentados fornecem percepções sobre onde e quando, durante a medição do solo e da vegetação, é necessário ter uma precisão mais alta a fim de minimizar incertezas potenciais na modelagem com o modelo SVAT.
APA, Harvard, Vancouver, ISO, and other styles