Theses on the topic "Scenario uncertainty"


Below are the top 50 dissertations (degree and doctoral theses) for research on the topic "Scenario uncertainty".


1

Mott, Lacroix Kelly, Ashley Hullinger, Mark Apel, William Brandau and Sharon B. Megdal. "Using Scenario Planning to Prepare for Uncertainty in Rural Watersheds". College of Agriculture, University of Arizona (Tucson, AZ), 2015. http://hdl.handle.net/10150/593579.

Full text
Abstract:
10 pp.
Planning for an uncertain future presents many challenges. Thinking systematically and creatively about what is in store through a process called scenario planning can help illuminate options for action and improve decision-making. This guide focuses on a process for developing scenarios to help communities and watershed groups explore what might happen in the years to come, make more informed decisions today, and build a watershed management process. The systematic approach to scenario planning described here is based on the lessons learned through a yearlong scenario planning process in the Upper Gila Watershed in southeastern Arizona and Water Resource Research Center’s (WRRC) research on scenario planning.
2

Cooksey, Kenneth Daniel. "A portfolio approach to design in the presence of scenario-based uncertainty". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49036.

Full text
Abstract:
Current aircraft conceptual design practices result in the selection of a single (hopefully) Pareto optimal design to be carried forward into preliminary design. This paradigm is based on the assumption that carrying a significant number of concepts forward is too costly and thus early down-selection between competing concepts is necessary. However, this approach requires that key architectural design decisions which drive performance and market success are fixed very early in the design process, sometimes years before the aircraft actually goes to market. In the presence of uncertainty, if the design performance is examined for individual scenarios as opposed to measuring performance of the design with aggregate statistics, the author finds that the single concept approach can lead to less than desirable design outcomes. This thesis proposes an alternate conceptual design paradigm which leverages principles from economics (specifically the Nobel prize-winning modern portfolio theory) to improve design outcomes by intelligently selecting a small well diversified portfolio of concepts to carry forward through preliminary design, thus reducing the risk from external events that are outside of the engineer’s control. This alternate paradigm is expected to result in an increase in the overall profit by increasing the probability that the final design matches market needs at the time it goes to market. This thesis presents a portfolio based design approach, which leverages dynamic programming to enable a stochastic optimization of alternative portfolios of concepts. This optimization returns an optimized portfolio of concepts which are iteratively pruned to improve design outcomes in the presence of scenario-driven uncertainties. While dynamic programming is identified as a means for doing a stochastic portfolio optimization, dynamic programming is an analytical optimization process which suffers heavily from the curse of dimensionality. As a result, a new hybrid stochastic optimization process called the Evolutionary Cooperative Optimization with Simultaneous Independent Sub-optimization (ECOSIS) has been introduced. The ECOSIS algorithm leverages a co-evolutionary algorithm to optimize a multifaceted problem under uncertainty. ECOSIS allows for a stochastic portfolio optimization including the desired benefit-to-cost tradeoff for a well-diversified portfolio at the size and scope required for use in design problems. To demonstrate the applicability and value of a portfolio based design approach, an example application of the approach to the selection of a new 300 passenger aircraft is presented.
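As a reading aid only, the sketch below illustrates the core idea behind the portfolio approach described above: candidate sets of design concepts are scored by their per-scenario payoff rather than by a single aggregate statistic, with a risk-aversion weight trading expected profit against cross-scenario variance. It is a hypothetical stand-in, not the thesis' dynamic-programming or ECOSIS formulation, and every number in it is invented.

    # Minimal sketch of scenario-based portfolio selection of design concepts.
    # Concepts, profits, costs and the risk weight are all hypothetical.
    from itertools import combinations
    from statistics import mean, pvariance

    profit = {                      # profit[concept] = profit in each market scenario
        "A": [9.0, 1.0, -2.0],      # strong in scenario 1, weak in scenario 3
        "B": [-1.0, 2.0, 8.0],      # the mirror image of A
        "C": [3.0, 3.0, 3.0],       # mediocre but scenario-insensitive
    }
    carry_cost = 0.5                # cost of carrying one extra concept through preliminary design
    risk_aversion = 0.2             # weight on cross-scenario variance

    def portfolio_payoffs(concepts):
        # In each scenario the firm fields the best concept it carried forward.
        n_scen = len(next(iter(profit.values())))
        return [max(profit[c][s] for c in concepts) - carry_cost * (len(concepts) - 1)
                for s in range(n_scen)]

    def score(concepts):
        pay = portfolio_payoffs(concepts)
        return mean(pay) - risk_aversion * pvariance(pay)

    candidates = [c for r in (1, 2) for c in combinations(profit, r)]
    best = max(candidates, key=score)
    print("selected portfolio:", sorted(best), "score:", round(score(best), 2))

With these invented numbers the diversified pair {A, B} beats any single concept, which is the qualitative point of the portfolio paradigm.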
3

Tshimanga, Raphael Muamba. "Hydrological uncertainty analysis and scenario-based streamflow modelling for the Congo River Basin". Thesis, Rhodes University, 2012. http://hdl.handle.net/10962/d1006158.

Full text
Abstract:
The effects of climate and environmental change are likely to exacerbate water stress in Africa over the next five decades. It appears obvious, therefore, that large river basins with considerable total renewable water resources will play a prominent role in regional cooperation to alleviate the pressure of water scarcity within Africa. However, managing water resources in the large river basins of Africa involves problems of data paucity, lack of technical resources and the sheer scale of the problem. These river basins are located in regions that are characterized by poverty, low levels of economic development and little food security. The rivers provide multiple goods and services that include hydro-power, water supply, fisheries, agriculture, transportation, and maintenance of aquatic ecosystems. Sustainable water resources management is a critical issue, but there is almost always insufficient data available to formulate adequate management strategies. These basins therefore represent some of the best test cases for the practical application of the science associated with the Predictions in Ungauged Basins (PUB). The thesis presents the results of a process-based hydrological modelling study in the Congo Basin. One of the primary objectives of this study was to establish a hydrological model for the whole Congo Basin, using available historical data. The secondary objective of the study was to use the model and assess the impacts of future environmental change on water resources of the Congo Basin. Given the lack of adequate data on the basin physical characteristics, the preliminary work consisted of assessing available global datasets and building a database of the basin physical characteristics. The database was used for both assessing relationships of similarities between features of physiographic settings in the basin (Chapters 3 and 4), and establishing models that adequately represent the basin hydrology (Chapters 5, 6, and 7). The representative model of the Congo Basin hydrology was then used to assess the impacts of future environmental changes on water resources availability of the Congo Basin (Chapter 8). Through assessment of the physical characteristics of the basin, relationships of similarities were used to determine homogenous regions with regard to rainfall variability, physiographic settings, and hydrological responses. The first observation that comes from this study is that these three categories of regional groups of homogenous characteristics are sensible with regards to their geographical settings, but the overlap and apparent relationships between them are weak. An explanation of this observation is that there are insufficient data, particularly associated with defining sub-surface processes, and it is possible that additional data would have assisted in the discrimination of more homogenous groups and better links between the different datasets. The model application in this study consisted of two phases: model calibration, using a manual approach, and the application of a physically-based a priori parameter estimation approach. While the first approach was designed to assess the general applicability of the model and identify major errors with regard to input data and model structure, the second approach aimed to establish an understanding of the processes and identify useful relationships between the model parameters and the variations in real hydrological processes. 
The second approach was also designed to quantify the sensitivity of the model outputs to the parameters of the model and to encompass information sharing between the basin physical characteristics and quantifying the parameters of the model. Collectively, the study’s findings show that these two approaches work well and are appropriate to represent the real hydrological processes of Congo Basin. The secondary objective of this study was achieved by forcing the hydrological model developed for the Congo Basin with downscaled Global Climate Model (GCMs) data in order to assess scenarios of change and future possible impacts on water resources availability within the basin. The results provide useful lessons in terms of basin-wide adaptation measures to future climates. The lessons suggest that there is a risk of developing inappropriate adaptation measures to future climate change based on large scale hydrological response, as the response at small scales shows a completely different picture from that which is based on large scale predictions. While the study has concluded that the application of the hydrological model has been successful and can be used with some degree of confidence for enhanced decision making, there remain a number of uncertainties and opportunities to improve the methods used for water resources assessment within the basin. The focus of future activities from the perspective of practical application should be on improved access to data collection to increase confidence in model predictions, on dissemination of the knowledge generated by this study, and on training in the use of the developed water resources assessment techniques.
4

Robinson, Amanda Jane. "Uncertainty in hydrological scenario modelling : an investigation using the Mekong River Basin, SE Asia". Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10046108/.

Full text
Abstract:
This thesis investigates sources of uncertainty in hydrological scenario modelling. It quantifies the extent to which decisions made during the modelling process affect river flow projections under climate change. Sources of uncertainty explored include choice of: General Circulation Model (GCM) for generation of climate projections; hydrological model code; potential evapotranspiration (PET) method; spatial distribution of meteorological inputs within the hydrological model; and baseline precipitation dataset. The Mekong River Basin is employed as a case study site. Initially a MIKE SHE model is developed for the Mekong using, where possible, the same data as an earlier model (SLURP). Climate scenarios investigated include a set based on a 2 °C increase in global mean temperature simulated by seven GCMs. There are considerable differences in scenario discharges between GCMs, ranging from catchment-wide increases or decreases in mean discharge, to spatially varying responses. Inter-GCM differences are largely driven by differences in precipitation, rather than PET or temperature. Results from MIKE SHE, SLURP and Mac-PDM.09 (a global hydrological model) are compared. Although inter-hydrological model uncertainty is evident and sometimes considerable, its magnitude is generally smaller than GCM uncertainty. The MIKE SHE model is then recalibrated to provide five further models, each employing alternative PET methods. PET method impacts scenario changes in PET and hence scenario discharges. However, GCM-related uncertainty for change in mean discharge is on average ~3.5 times greater than PET method-related uncertainty. Additional MIKE SHE models are developed using alternative meteorological input spatial distributions and an alternative baseline precipitation dataset. These sources of uncertainty are comparable in magnitude; both are much smaller than PET- and GCM-related uncertainty. Climate impact assessment using one MIKE SHE model and an ensemble of 41 CMIP5 GCMs for the RCP4.5 scenario provides further confirmation that GCM-related uncertainty is the dominant source of uncertainty for Mekong river flow projections.
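The relative magnitudes reported above can be summarised with a simple spread comparison. In the sketch below the change-in-mean-discharge values for seven GCMs and six PET methods are invented; only the way the ratio is computed is illustrative.

    # Hypothetical sketch: compare the spread of projected change in mean discharge
    # attributable to GCM choice versus PET-method choice (all values invented).
    gcm_change_pct = {"GCM1": -8.0, "GCM2": 3.0, "GCM3": 12.0, "GCM4": -2.0,
                      "GCM5": 6.0, "GCM6": 15.0, "GCM7": -5.0}
    pet_change_pct = {"PET1": 4.0, "PET2": 8.5, "PET3": 2.0,
                      "PET4": 5.0, "PET5": 3.5, "PET6": 7.0}

    def spread(values):
        return max(values) - min(values)

    gcm_range = spread(gcm_change_pct.values())
    pet_range = spread(pet_change_pct.values())
    print(f"GCM-related range: {gcm_range:.1f} percentage points")
    print(f"PET-method range:  {pet_range:.1f} percentage points")
    print(f"ratio (GCM / PET): {gcm_range / pet_range:.1f}")   # the thesis reports ~3.5 on average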
5

Mahadevan, Srisudha. "Network Selection Algorithm for Satisfying Multiple User Constraints Under Uncertainty in a Heterogeneous Wireless Scenario". University of Cincinnati / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1302550606.

Full text
6

Calfa, Bruno Abreu. "Data Analytics Methods for Enterprise-wide Optimization Under Uncertainty". Research Showcase @ CMU, 2015. http://repository.cmu.edu/dissertations/575.

Full text
Abstract:
This dissertation primarily proposes data-driven methods to handle uncertainty in problems related to Enterprise-wide Optimization (EWO). Data-driven methods are characterized by the direct use of data (historical and/or forecast) in the construction of models for the uncertain parameters that naturally arise from real-world applications. Such uncertainty models are then incorporated into the optimization model describing the operations of an enterprise. Before addressing uncertainty in EWO problems, Chapter 2 deals with the integration of deterministic planning and scheduling operations of a network of batch plants. The main contributions of this chapter include the modeling of sequence-dependent changeovers across time periods for a unit-specific general precedence scheduling formulation, the hybrid decomposition scheme using Bilevel and Temporal Lagrangean Decomposition approaches, and the solution of subproblems in parallel. Chapters 3 to 6 propose different data analytics techniques to account for stochasticity in EWO problems. Chapter 3 deals with scenario generation via statistical property matching in the context of stochastic programming. A distribution matching problem is proposed that addresses the under-specification shortcoming of the originally proposed moment matching method. Chapter 4 deals with data-driven individual and joint chance constraints with right-hand side uncertainty. The distributions are estimated with kernel smoothing and are considered to be in a confidence set, which is also considered to contain the true, unknown distributions. The chapter proposes the calculation of the size of the confidence set based on the standard errors estimated from the smoothing process. Chapter 5 proposes the use of quantile regression to model production variability in the context of Sales & Operations Planning. The approach relies on available historical data of actual vs. planned production rates from which the deviation from plan is defined and considered a random variable. Chapter 6 addresses the combined optimal procurement contract selection and pricing problems. Different price-response models, linear and nonlinear, are considered in the latter problem. Results show that setting selling prices in the presence of uncertainty leads to the use of different purchasing contracts.
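For readers unfamiliar with scenario generation by property matching (Chapter 3 above), the minimal sketch below finds a handful of equally likely scenario values whose mean and standard deviation reproduce target statistics. With only two moments and five free values the fit is under-specified, which is precisely the shortcoming the dissertation's distribution matching formulation addresses; the targets and scenario count here are invented.

    # Minimal property-matching sketch: equally likely scenarios matching a target
    # mean and standard deviation of a single uncertain parameter (e.g. demand).
    import numpy as np
    from scipy.optimize import minimize

    target_mean, target_std = 100.0, 15.0
    n_scenarios = 5

    def mismatch(x):
        # Squared error between the scenario statistics and the targets.
        return (x.mean() - target_mean) ** 2 + (x.std() - target_std) ** 2

    x0 = np.linspace(target_mean - target_std, target_mean + target_std, n_scenarios)
    res = minimize(mismatch, x0, method="Nelder-Mead")
    scenarios = np.sort(res.x)
    print("scenario values:", np.round(scenarios, 2))
    print("matched mean/std:", round(scenarios.mean(), 2), round(scenarios.std(), 2))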
7

Hollmann, Dominik. "Supply chain network design under uncertainty and risk". Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/6407.

Full text
Abstract:
We consider the research problem of quantitative support for decision making in supply chain network design (SCND). We first identify the requirements for a comprehensive SCND as (i) a methodology to select uncertainties, (ii) a stochastic optimisation model, and (iii) an appropriate solution algorithm. We propose a process to select a manageable number of uncertainties to be included in a stochastic program for SCND. We develop a comprehensive two-stage stochastic program for SCND that includes uncertainty in demand, currency exchange rates, labour costs, productivity, supplier costs, and transport costs. Also, we consider conditional value at risk (CV@R) to explore the trade-off between risk and return. We use a scenario generator based on moment matching to represent the multivariate uncertainty. The resulting stochastic integer program is computationally challenging and we propose a novel iterative solution algorithm called adaptive scenario refinement (ASR) to process the problem. We describe the rationale underlying ASR, validate it for a set of benchmark problems, and discuss the benefits of the algorithm applied to our SCND problem. Finally, we demonstrate the benefits of the proposed model in a case study and show that multiple sources of uncertainty and risk are important to consider in the SCND. Whereas in the literature most research is on demand uncertainty, our study suggests that exchange rate uncertainty is more important for the choice of optimal supply chain strategies in international production networks. The SCND model and the use of the coherent downside risk measure in the stochastic program are innovative and novel; these and the ASR solution algorithm taken together make contributions to knowledge.
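The risk/return trade-off mentioned above rests on conditional value at risk. The sketch below computes a sample CV@R over hypothetical second-stage scenario costs using the standard tail-expectation estimator; it is only the risk measure, not the thesis' SCND model or the ASR algorithm, and the cost and probability values are invented.

    # Sample CV@R of scenario costs: expected cost in the worst (1 - alpha) tail.
    import numpy as np

    def cvar(costs, probs, alpha=0.95):
        costs, probs = np.asarray(costs, float), np.asarray(probs, float)
        order = np.argsort(costs)[::-1]                  # worst scenarios first
        c, p = costs[order], probs[order]
        tail = 1.0 - alpha                               # probability mass in the tail
        mass_before = np.concatenate(([0.0], np.cumsum(p)[:-1]))
        taken = np.minimum(p, np.maximum(tail - mass_before, 0.0))
        return float(np.dot(taken, c) / tail)

    costs = [120.0, 95.0, 80.0, 150.0, 110.0]            # hypothetical second-stage costs
    probs = [0.2, 0.2, 0.2, 0.2, 0.2]
    print("expected cost:", float(np.dot(costs, probs)))
    print("CV@R(95%):", round(cvar(costs, probs, 0.95), 1))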
8

Persson, Klas. "Quantifying pollutant spreading and the risk of water pollution in hydrological catchments : A solute travel time-based scenario approach". Doctoral thesis, Stockholms universitet, Institutionen för naturgeografi och kvartärgeologi (INK), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-63465.

Full text
Abstract:
The research presented in the thesis develops an approach for the estimation and mapping of pollutant spreading in catchments and the associated uncertainty and risk of pollution. The first step in the approach is the quantification and mapping of statistical and geographical distributions of advective solute travel times from pollutant input locations to downstream recipients. In the second step the travel time distributions are used to quantify and map the spreading of specific pollutants and the related risk of water pollution. In both steps, random variability of transport properties and processes is accounted for within a probabilistic framework, while different scenarios are used to account for statistically unquantifiable uncertainty about system characteristics, processes and future developments. This scenario approach enables a transparent analysis of uncertainty effects that is relatively easy to interpret. It also helps identify conservative assumptions and pollutant situations for which further investigations are most needed in order to reduce the uncertainty. The results for different investigated scenarios can further be used to assess the total risk to exceed given water quality standards downstream of pollutant sources. Specific thesis results show that underestimation of pollutant transport variability, and in particular of those transport pathways with much shorter than average travel times, may lead to substantial underestimation of pollutant spreading in catchment areas. By contrast, variations in pollutant attenuation rate generally lead to lower estimated spreading than do constant attenuation conditions. A scenario of constant attenuation rate and high travel time variability, with a large fraction of relatively short travel times, therefore appears to be a reasonable conservative scenario to use when information is lacking for more precise determination of actual transport and attenuation conditions.
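A minimal sketch of the second step described above, under assumed parameters: advective travel times are sampled from a lognormal distribution and combined with a constant first-order attenuation rate to estimate the delivered pollutant fraction and the probability of exceeding a threshold. The distribution choice, rate and threshold are invented, not taken from the thesis.

    # Travel-time-based attenuation sketch (all parameters hypothetical).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    mean_T, cv_T = 5.0, 1.0                       # mean travel time [years], high variability
    sigma = np.sqrt(np.log(1 + cv_T**2))          # lognormal shape from coefficient of variation
    mu = np.log(mean_T) - 0.5 * sigma**2
    T = rng.lognormal(mu, sigma, n)               # advective travel times to the recipient

    k = 0.3                                       # constant attenuation rate [1/year]
    delivered = np.exp(-k * T)                    # fraction of input mass reaching the recipient
    threshold = 0.5                               # delivered fraction deemed unacceptable

    print("expected delivered fraction:", round(delivered.mean(), 3))
    print("P(delivered fraction > threshold):", round((delivered > threshold).mean(), 3))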
9

Valente, Christian. "Design and architecture of a stochastic programming modelling system". Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/6249.

Full text
Abstract:
Decision making under uncertainty is an important yet challenging task; a number of alternative paradigms which address this problem have been proposed. Stochastic Programming (SP) and Robust Optimization (RO) are two such modelling approaches, which we consider; these are natural extensions of Mathematical Programming modelling. The process that goes from the conceptualization of an SP model to its solution and the use of the optimization results is complex in respect to its deterministic counterpart. Many factors contribute to this complexity: (i) the representation of the random behaviour of the model parameters, (ii) the interfacing of the decision model with the model of randomness, (iii) the difficulty in solving (very) large model instances, (iv) the requirements for result analysis and performance evaluation through simulation techniques. An overview of the software tools which support stochastic programming modelling is given, and a conceptual structure and the architecture of such tools are presented. This conceptualization is presented as various interacting modules, namely (i) scenario generators, (ii) model generators, (iii) solvers and (iv) performance evaluation. Reflecting this research, we have redesigned and extended an established modelling system to support modelling under uncertainty. The collective system which integrates this otherwise disparate set of model formulations within a common framework is innovative and makes the resulting system a powerful modelling tool. The introduction of scenario generation in the ex-ante decision model and the integration with simulation and evaluation for the purpose of ex-post analysis by the use of workflows is novel and makes a contribution to knowledge.
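The four interacting modules listed above can be pictured as a thin pipeline. The sketch below wires together a toy scenario generator, model generator, solver and ex-post evaluator for a newsvendor-style decision; the interfaces, cost data and candidate decisions are hypothetical, not those of the actual modelling system.

    # Toy pipeline: scenario generation -> model generation -> solve -> ex-post evaluation.
    import random
    from statistics import mean

    def scenario_generator(n, seed=0):
        rnd = random.Random(seed)
        return [{"demand": rnd.gauss(100.0, 20.0), "prob": 1.0 / n} for _ in range(n)]

    def model_generator(scenarios, unit_cost=2.0, shortfall_penalty=5.0):
        # Returns the objective: expected cost of a "here and now" production level x.
        def expected_cost(x):
            return unit_cost * x + shortfall_penalty * sum(
                s["prob"] * max(s["demand"] - x, 0.0) for s in scenarios)
        return expected_cost

    def solver(objective, candidates):
        return min(candidates, key=objective)        # stand-in for an LP/MIP solver

    def evaluate(decision, out_of_sample):
        # Ex-post simulation of the fixed decision on fresh scenarios (same cost data).
        return mean(2.0 * decision + 5.0 * max(s["demand"] - decision, 0.0)
                    for s in out_of_sample)

    scen = scenario_generator(200)
    x_opt = solver(model_generator(scen), candidates=range(60, 161, 5))
    print("chosen production level:", x_opt)
    print("out-of-sample expected cost:", round(evaluate(x_opt, scenario_generator(500, seed=1)), 1))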
10

Minton, Mark A. "Uncertainty and sensitivity analysis of a fire-induced accident scenario involving binary variables and mechanistic codes". Thesis, Cambridge Massachusetts Institute of Technology, 2010. http://hdl.handle.net/10945/4939.

Full text
Abstract:
Approved for public release; distribution is unlimited
In response to the transition by the United States Nuclear Regulatory Commission (NRC) to a risk-informed, performance-based fire protection rulemaking standard, Fire Probabilistic Risk Assessment (PRA) methods have been improved, particularly in the areas of advanced fire modeling and computational methods. In order to gain a more meaningful insight into the methods currently in practice, it was decided that a scenario incorporating the various elements of uncertainty specific to a fire PRA would be analyzed. Fire induced Main Control Room (MCR) abandonment scenarios are a significant contributor to the total Core Damage Frequency (CDF) estimate of many operating nuclear power plants. This report details the simultaneous application of state-of-the-art model and parameter uncertainty techniques to develop a defensible distribution of the probability of a forced MCR abandonment caused by a fire within a MCR benchboard.
11

Minton, Mark A. (Mark Aaron). "Uncertainty and sensitivity analysis of a fire-induced accident scenario involving binary variables and mechanistic codes". Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/76589.

Full text
Abstract:
Thesis (Nucl. E. and S.M.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2010.
"September 2010." Cataloged from PDF version of thesis.
Includes bibliographical references (p. 72-74).
In response to the transition by the United States Nuclear Regulatory Commission (NRC) to a risk-informed, performance-based fire protection rulemaking standard, Fire Probabilistic Risk Assessment (PRA) methods have been improved, particularly in the areas of advanced fire modeling and computational methods. As the methods for the quantification of fire risk are improved, the methods for the quantification of the uncertainties must also be improved. In order to gain a more meaningful insight into the methods currently in practice, it was decided that a scenario incorporating the various elements of uncertainty specific to a fire PRA would be analyzed. The NRC has validated and verified five fire models to simulate the effects of fire growth and propagation in nuclear power plants. Although these models cover a wide range of sophistication, epistemic uncertainties resulting from the assumptions and approximations used within the model are always present. The uncertainty of a model prediction is not only dependent on the uncertainties of the model itself, but also on how the uncertainties in input parameters are propagated throughout the model. Inputs to deterministic fire models are often not precise values, but instead follow statistical distributions. The fundamental motivation for assessing model and parameter uncertainties is to combine the results in an effort to calculate a cumulative probability of exceeding a given threshold. This threshold can be for equipment damage, time to alarm, habitability of spaces, etc. Fire growth and propagation is not the only source of uncertainty present in a fire-induced accident scenario. Statistical models are necessary to develop estimates of fire ignition frequency and the probability that a fire will be suppressed. Human Reliability Analysis (HRA) is performed to determine the probability that operators will correctly perform manual actions even with the additional complications of a fire present. Fire induced Main Control Room (MCR) abandonment scenarios are a significant contributor to the total Core Damage Frequency (CDF) estimate of many operating nuclear power plants. Many of the resources spent on fire PRA are devoted to quantification of the probability that a fire will force operators to abandon the MCR and take actions from a remote location. However, many current PRA practitioners feel that the effect of MCR fires has been overstated. This report details the simultaneous application of state-of-the-art model and parameter uncertainty techniques to develop a defensible distribution of the probability of a forced MCR abandonment caused by a fire within a MCR benchboard. These results are combined with the other elements of uncertainty present in a fire-induced MCR abandonment scenario to develop a CDF distribution that takes into account the interdependencies between the factors. In addition, the input factors having the strongest influence on the final results are identified so that operators, regulators, and researchers can focus their efforts to mitigate the effects of this class of fire-induced accident scenario.
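To give a feel for how parameter uncertainty, suppression and ignition frequency combine, the sketch below propagates uncertain inputs through a placeholder temperature correlation and multiplies the resulting conditional probability with assumed event probabilities. None of the distributions, constants or thresholds comes from the thesis or from the NRC-validated fire models; they are purely illustrative.

    # Placeholder Monte Carlo: uncertain fire inputs -> habitability -> abandonment frequency.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 50_000
    Q = rng.lognormal(mean=np.log(200.0), sigma=0.4, size=n)   # peak heat release rate [kW], assumed
    C = rng.normal(12.0, 1.5, n)                               # surrogate model coefficient, assumed
    T_rise = C * Q ** (2.0 / 3.0) / 10.0                       # stand-in hot-gas-layer temperature rise [C]

    p_untenable = (T_rise > 80.0).mean()        # parameter/model uncertainty -> habitability criterion
    p_suppress_fail = 0.3                       # assumed probability suppression does not succeed
    fire_freq = 1.0e-3                          # assumed benchboard fire ignition frequency [1/yr]

    abandon_freq = fire_freq * p_suppress_fail * p_untenable
    print(f"P(untenable | fire): {p_untenable:.3f}")
    print(f"MCR abandonment frequency: {abandon_freq:.2e} per year")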
by Mark A. Minton.
Nucl.E.and S.M.
12

Stanford, Joseph S. M. (Joseph Marsh) Massachusetts Institute of Technology. "Possible futures for fully automated vehicles : using scenario planning and system dynamics to grapple with uncertainty". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/105319.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, Engineering Systems Division, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 146-148).
It is widely expected that fully automated vehicles (also commonly referred to as "driverless" or "self-driving" cars) will significantly change transportation systems in the United States and around the world. By reducing or eliminating many of the costs and disincentives of travel by automobile, these vehicles may have the potential to radically alter many of the inherent dynamics that have governed transportation systems since the advent of the automobile. To date, however, there has been very little structured analysis of these potential changes. Most of the existing literature addresses the technical challenges facing vehicle automation technology or considers immediate effects on the transportation system, usually analyzing single effects in isolation. Very little attention appears to have been paid to multiple simultaneous interactions that may occur across the transportation system and potential feedback effects that may arise among elements of the system. This thesis examines how the transportation system might react to the widespread introduction of fully automated vehicles (AVs), specifically considering how these reactions will affect total usage of automobiles, as measured by vehicle miles traveled (VMT). For the purpose of this thesis, the system boundary is broadly drawn: potential system responses are considered within the transportation system itself (consisting of existing users, vehicles, and infrastructure) and the "macro-system" (which includes broader economic, regulatory, social, and political dimensions). To address the wide range of uncertainties involved, scenario-planning techniques are used to develop and explore three scenarios that span a range of important variables. Within each scenario, system dynamics methodology is used to explore potential system reactions to the scenario assumptions and to consider the ultimate implications for VMT. The main insight from this analysis is that unstable responses (rapid movement to the extremes) appear more likely than steady transitions to "moderate" states. When the scenarios assume behavior can change substantially, the structure of the system suggests either that strong and growing forces will cause automobiles to become even more dominant over other modes than they are today (and VMT will rise dramatically), or public transit will become increasingly more appealing and assume a growing role (and VMT will drop substantially). The challenge of predicting the underlying behavioral changes is substantial: Who can say with any certainty how people will use a technology that provides point-to-point, self-directed, self-scheduled travel, with no requirement for attention or effort by a human occupant, potentially at higher speeds, in greater comfort, and with safer operation than today's automobiles? There are simply not enough existing data and no precedent for such analysis. Given the potential for unstable outcomes, depending on the desired outcome, it may be critical for policy-makers to consider the initial conditions of AV deployment, as these may have a substantial impact on the transportation system over the long term.
by Joseph Stanford.
S.M. in Engineering and Management
13

Apap, Robert M. "Models and Computational Strategies for Multistage Stochastic Programming under Endogenous and Exogenous Uncertainties". Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1002.

Full text
Abstract:
This dissertation addresses the modeling and solution of mixed-integer linear multistage stochastic programming problems involving both endogenous and exogenous uncertain parameters. We propose a composite scenario tree that captures both types of uncertainty, and we exploit its unique structure to derive new theoretical properties that can drastically reduce the number of non-anticipativity constraints (NACs). Since the reduced model is often still intractable, we discuss two special solution approaches. The first is a sequential scenario decomposition heuristic in which we sequentially solve endogenous MILP subproblems to determine the binary investment decisions, fix these decisions to satisfy the first-period and exogenous NACs, and then solve the resulting model to obtain a feasible solution. The second approach is Lagrangean decomposition. We present numerical results for a process network planning problem and an oilfield development planning problem. The results clearly demonstrate the efficiency of the special solution methods over solving the reduced model directly. To further generalize this work, we also propose a graph-theory algorithm for non-anticipativity constraint reduction in problems with arbitrary scenario sets. Finally, in a break from the rest of the thesis, we present the basics of stochastic programming for non-expert users.
14

Rizakou, Eleni. "Scenario-robustness methodology : an approach to flexible planning under uncertainty with an application to AIDS-related resource allocation". Thesis, London School of Economics and Political Science (University of London), 1995. http://etheses.lse.ac.uk/1368/.

Full text
Abstract:
In this thesis the problem of planning under uncertainty is examined. A classification of uncertainty is given with the purpose of identifying those areas where traditional methods for planning under uncertainty fail to prescribe suitable courses of action. Traditional planning methods have increasingly proved inadequate in their handling of the uncertainty inherent in complex and turbulent environments. Methodologies suitable to planning under uncertainty should attempt to preserve future flexibility, by keeping options open for later resolution. This thesis describes the development of Scenario-Robustness Methodology (SRM), a flexible methodology for planning under uncertainty. SRM uses scenario analysis to develop alternative futures, and robustness analysis to determine the most flexible options under those futures, for both the short and long term. A new criterion is proposed for evaluating the consequences of initial decisions in terms both of the positive options which are maintained and of the undesirable options still left open. This criterion is a composite measure which enables decision-makers to give relative weights to positive outcomes (robustness) or negative outcomes (debility), by varying a key parameter. A number of alternative measures of uncertainty which may be employed in a planning situation characterized by a set of initial decisions and a set of alternative future scenarios, are also examined. The coefficient of concordance W is found to be the most useful of such measures. An example is given of the application of SRM to an HIV/AIDS-related resource allocation problem. Planning for HIV/AIDS is selected as a suitable area of application because of the uncertainties surrounding the nature of the disease, the availability of treatments and their timing, and the size of the planned for population. SRM is used to assist in structuring the problem and to identify those initial commitments which are preferable in terms of flexibility. The problem structuring capability of SRM is of particular value since it initiates a process of reflection and negotiation which helps to incorporate in the analysis, in addition to flexibility, other relevant factors which will shape the final selection of an appropriate course of action.
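The composite robustness/debility criterion can be pictured with a toy computation: for each initial decision, the desirable and undesirable end-configurations still attainable under each scenario are counted and combined with a weighting parameter. The decisions, scenarios, sets and weight below are invented and only mirror the idea; they are not the exact measure defined in the thesis.

    # Toy robustness/debility score for initial decisions under alternative futures.
    # reachable[d][s] = end-configurations still attainable after decision d in scenario s.
    reachable = {
        "d1": {"s1": {"good_A", "good_B", "bad_X"}, "s2": {"good_A"}},
        "d2": {"s1": {"good_A"},                    "s2": {"good_B", "bad_X", "bad_Y"}},
    }
    desirable = {"good_A", "good_B"}
    undesirable = {"bad_X", "bad_Y"}
    w = 0.7   # relative weight on keeping good options open vs. leaving bad ones open

    def score(decision):
        rob, deb = 0.0, 0.0
        scenarios = reachable[decision]
        for ends in scenarios.values():
            rob += len(ends & desirable) / len(desirable)      # robustness contribution
            deb += len(ends & undesirable) / len(undesirable)  # debility contribution
        n = len(scenarios)
        return w * rob / n - (1.0 - w) * deb / n

    for d in reachable:
        print(d, round(score(d), 3))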
15

Sdobnova, Alena, and Jakub Blaszkiewicz. "Analysis of An Uncertain Volatility Model in the framework of static hedging for different scenarios". Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-2199.

Full text
Abstract:

In the Black-Scholes model, the parameters, a volatility and an interest rate, were assumed to be constants. In this thesis we concentrate on the behaviour of the volatility as a function and we find more realistic models for the volatility, which eliminate the risk connected with the behaviour of the volatility of an underlying asset. That is the reason why we study the Uncertain Volatility Model. In Chapter 1 we give a theoretical introduction to the Uncertain Volatility Model introduced by Avellaneda, Levy and Paras and study how it behaves in different scenarios. In Chapter 2 we choose one of the scenarios. We also introduce the BSB equation and try to make some modifications to narrow the uncertainty bands using the idea of static hedging. In Chapter 3 we try to construct a proper portfolio for the static hedging and compare the theoretical results with real market data from the Stockholm Stock Exchange.
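The BSB (Black-Scholes-Barenblatt) idea mentioned above can be sketched in a few lines: in a backward finite-difference sweep, the worst-case volatility is chosen pointwise from the band [sig_min, sig_max] according to the sign of the local gamma. The crude explicit scheme below prices a call spread (whose gamma changes sign, so the band actually matters); all parameters are invented and no static hedge is included.

    # Worst-case (seller's) price of a call spread under the Uncertain Volatility Model,
    # explicit finite differences; parameters and grid are illustrative only.
    import numpy as np

    r, T = 0.02, 0.5
    sig_min, sig_max = 0.15, 0.35
    K1, K2 = 90.0, 110.0                          # long call K1, short call K2
    S = np.linspace(0.0, 300.0, 301)
    dS = S[1] - S[0]
    dt = 0.4 * dS**2 / (sig_max**2 * S[-1]**2)    # small explicit step for stability
    n_steps = int(np.ceil(T / dt))
    dt = T / n_steps

    V = np.maximum(S - K1, 0.0) - np.maximum(S - K2, 0.0)   # terminal payoff
    for j in range(1, n_steps + 1):
        gamma = np.zeros_like(V)
        delta = np.zeros_like(V)
        gamma[1:-1] = (V[2:] - 2.0 * V[1:-1] + V[:-2]) / dS**2
        delta[1:-1] = (V[2:] - V[:-2]) / (2.0 * dS)
        sig = np.where(gamma >= 0.0, sig_max, sig_min)       # worst case for the seller
        V = V + dt * (0.5 * sig**2 * S**2 * gamma + r * S * delta - r * V)
        V[0] = 0.0
        V[-1] = (K2 - K1) * np.exp(-r * j * dt)              # deep in-the-money boundary

    print("worst-case (ask) price of the spread at S = 100:",
          round(float(np.interp(100.0, S, V)), 2))

For a vanilla call the gamma never changes sign, so the worst-case price simply collapses to the Black-Scholes price at the upper volatility; mixed-gamma positions such as this spread are where the band, and hence static hedging to narrow it, becomes interesting.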

16

Alkhairy, Ibrahim H. "Designing and Encoding Scenario-based Expert Elicitation for Large Conditional Probability Tables". Thesis, Griffith University, 2020. http://hdl.handle.net/10072/390794.

Full text
Abstract:
This thesis focuses on the general problem of asking experts to assess the likelihood of many scenarios, when there is insufficient time to ask about all possible scenarios. The challenge addressed here is one of experimental design: How to choose which scenarios are assessed; How to use that limited data to extrapolate information about the scenarios that remain unasked? In a mathematical sense, this problem can be constructed as a problem of expert elicitation, where experts are asked to quantify conditional probability tables (CPTs). Experts may be relied on, for example in the situation when empirical data is unavailable or limited. CPTs are used widely in statistical modelling to describe probabilistic relationships between an outcome and several factors. I consider two broad situations where CPTs are important components of quantitative models. Firstly experts are often asked to quantify CPTs that form the building blocks of Bayesian Networks (BNs). In one case study, CPTs describe how habitat suitability of feral pigs is related to various environmental factors, such as water quality and food availability. Secondly CPTs may also support a sensitivity analysis for large computer experiments, by examining how some outcome changes, as various factors are changed. Another case study uses CPTs to examine sensitivity to settings, for algorithms available through virtual laboratories, to map the geographic distribution of species such as the koala. An often-encountered problem is the sheer amount of information asked of the expert: the number of scenarios. Each scenario corresponds to a row of the CPT, and concerns a particular combination of factors, and the likely outcome. Currently most researchers arrange elicitation of CPTs by keeping the number of rows and columns in the CPT to a minimum, so that they need to ask experts about no more than twenty or so scenarios. However in some practical problems, CPTs may need to involve more rows and columns, for example involving more than two factors, or factors which can take on more than two or three possible values. Here we propose a new way of choosing scenarios, that underpin the elicitation strategy, by taking advantage of experimental design to: ensure adequate coverage of all scenarios; and to make best use of the scarce resources like the valuable time of the experts. I show that this can be essentially constructed as a problem of how to better design choice of scenarios to elicit from a CPT. The main advantages of these designs are that they explore more of the design space compared to usual design choices like the one-factor-at-a-time (OFAT) design that underpins the popular encoding approach embedded in "CPT Calculator". In addition this work tailors an under-utilized scenario-based elicitation method to ensure that the expert's uncertainty was captured, together with their assessments, of the likelihood of each possible outcome. I adopt the more intuitive Outside-In Elicitation method to elicit the expert's plausible range of assessed values, rather than the more common and reverse-order approach of eliciting their uncertainty around their best guess. Importantly this plausible range of values is more suitable for input into a new approach that was proposed for encoding scenario-based elicitation: Bayesian (rather than a Frequentist) interpretation. Whilst eliciting some scenarios from large CPTs, another challenge arises from the remaining CPT entries that are not elicited.
This thesis shows how to adopt a statistical model to interpolate not only the missing CPT entries but also quantify the uncertainty for each scenario, which is new for these two situations: BNs and sensitivity analyses. For this purpose, I introduce the use of Bayesian generalized linear models (GLMs). The Bayesian updating framework also enables us to update the results of elicitation, by incorporating empirical data. The idea is to utilise scenarios elicited from experts to construct an informative Bayesian "prior" model. Then the prior information (e.g. about scenarios) is combined with the empirical data (e.g. from computer model runs), to update the posterior estimates of plausible outcomes (affecting all scenarios). The main findings showed that Bayesian inference suits the small data problem of encoding the expert's mental model underlying their assessments, allowing uncertainty to vary about each scenario. In addition Bayesian inference provides rich feedback to the modeller and experts on the plausible influence of factors on the response, and whether any information was gained on their interactions. That information could be pivotal to designing the next phase of elicitation about habitat requirements or another phase of computer models. In this way, the Bayesian paradigm naturally supports a sequential approach to gradually accruing information about the issue at hand. As summarised above, the novel statistical methodology presented in this thesis also contributes to computer science. Specifically computation for Bayesian Networks and sensitivity analyses of large computer experiments can be re-designed to be more efficient. Here the expert knowledge is useful to complement the empirical data to inform a more comprehensive analysis.
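As a rough illustration of interpolating unelicited CPT rows, the sketch below fits a main-effects model on the logit scale to a handful of elicited rows and predicts the remaining ones. A plain least-squares fit stands in for the Bayesian GLM used in the thesis (so no uncertainty is quantified here), and the factors, levels and elicited probabilities are all invented.

    # Interpolate a CPT column from a small elicited subset (simplified stand-in).
    import numpy as np
    from itertools import product

    levels = [0, 1, 2]                          # three ordered levels for each of 3 factors
    all_rows = np.array(list(product(levels, repeat=3)), dtype=float)   # 27 CPT rows

    # Elicited subset: scenario (factor levels) -> expert's probability of a good outcome.
    elicited = {(0, 0, 0): 0.05, (2, 2, 2): 0.95, (0, 2, 1): 0.45,
                (2, 0, 1): 0.55, (1, 1, 1): 0.50, (2, 1, 0): 0.40}

    X = np.array([[1.0, *row] for row in elicited])                      # intercept + main effects
    y = np.array([np.log(p / (1 - p)) for p in elicited.values()])       # logit of elicited prob
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    X_all = np.hstack([np.ones((len(all_rows), 1)), all_rows])
    p_all = 1.0 / (1.0 + np.exp(-X_all @ beta))                          # interpolated CPT column
    for row, p in list(zip(all_rows, p_all))[:5]:
        print(tuple(int(v) for v in row), "->", round(float(p), 2))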
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Info & Comm Tech
Science, Environment, Engineering and Technology
17

Beer, Simon. "Managing uncertainty in production system design during early-phase product development : A scenario-based case study within the commercial vehicle industry". Thesis, KTH, Industriell produktion, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273185.

Full text
Abstract:
Background Production systems of automotive companies are exposed to various change drivers in increasingly volatile market environments. In order to manage the different areas of complexity, time, costs and quality it is necessary to continually adapt and reconsider production systems. Besides other company internal and external change drivers, product development projects initiate such adaption measures in production systems design. Characteristic for production system design activities during early-phase product development is the inadequate data and information basis. Existing methodologies for production system design assume an already existing production system and the optimization towards specific key performance indicators. Other methodologies that accompany the product development process focus solely on specific production process aspects. Consequently, a methodology for holistic determination and consideration of uncertainties in production system design during early-phase production development is needed. Aim The aim of the thesis is therefore to develop, apply and evaluate a framework that enables a structured concretization of uncertainties concerning product development projects and other company external and internal uncertainties. Furthermore, the framework should facilitate identifying which impact those uncertainties could have on the existing production system and how to develop and evaluate countermeasures. Samples The study was conducted within the commercial vehicle industry and in cooperation with Scania CV AB located in Södertälje, Sweden as an industrial research partner. Methods Based on a literature review and in consideration of the needs of the industrial research partner, the requirements on the framework were defined. The individual sections of the framework incorporate adaptations of existing methodologies as well as methods purpose-developed in the course of the thesis. The application of the framework took place within the department of transmission manufacturing in the course of a new transmission development project. Results The results show that through the application of the framework, uncertainties connected to a new transmission development project were significantly reduced. Affected production system areas were identified and adequate measures in production system design were initiated. Conclusions The developed framework enables managing uncertainties in production system design during early-phase product development. Furthermore, the structured way of working and documentation increases the cooperation and communication between product and production development as well as the overall transparency within the development project.
18

Sheikh, Hussin Siti Aida. "Employees Provident Fund (EPF) Malaysia : generic models for asset and liability management under uncertainty". Thesis, Brunel University, 2012. http://bura.brunel.ac.uk/handle/2438/7505.

Full text
Abstract:
We describe Employees Provident Funds (EPF) Malaysia. We explain about Defined Contribution and Defined Benefit Pension Funds and examine their similarities and differences. We also briefly discuss and compare EPF schemes in four Commonwealth countries. A family of Stochastic Programming Models is developed for the Employees Provident Fund Malaysia. This is a family of ex-ante decision models whose main aim is to manage, that is, balance assets and liabilities. The decision models comprise Expected Value Linear Programming, Two Stage Stochastic Programming with recourse, Chance Constrained Programming and Integrated Chance Constraints Programming. For the last three decision models we use scenario generators which capture the uncertainties of asset returns, salary contributions and lump sum liabilities payments. These scenario generation models for assets and liabilities were developed and calibrated using historical data. The resulting decisions are evaluated with in-sample analysis using typical risk adjusted performance measures. Out-of-sample testing is also carried out with a larger set of generated scenarios. The benefits of two stage stochastic programming over deterministic approaches on asset allocation as well as the amount of borrowing needed for each pre-specified growth dividend are demonstrated. The contributions of this thesis are i) an insightful overview of EPF ii) construction of scenarios for assets returns and liabilities with different values of growth dividend, that combine the Markov population model with the salary growth model and retirement payments iii) construction and analysis of generic ex-ante decision models taking into consideration uncertain asset returns and uncertain liabilities iv) testing and performance evaluation of these decisions in an ex-post setting.
19

BORZOOEI, SINA. "Model-based optimization and scenario planning for a large-scale water resource recovery facility". Doctoral thesis, Politecnico di Torino, 2018. http://hdl.handle.net/11583/2708023.

Full text
Abstract:
Several configurations of biological nutrient removal (BNR) systems have been developed to improve nutrient removal and meet recently established stringent effluent permits for wastewater treatment plants (WWTPs). Since the majority of BNR configurations are modified versions of the conventional activated sludge process (ASP), they are simple to implement and cost-effective, and BNR methods have therefore gained great popularity. However, the complex, nonlinear and dynamic nature of the biological and biochemical processes which take place in these systems makes controlling their performance a challenging and not straightforward task. Mathematical models provide valuable evaluation and decision-making tools for scientists and wastewater engineers to move forward to the optimization of various wastewater treatment processes, including BNR. Due to the complicated nature of the modelled processes, however, the application of such models to a full-scale WWTP is demanding. Since the full data collection required for WWTP modelling projects is an expensive and time-consuming task, data scarcity is the prevailing problem. Additionally, since each WWTP is unique in its operational and design conditions, the pathway through which it is modelled is also unique, challenging, and worth investigating. This thesis presents a stepwise approach for the model-based optimization and practical scenario planning of the BNR activated sludge system of the Castiglione Torinese WRRF in Italy. The work builds on operational results of the plant from January 2009 to December 2016, in addition to information from the limited number of measurements and sampling campaigns conducted during this study. The knowledge obtained was further integrated in the implementation of a WWTP model combining a process model for the active reactors, a 1-D Takács model for the secondary clarifiers, and aeration, pumping and mixing energy consumption models, so that the plant performance could be dynamically simulated. The uncertainty of the calibrated models was investigated by Monte Carlo analysis. For further investigation of the plant performance, a performance assessment criterion (PAC) was defined, consisting of two main types of parameters concerning effluent quality and energy consumption and production. As a part of the optimization study, the impact of the solids retention time (SRT) was analysed for SRT values of 10, 15, 20, 25, 30, 35 and 40. For scenario planning, the results obtained from the SRT scenario analysis were implemented. In general, two types of scenarios were considered for various operational modes: first, scenarios proposed to rectify deviations of the operational parameters from their limits; second, practical scenarios reflecting strategies concurrently implemented by plant operators. Finally, these practical and problem-solving scenarios were studied by means of the parameters involved in the PAC. These promising model-based optimization solutions are proposed to plant managers for full-scale testing and implementation. This study highlights the role of the data collection process, designed for model development and calibration purposes, in the performance investigation and problem resolution of treatment units, which can be achieved by routinely conducted data collection and sampling campaigns in conventional WWTPs. The results obtained in this thesis evidence the viability of model-based optimization and problem-solving scenario planning even in the presence of data scarcity and data quality problems, and confirm the possibility of energy savings and a simultaneous improvement in pollutant removal for the Castiglione Torinese WRRF.
In various steps of modeling and scenario planning, the importance of continuous interaction with the different stakeholders is highlighted. It is proved that this interaction can improve the acceptance of the proposed scenarios.
20

Combier, Robert. "Risk-informed scenario-based technology and manufacturing evaluation of aircraft systems". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49046.

Full text
Abstract:
In the last half century, the aerospace industry has seen a dramatic paradigm shift from a focus on performance-at-any-cost to product economics and value. The steady increase in product requirements, complexity and global competition has driven aircraft manufacturers to seek broad portfolios of advanced technologies. The development costs and cycle times of these technologies vary widely, and the resulting design environment is one where decisions must be made under substantial uncertainty. Modeling and simulation have recently become the standard practice for addressing these issues; detailed simulations and explorations of candidate future states of these systems help reduce a complex design problem into a comprehensible, manageable form where decision factors are prioritized. While there are still fundamental criticisms about using modeling and simulation, the emerging challenge becomes "How do you best configure uncertainty analyses and the information they produce to address real world problems?" One such analysis approach was developed in this thesis by structuring the input, models, and output to answer questions about the risk and economic impact of technology decisions in future aircraft programs. Unlike other methods, this method placed emphasis on the uncertainty in the cumulative cashflow space as the integrator of economic viability. From this perspective, it then focused on exploration of the design and technology space to tailor the business case and its associated risk in the cash flow dimension. The methodology is called CASSANDRA and is intended to be executed by a program manager of a manufacturer working on the development of future concepts. The program manager has the ability to control design elements as well as the new technology allocation on that aircraft. She is also responsible for the elicitation of the uncertainty in those dimensions within control as well as the external scenarios (that are out of program control). The methodology was applied on a future single-aisle 150 passenger aircraft design. The overall methodology is compared to existing approaches and is shown to identify more economically robust design decisions under a set of at-risk program scenarios. Additionally, a set of metrics in the uncertain cumulative cashflow space were developed to assist the methodology user in the identification, evaluation, and selection of design and technology. These metrics are compared to alternate approaches and are shown to better identify risk efficient design and technology selections. At the modeling level, an approach is given to estimate the production quantity based on an enhanced Overall Evaluation Criterion method that captures the competitive advantage of the aircraft design. This model was needed as the assumption of production quantity is highly influential to the business case risk. Finally, the research explored the capacity to generate risk mitigation strategies in two analysis configurations: when available data and simulation capacity are abundant, and when they are sparse or incomplete. The first configuration leverages structured filtration of Monte Carlo simulation results. The allocation of design and technology risk is then identified on the Pareto Frontier. The second configuration identifies the direction of robust risk mitigation based on the available data and limited simulation ability.
It leverages a linearized approximation of the cashflow metrics and identifies the direction of allocation using the Jacobian matrix and its inversion.
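The thesis develops the full CASSANDRA methodology; as a purely illustrative aid, the short Python sketch below shows only the generic idea of examining program risk in the cumulative cash flow dimension with Monte Carlo simulation. All distributions, cost and sales figures are invented assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_cumulative_cashflow(n_scenarios=10_000, horizon=20):
    """Illustrative Monte Carlo over a simplified aircraft-program cash flow.

    Development cost, per-unit margin and annual sales are drawn from
    assumed distributions; none of these figures come from the thesis.
    """
    dev_cost = rng.normal(8e9, 1.5e9, n_scenarios)            # non-recurring cost, spread over 5 years
    unit_margin = rng.normal(6e6, 2e6, n_scenarios)           # profit per delivered aircraft
    annual_sales = rng.poisson(60, (n_scenarios, horizon))    # deliveries per year

    cashflow = np.zeros((n_scenarios, horizon))
    cashflow[:, :5] -= dev_cost[:, None] / 5.0                # spend development cost in years 0-4
    cashflow[:, 5:] += unit_margin[:, None] * annual_sales[:, 5:]  # revenue once in service
    return cashflow.cumsum(axis=1)                            # cumulative cash flow paths

cum = simulate_cumulative_cashflow()
print("P(program never breaks even):", np.mean(cum[:, -1] < 0))
print("5th/95th percentile of final position (B$):",
      np.percentile(cum[:, -1], [5, 95]) / 1e9)
```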
21

Ilevbare, Imohiosen Michael. "An investigation into the treatment of uncertainty and risk in roadmapping : a framework and a practical process". Thesis, University of Cambridge, 2014. https://www.repository.cam.ac.uk/handle/1810/245140.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This thesis investigates roadmapping in the context of its application to strategic early-stage innovation planning. It is concerned with providing an understanding of how uncertainty and risk are manifested in roadmapping in this application, and with developing and testing a roadmapping process that supports appropriate treatment of uncertainty and risk. Roadmapping is an approach to early-stage innovation planning, which is strategic in nature. It is seeing increasing application in practice and receiving growing attention in management literature. There has, however, been a noticeable lack of attention to uncertainty and risk in roadmapping theory and practice (and generally in strategic planning and at innovation’s early stages). This is despite the awareness that uncertainty and risk are fundamental to strategy and innovation (i.e. application domains of roadmapping), and that roadmapping is meant to deliver, as part of its benefits, the identification, resolution and communication of uncertainties and risks. There is very limited theoretical or practical direction on what this entails. It is this gap that the research reported in this thesis addresses. The research is divided into two phases. The first phase explains the manifestations and mechanisms of uncertainty and risk in roadmapping. It also introduces ‘risk-aware roadmapping’, a concept of roadmapping that includes a conscious and explicit effort to address uncertainty and risk, and points out what the process would entail in terms of necessary steps and procedures. The research here is designed using mixed methods (a combination of experience surveys, archival analysis, and case studies). The second phase provides a practical risk-aware roadmapping process. This practical process is developed based on the results of the first phase, and is designed according to procedural action research. This thesis contributes to the fields of roadmapping, early-stage innovation and organisational sensemaking. It is found that factors related to the content, process and nature of roadmapping interact to influence the perception and treatment of uncertainty and risk. Characteristics of organisational sensemaking as theorised by Weick (1995) are explored in the light of the findings and challenged. Aspects of early-stage innovation, including the generation and selection of innovation ideas, are explored in the context of uncertainty and risk, together with important paradoxes and constraints at innovation’s early stages.
22

Björkman, Elin. "Osäkerhet vid översvämningskartering av vattendrag : En kunskapsöversikt och tillämpning på MIKE 11". Thesis, Uppsala universitet, Luft-, vatten och landskapslära, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-237133.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
På grund av osäkerheter i indata, parametrar och modellstruktur kan det finnas stora osäkerheter i översvämningskarteringar. Trots detta sker oftast ingen osäkerhetsanalys vid översvämningskarteringar i praktiken vilket gör att beslutsfattare och andra användare kan uppfatta resultaten som mer korrekta än vad de egentligen är. En orsak till att osäkerhetsanalys ännu inte blivit en vedertagen del i översvämningskarteringar kan vara att modellerare på konsultbyråer och myndigheter inte har tillräcklig kunskap om ämnet. Att tillgången på data kan vara begränsad underlättar inte heller vid osäkerhetsanalyser. Dessutom saknas exempel på hur osäkerheter kan analyseras i MIKE 11, vilket är en av de vanligaste modellerna som används vid översvämningskarteringar på konsultbyråer. Syftet med examensarbetet var tvåfaldigt. Det första var att ge en generell kunskapsöverblick över aktuell forskning om osäkerheter och osäkerhetsanalys vid översvämningskarteringar för att öka kunskapen hos konsulter och beslutsfattare. Det andra syftet var att med ett exempel visa hur osäkerheter kan uppskattas i en översvämningskartering skapad i MIKE 11 då det finns begränsad tillgång på data. En litteraturstudie visade att det ofta finns stora osäkerheter i flödesberäkningar och den geometriska beskrivningen och att det finns väldigt många sätt att analysera dessa på. Några av metoderna som används är Monte Carlo simuleringar, Oskarpa mängder, Scenarioanalys, Bayesiansk kalibrering och Generalized Likelihood Uncertainty Estimation, GLUE. En fallstudie gjordes där en hydraulisk modell av Kungsbackaån skapades med MIKE 11. Den metod som var praktiskt genomförbar att använda för att uppskatta osäkerheterna i detta arbete var scenarioanalys. Totalt utfördes 36 olika modellsimuleringar där kalibreringsflöde, Mannings tal och scenarioflöde varierades. Scenarioanalys ger inte någon exakt beräkning av osäkerheterna utan endast en subjektiv uppskattning. Resultatet av scenarioanalysen visade att då havsnivån i Kungsbackafjorden var 0,92 m skiljde de simulerada vattennivåerna som mest med 1,3 m för 100-årsflödet och med 0,41 m för beräknat högsta flöde, BHF. Även osäkerheterna i utbredningen för de två flödena undersöktes och visade sig vara som störst i flacka områden trots att osäkerheten i vattennivåerna var mindre där.
Due to uncertainty in data, parameters and model structure, there may be large uncertainties in flood inundation models. Despite this, uncertainty analysis is still rarely used by practitioners when creating flood maps. A reason why uncertainty analysis has not yet become customary in flood inundation modeling may be a lack of knowledge. Low availability of data can sometimes also make it more difficult to do an uncertainty analysis. Moreover, no examples exist of how uncertainties can be analyzed in MIKE 11, which is one of the most common models used in flood mapping at consultant agencies. The aim of this study was twofold. Firstly, to provide a general overview of current research on uncertainty and uncertainty analysis for flood inundation modeling, in order to increase knowledge among consultants and decision makers. Secondly, to give an example of how uncertainties can be estimated in a flood inundation model created in MIKE 11 when there is limited access to data. The research overview showed that there is often considerable uncertainty in the discharge calculations and geometrical description in hydraulic models, and that there are many different ways to analyze the uncertainties. Some methods that are often used are Monte Carlo simulations, fuzzy sets, scenario analysis, Bayesian calibration and Generalized Likelihood Uncertainty Estimation, GLUE. A case study was performed in which a hydraulic model was built for the River Kungsbackaån in MIKE 11. A scenario analysis was carried out to show the uncertainties in the hydraulic model. Overall, 36 different model runs were made in which the calibration discharge, Manning's number and design flow were varied. Scenario analysis cannot provide a precise estimate of the uncertainty; it can only give a subjective estimate. The results of the scenario analysis showed that when the sea level in Kungsbackafjorden was 0.92 m, the simulated water levels differed at most by 1.3 m for the 100-year discharge and by 0.41 m for the calculated maximum flow. The flood extent of the two discharges was also investigated. The greatest uncertainty in the extent was found in the flat areas, even though the uncertainty in water levels was smaller there.
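As a purely illustrative companion to the scenario analysis described above, the following Python sketch enumerates a 3 × 3 × 4 grid of calibration-discharge, Manning's number and design-flow levels (36 combinations, the same ensemble size as in the thesis) and reports the water-level spread per design flow. The water-level function is a crude stand-in: in the actual study each combination corresponds to a MIKE 11 run, and all numeric levels here are assumptions.

```python
from itertools import product

# Assumed scenario levels -- placeholders, not the values used in the thesis.
calibration_factors = [0.9, 1.0, 1.1]       # scaling of the calibration discharge
mannings_numbers    = [20, 30, 40]          # Manning's M (m^(1/3)/s)
design_flows        = [400, 550, 700, 850]  # design discharges (m^3/s)

def simulated_water_level(calib_factor, manning_m, design_q):
    """Stand-in for one MIKE 11 run: a crude Manning-style stage estimate.
    A real analysis would launch the hydraulic model for each combination."""
    return (design_q * calib_factor / manning_m) ** 0.6

# 3 x 3 x 4 = 36 scenario runs, mirroring the size of the ensemble in the thesis.
levels = {combo: simulated_water_level(*combo)
          for combo in product(calibration_factors, mannings_numbers, design_flows)}

for q in design_flows:
    subset = [lvl for (c, m, dq), lvl in levels.items() if dq == q]
    print(f"design flow {q} m3/s: water-level spread {max(subset) - min(subset):.2f} m")
```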
23

Dreborg, Karl Henrik. "Scenarios and structural uncertainty". Doctoral thesis, KTH, Infrastructure, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3697.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
24

Langanay, Jean. "Quantification des incertitudes d'une exploitation d'un gisement d'uranium par Récupération In Situ". Electronic Thesis or Diss., Université Paris sciences et lettres, 2021. http://www.theses.fr/2021UPSLM035.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
La Lixiviation In-Situ d’uranium, ou In-Situ Recovery (ISR), est basée sur la lixiviation directe des minéraux uranifères au coeur du gisement par une solution minière injectée. Les résultats des écoulements et des réactions chimiques dans le réservoir sont difficiles à prédire en raison des incertitudes géochimiques, pétrophysiques et géologiques. Les codes de simulation de transport réactif utilisés pour modéliser l’ISR sont particulièrement sensibles à la distribution spatiale des propriétés physiques et chimiques dans le dépôt. Ainsi, la modélisation géostatistique est utilisée pour représenter l’incertitude de la répartition spatiale des propriétés géologiques. On peut représenter cette incertitude par un grand nombre de réalisations du modèle géostatistique. La propagation directe des incertitudes géologiques est difficile à résoudre en contexte industriel en raison du temps CPU nécessaire pour effectuer une simulation de l’ISR. Les travaux réalisés dans cette thèse présentent différents moyens de propager l’incertitude géologique en incertitude sur la production d’uranium avec un coût en temps de calcul réduit. On utilise pour cela la méthode de réduction de scénarios, qui permet de propager parcimonieusement l’incertitude. Un sous-ensemble de simulations géostatistiques est sélectionné pour approximer la variabilité d’un ensemble plus large. La sélection est obtenue en utilisant un proxy de la simulation de transport réactif. La principale contribution de ce travail est la construction de différents proxys pour approximer la lixiviation de l’uranium. Ils permettent de reproduire la dissimilarité des réalisations en terme de production d’uranium. Ensuite, les simulations de l’ISR effectuées dans les réalisations géostatistiques sélectionnées donnent une approximation de la variabilité de production d’uranium de l’ensemble des réalisations. Cette approximation est enfin utilisée pour quantifier les incertitudes de la production d’uranium sur des cas réels. Finalement, la propagation de l’incertitude de production de l’échelle du bloc de production à l’échelle de la mine est développée. Par ailleurs, un travail exploratoire a été mené dans le but d’utiliser des modèles de substitution du solveur de la chimie pour accélérer les simulations de transport réactif
Uranium In Situ Recovery (ISR) is based on the direct leaching of the uranium ore in the deposit by a mining solution. Fluid flow and geochemical reactions in the reservoir are difficult to predict due to geological, petrophysical and geochemical uncertainties. The reactive transport simulation code used to model ISR is very sensitive to the spatial distribution of the physical and chemical properties of the deposit. Geostatistical models are used to represent the uncertainty of the spatial distribution of geological properties. The direct propagation of geological uncertainties by multiple ISR mining simulations is intractable in an industrial context. This work presents a way to propagate geological uncertainties into uranium production uncertainties at a reduced computational cost, thanks to a scenario reduction method. A subset of geostatistical simulations is built to approximate the variability of a larger set. The selection is obtained using a proxy of the reactive transport simulation. The main contribution of this work is the development of different proxies to approximate the uranium leaching. They allow the discrimination of geostatistical realizations in terms of potential uranium production. Then, the ISR simulations carried out with the selected geostatistical realizations give an approximation of the uranium production variability over the whole set of geostatistical simulations. This approximation is then used to quantify the uncertainties on the uranium production. The proposed approach is assessed on real case studies. Finally, the propagation of the uranium production uncertainty, assessed by the scenario reduction method, to mining operation planning is developed. Furthermore, exploratory work on the use of statistical meta-models as chemistry solvers is also presented
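To illustrate the general idea of scenario reduction through a proxy (not the specific proxies built in the thesis), the sketch below selects a few representative geostatistical realizations from synthetic proxy production curves and weights them by the number of realizations they represent; only the selected realizations would then be run through the expensive ISR simulator. All numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: 200 geostatistical realizations, each summarised by a cheap proxy
# of cumulative uranium recovery over 48 months (purely synthetic curves).
n_real, n_months = 200, 48
t = np.arange(1, n_months + 1)
rates = rng.lognormal(mean=-3.5, sigma=0.3, size=n_real)   # leaching rate per realization
proxy_curves = 1.0 - np.exp(-np.outer(rates, t))           # proxy recovery fraction

def reduce_scenarios(curves, k=5):
    """Greedy k-medoid-style selection: pick k realizations whose proxy curves
    best cover the ensemble, and weight each by the realizations it represents."""
    d = np.linalg.norm(curves[:, None, :] - curves[None, :, :], axis=2)  # pairwise distances
    medoids = [int(np.argmin(d.sum(axis=1)))]              # start from the most central curve
    while len(medoids) < k:
        cover = d[:, medoids].min(axis=1)                  # distance to nearest selected medoid
        medoids.append(int(np.argmax(cover)))              # add the worst-covered realization
    assign = np.argmin(d[:, medoids], axis=1)
    weights = np.bincount(assign, minlength=k) / len(curves)  # scenario probabilities
    return medoids, weights

medoids, weights = reduce_scenarios(proxy_curves, k=5)
print("Selected realizations:", medoids)
print("Associated weights:", np.round(weights, 3))
# The expensive ISR simulation would then be run only on these k realizations.
```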
25

Grabaskas, David. "Analysis of Transient Overpower Scenarios in Sodium Fast Reactors". The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1265726176.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
26

Pucci, Nicole Christine. "Intolerance of Uncertainty, Anxiety and Worry in Response to a Novel Induction of Uncertainty". Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1322568818.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
27

Krivtchik, Guillaume. "Analysis of uncertainty propagation in nuclear fuel cycle scenarios". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENI050/document.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Les études des scénarios électronucléaires modélisent le fonctionnement d’un parc nucléaire sur une période de temps donnée. Elles permettent la comparaison de différentes options d’évolution du parc nucléaire et de gestion des matières du cycle, depuis l’extraction du minerai jusqu’au stockage ultime des déchets, en se basant sur des critères tels que les puissances installées par filière, les inventaires et les flux, en cycle et aux déchets. Les incertitudes sur les données nucléaires et les hypothèses de scénarios (caractéristiques des combustibles, des réacteurs et des usines) se propagent le long des chaînes isotopiques lors des calculs d’évolution et au cours de l’historique du scénario, limitant la précision des résultats obtenus. L’objet du présent travail est de développer, implémenter et utiliser une méthodologie stochastique de propagation d’incertitudes dans les études de scénario. La méthode retenue repose sur le développement de métamodèles de calculs d’irradiation, permettant de diminuer le temps de calcul des études de scénarios et de prendre en compte des perturbations des paramètres du calcul, et la fabrication de modèles d’équivalence permettant de tenir compte des perturbations des sections efficaces lors du calcul de teneur du combustible neuf. La méthodologie de calcul de propagation d’incertitudes est ensuite appliquée à différents scénarios électronucléaires d’intérêt, considérant différentes options d’évolution du parc REP français avec le déploiement de RNR
Nuclear scenario studies model the nuclear fleet over a given period. They enable the comparison of different options for the reactor fleet evolution, and the management of the future fuel cycle materials, from mining to disposal, based on criteria such as installed capacity per reactor technology, mass inventories and flows, in the fuel cycle and in the waste. Uncertainties associated with nuclear data and scenario parameters (fuel, reactor and facility characteristics) propagate along the isotopic chains in depletion calculations, and throughout the scenario history, which reduces the precision of the results. The aim of this work is to develop, implement and use a stochastic uncertainty propagation methodology adapted to scenario studies. The chosen method is based on the development of depletion computation surrogate models, which reduce the computation time of scenario studies and whose parameters include perturbations of the depletion model, and on the fabrication of an equivalence model which takes into account cross-section perturbations for the computation of fresh fuel enrichment. The uncertainty propagation methodology is then applied to different scenarios of interest, considering different options of evolution for the French PWR fleet with SFR deployment
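The following sketch illustrates the general principle of surrogate-based uncertainty propagation in a deliberately simplified form: a cheap least-squares surrogate is fitted to a handful of "expensive" depletion-like evaluations and then sampled massively. The stand-in depletion function, the quadratic basis and all numbers are assumptions; the thesis builds far more elaborate surrogates of actual depletion calculations.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_depletion(enrichment, burnup):
    """Stand-in for a full depletion calculation (minutes of CPU in reality).
    Returns a notional end-of-cycle plutonium mass per tonne of heavy metal."""
    return 10.0 + 1.8 * burnup / 50.0 - 0.9 * (enrichment - 4.0) + 0.05 * burnup * enrichment / 50.0

# 1) Small design of experiments on the expensive model
enr = rng.uniform(3.0, 5.0, 40)
bu  = rng.uniform(30.0, 60.0, 40)
y   = np.array([expensive_depletion(e, b) for e, b in zip(enr, bu)])

# 2) Fit a cheap quadratic surrogate by least squares
X = np.column_stack([np.ones_like(enr), enr, bu, enr * bu, enr**2, bu**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def surrogate(e, b):
    return np.column_stack([np.ones_like(e), e, b, e * b, e**2, b**2]) @ coef

# 3) Propagate input uncertainty through the surrogate (cheap, so many samples)
e_s = rng.normal(4.2, 0.1, 100_000)
b_s = rng.normal(45.0, 3.0, 100_000)
out = surrogate(e_s, b_s)
print(f"Pu mass: mean {out.mean():.2f}, 95% interval "
      f"[{np.percentile(out, 2.5):.2f}, {np.percentile(out, 97.5):.2f}] kg/tHM")
```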
28

Akgul, Edvin, e Gabriel Wadsten. "Scenario Planning : Preparing for the future during uncertain times". Thesis, Uppsala universitet, Företagsekonomiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446453.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Background and past studies: Uncertain times bring fluctuating markets and demand rapid and agile means to cope with these fluctuations and changes. Scenario planning is considered a valuable tool for coping with uncertainties and preparing for future events. Purpose: The purpose of this study is to examine how businesses implement and use scenario planning as a tool to minimize uncertainty in a volatile environment. Research question: How is scenario planning utilized to minimize uncertainties in a volatile environment in large organizations within Sweden? Method: The study takes a qualitative approach in which semi-structured interviews were conducted with members of the top management of seven companies. Complementary documents shared by the respondents have also been used. Results and conclusions: The results indicate that the organizations mainly construct three scenarios, ranked by either impact or probability of occurrence. The main purpose of scenario planning is readiness for action in case of sudden deviations, with less focus on prediction of future deviations. The process of conducting scenario planning is interactive and multi-levelled within the organizations, with a mainly bottom-up approach.
29

Le, Guenedal Théo. "Financial Modeling of Climate-related Risks". Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAG009.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Ce projet de recherche est consacré à l'estimation des risques financiers liés au changement climatique. Au-delà des applications et des résultats quantitatifs, les chapitres de cette thèse ont pour principal objectif d'apporter des méthodologies générales utilisables par les praticiens. Le premier chapitre propose une méthode d'évaluation bottom-up du risque de transition adjointe aux modèles de risque classiques. Cette approche du risque opérationnel par les coûts engendrés par une taxe potentielle limite l'impact aux secteurs directement polluants, ce qui amène au deuxième chapitre, introduisant les tables d'entrée-sorties pour appréhender les effets indirects du risque de transition dans la chaîne d'approvisionnement. Ces approches offrent une structure statique permettant d'évaluer le risque dans un scénario donné, mais pas de déterminer le prix des obligations en considérant des scénarios hétérogènes et leur probabilité de réalisation. Pour ce faire, le troisième chapitre propose un modèle de pricing intégrant une approche bayésienne dans la mise à jours des probabilités de scénarios sur la base des sauts observés dans les mécanismes de tarification du carbone. Enfin, le dernier chapitre propose une méthodologie Monte-Carlo de simulation de dommages annuels causées par des cyclones tropicaux. La conversion des données climatiques brutes en base de données synthétique de sinistres est réalisée en couplant des relations statistiques et thermodynamiques. L'exposition des actifs physiques, les dynamiques des facteurs socio-économiques, les densités de populations locales et les vulnérabilités spécifiques aux différentes régions du monde sont empruntés à différents segments de la littérature. Ils sont combinés afin d'obtenir un modèle complet du triptyque classique nécessaire à l'étude des risques physiques : 'intensité' x 'exposition' x 'vulnérabilité' généralisable et homogène sur l'ensemble des pays. Le signal résultant peut ensuite être inclus simplement dans des modèles de risque de crédit assimilant les dommages annualisés à de la dette additionnelle
This research project aims at estimating financial risks related to climate change. Beyond the applications and quantitative findings, the main objective of the chapters of this thesis is to provide a structural and methodological framework that is generalizable, in order to facilitate its integration by practitioners. The first chapter proposes a bottom-up measure of transition risk, which can be incorporated into classical risk models (Merton or other credit risk models). This cost-based approach is limited to the directly polluting sectors, which leads to the second chapter, allowing for the diffusion of transition risk through the value chain. These approaches offer a static structure that allows for a fixed-scenario stress test but not for pricing bonds by considering heterogeneous scenarios and their probabilities of realization. To this end, chapter three proposes a pricing model that integrates a Bayesian approach for updating scenario probabilities based on observed jumps in carbon pricing mechanisms. Finally, the last chapter proposes a Monte Carlo methodology for simulating annual damages caused by tropical cyclones. The conversion of raw climatic data into a synthetic database of losses is achieved by coupling statistical and thermodynamic relationships. The exposure of physical assets, the dynamics of socio-economic factors, local population densities and specific vulnerabilities in different regions of the world are borrowed from different segments of the literature and combined to obtain a complete model of the classical triptych necessary for the study of physical hazards (hazard intensity x exposure x vulnerability) that is generalizable and homogeneous across countries. The resulting signal can then be simply included in credit risk models by equating annualized damages with additional debt
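A minimal sketch of the Bayesian scenario-probability update described for the third chapter: prior scenario weights are reweighted by the likelihood of an observed carbon-price jump under each scenario. The scenario names, priors and likelihoods are invented for illustration and are not taken from the thesis.

```python
import numpy as np

# Assumed transition scenarios and prior weights -- illustrative only.
scenarios = ["Net-Zero 2050", "Delayed transition", "Current policies"]
prior = np.array([0.25, 0.40, 0.35])

# Assumed probability, under each scenario, of observing a carbon-price jump
# larger than +20 $/tCO2 within a year (likelihood of the observed event).
likelihood_jump = np.array([0.60, 0.30, 0.05])

def update(prior, likelihood):
    """Bayes rule: posterior is proportional to prior times the likelihood
    of the observed carbon-price move under each scenario."""
    post = prior * likelihood
    return post / post.sum()

posterior = update(prior, likelihood_jump)
for s, p0, p1 in zip(scenarios, prior, posterior):
    print(f"{s:20s} prior {p0:.2f} -> posterior {p1:.2f}")
# Bond pricing would then reweight scenario-conditional cash flows with the posterior.
```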
30

Patricksson, Øyvind Selnes. "Semi-Submersible Platform Design to Meet Uncertainty in Future Operating Scenarios". Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for marin teknikk, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18563.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This master's thesis in marine systems design is about how to assess future uncertainty in a design setting, or, as the topic puts it, semi-submersible platform design to meet uncertainty in future operating scenarios. Central terms that are discussed are robustness, flexibility, adaptability, and real options, the so-called ilities. Methods for evaluating designs in relation to ilities and future uncertainty are also presented. The background for this thesis is the ever-increasing importance of a good assessment of investment projects in the offshore business in general, and more specifically in relation to designs subjected to different forms of ilities. Now, more than ever, it is crucial to make the right decisions when designing an offshore construction, to ensure that an investment is viable. This thesis has used the concept of an intervention semi, provided by Aker Solutions, to assess problems related to these aspects. First, design drivers for the concept were identified. These were found to be cost, weight and operability, where (total) cost and (total) weight are strictly correlated. Operability, meaning the ability to keep operations running in different conditions and situations, is mainly dependent on motion characteristics and layout, where vertical motions were found to be the most important. The properties of the intervention semi were presented as a functional breakdown, divided into five main categories: well intervention, drilling, power generation, station keeping and transit, and other functions. The last category, other functions, incorporated accommodation, ballast and bilge water systems, and the heave compensation system. Most relevant for the intervention concept are the intervention and drilling functions. Among well intervention procedures, the concept should be able to perform wireline operations and coiled tubing operations, and for drilling, through-tubing rotary drilling will be the main procedure. After presenting the properties of the intervention semi concept, aspects of changing requirements due to future uncertainty were discussed. The drivers of changing requirements identified were operation method and technology, environment and legislation, area of operation, and economics. Following this, a discussion of how to accommodate these changing requirements was presented, with focus on aspects regarding flexibility, robustness, adaptability, and real options. After these terms and aspects had been discussed, an evaluation of the concept in relation to the ilities presented was carried out. Most relevant were the possibility of a development of the coiled tubing equipment, the aspect of managed pressure drilling as a function that might be needed in the future, and the use of rental equipment. Ilities were also identified and discussed in a concept similar to the intervention semi presented in this thesis. From this, it was found that functions related to the environment (regarding emissions) would be a potential area for ilities, due to the continually increasing focus on such matters; having these functions designed with ilities would make it easier to improve them at a later time. The aspect of extra deck space was also discussed, which gives the design better flexibility, and in general it was found that flexibility in the procedures for intervention and drilling operations was important for this concept. Some functions and aspects were also found not to be relevant for any sort of ilities. Among these were functions related to heavy drilling, increased water depth and the aspect of ice class. To find the value of a design with functional ilities, different methods and aspects were presented. First, economic aspects were discussed, and methods using net present value were found to be relevant for the valuation of ilities. Another approach discussed was scenario development and assessment, where one method in particular was found relevant. This method proposes to find an optimal design for the scenario assumed most probable, and then to test this design against the other possible scenarios (using the models as simulation models) to get an impression of the resilience of the designs. Two decision support models were proposed, Model 1 and Model 2. The first model, Model 1, can be described as a “hybrid” decision model, part static, part dynamic, where an optimal design is found for a set of contracts, taking real options into consideration. The contracts should reflect the future, and from a set of base designs, with varying possibilities for functions and options, a design with an optimal combination of capabilities and options is the result of solving the problem. Model 2 is essentially a static variant of Model 1, where the possibility of real options is no longer available. The model will still find a design with an optimal combination of capabilities for a set of contracts, but all capabilities must be part of the construction initially. Further, the two models are implemented for use in a commercial solver, and parameters and constraints are discussed. These implemented models were then used for the illustrative cases. The case studies illustrate how the two models presented can be utilised, and in addition illustrate how the scenario assessment discussed earlier can be combined with the decision support models (a highly simplified numerical illustration of this kind of scenario-based selection is sketched after this abstract). There are mainly three cases presented: two where Model 1 is used, and a third where Model 2 is used. In Case 1 there are three base designs, with different characteristics, and only one attribute (supplementary function) that should be assessed. Three scenarios are presented as a basis for the contract generation. First, an optimal design solution was found for each scenario (Case 1a, Case 1b and Case 1c). Secondly, a scenario assessment was done, where the solution from the scenario assumed most probable was tested against the other two scenarios, using the model as a simulation model rather than an optimisation model. Scenario 1 was assumed to be the most probable one, represented by Case 1a, and the optimal solution for this case was Design 1. This design was then tested against the two other scenarios, and it came out with a rather good result, illustrating the resilience of the chosen design. Case 2 illustrated a more complex problem, where an optimal solution should be found among 16 different base designs and four possible attributes. The attributes could either be part of the design initially or made available as options that can be realised at a later time. The instance tested is assumed to be somewhat more complex than a commercial problem, but illustrates well the capability of Model 1. Case 3 is an example of how Model 2 can be used. In Case 3a, only one base design is available, and with a set of four possible attributes, an optimal design should be found. Due to the “static” character of Model 2, the attributes can only be part of the initial design. Case 3b is much the same, except that here there are two base designs to choose among, in addition to the four attributes. A computational study was carried out using Model 1, and only this model, as it is assumed to be the more complex of the two. The test instance assumed most relevant, with 100 contracts, four base designs, and eight attributes, can be solved once in less than two seconds on average, and a full scenario analysis, consisting of about 1000 runs, takes about half an hour. As a concluding remark for this thesis, I would say that the main scope, which in my opinion was to discuss how different design solutions can be evaluated in relation to future uncertainty, was answered well by the two decision models proposed, together with how these can be used in a scenario setting.
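In the spirit of the static Model 2 only (the actual models, designs and contract data of the thesis are not reproduced here), the sketch below brute-forces the choice of a base design and a set of attributes that maximises expected value over probability-weighted scenarios. All names and figures are hypothetical.

```python
from itertools import combinations

# Hypothetical data -- not the designs, attributes or contracts from the thesis.
base_designs = {"D1": 320.0, "D2": 355.0}            # build cost (M$)
attributes   = {"MPD": 25.0, "CT_upgrade": 12.0,     # attribute name -> added cost (M$)
                "extra_deck": 8.0, "ice_class": 30.0}

# Scenario -> (probability, revenue premium per attribute present, base revenue), all in M$.
scenarios = {
    "S1": (0.5, {"MPD": 40.0, "CT_upgrade": 20.0, "extra_deck": 5.0, "ice_class": 0.0}, 450.0),
    "S2": (0.3, {"MPD": 10.0, "CT_upgrade": 25.0, "extra_deck": 10.0, "ice_class": 5.0}, 430.0),
    "S3": (0.2, {"MPD": 0.0,  "CT_upgrade": 5.0,  "extra_deck": 15.0, "ice_class": 45.0}, 400.0),
}

def expected_value(design, subset):
    """Probability-weighted revenue across scenarios minus the up-front cost."""
    cost = base_designs[design] + sum(attributes[a] for a in subset)
    ev = sum(prob * (base_rev + sum(premium[a] for a in subset))
             for prob, premium, base_rev in scenarios.values())
    return ev - cost

best = max(((d, frozenset(s))
            for d in base_designs
            for r in range(len(attributes) + 1)
            for s in combinations(attributes, r)),
           key=lambda ds: expected_value(*ds))
print("Best static design:", best[0], "with attributes", sorted(best[1]),
      "expected value %.1f M$" % expected_value(*best))
```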
31

Girod, Bastien. "Why six baseline scenarios? a research on the reasons for the growing baseline uncertainty of the IPCC scenarios /". Zürich : ETH, Eidgenössische Technische Hochschule Zürich, HES, Institute for Human-Environment Systmes, 2006. http://e-collection.ethbib.ethz.ch/show?type=dipl&nr=277.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
32

Kuhlmann, Salas Claudio Andrés. "Ellipsoidal forest and wildland fire scar scenarios for strategic forest management planning under uncertainty". Tesis, Universidad de Chile, 2014. http://repositorio.uchile.cl/handle/2250/131350.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Magister en Gestión de Operaciones
Ingeniero Civil Industrial
The importance given to environmental conservation has been growing, which has directly affected the objectives and the way organizations operate. For this reason, the interaction between operations and the development of the ecosystem must be considered in order to balance sustainability and conservation with production objectives, with forest disturbances being a point of great interest. Fires, pests, volcanic eruptions and floods are some of the ecosystem disturbances that affect forest productivity. Therefore, reducing the risk and the consequences of these episodes is key for the industry. The objective is to create a methodology for generating ellipsoidal fire scenarios to be used in decision-making for fire and forest resource management. To this end, elliptical fires generated by a simulator are used; following the Monte Carlo method, each simulated fire is assigned to one of a set of previously defined representative fire patterns using the Pompeiu-Hausdorff distance. The probability of occurrence of each representative pattern is obtained by counting the number of simulations assigned to it. To arrive at an algorithm that uses computational resources efficiently, different methods for computing the Pompeiu-Hausdorff distance were implemented, in addition to using multiple processors in parallel whenever possible. Five methods were implemented, defined using the geometric properties of ellipses in order to solve the implicit optimization problem. The method that yields the most exact results for the distance uses conic optimization, while the fastest computes the distance between the vertices of a discretized ellipse. Using these two methods, an efficient and accurate multi-stage strategy for computing the Pompeiu-Hausdorff distance between two ellipses is constructed. Stability of the results obtained for 200 patterns is reached after 100,000 samples; however, only very small variations were observed even after 20,000 simulations. In conclusion, the confidence intervals obtained for the computed probabilities depend on the available computational resources and on the time constraints that may be imposed. The developed methodology gives forest planners a tool for analysing the fire probability of given zones, which can be used in an optimization model under uncertainty that allows them to manage the available resources in the best possible way.
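The fastest of the distance methods described above works on discretized ellipse vertices; the sketch below implements that idea directly: discretize two ellipses, compute the symmetric Pompeiu-Hausdorff distance between the point sets, and (in a full workflow) assign each simulated scar to the nearest representative pattern. The ellipse parameters are hypothetical.

```python
import numpy as np

def ellipse_points(cx, cy, a, b, theta, n=360):
    """Discretize an ellipse (centre, semi-axes a >= b, rotation theta) into n vertices."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x = a * np.cos(t)
    y = b * np.sin(t)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return (rot @ np.vstack([x, y])).T + np.array([cx, cy])

def hausdorff(P, Q):
    """Symmetric Pompeiu-Hausdorff distance between two discretized point sets."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two hypothetical fire scars (ignition point, spread lengths in metres, wind direction).
scar      = ellipse_points(0.0, 0.0, 900.0, 400.0, np.deg2rad(30.0))
pattern_k = ellipse_points(150.0, 50.0, 800.0, 450.0, np.deg2rad(45.0))
print(f"Hausdorff distance to pattern k: {hausdorff(scar, pattern_k):.1f} m")
# Each simulated scar would be assigned to the representative pattern minimising this distance.
```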
33

Zhou, Weifeng. "Resilience analysis of nuclear fuel cycle scenarios". Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALI055.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Les systèmes du cycle du combustible nucléaire, composés de réacteurs, de divers combustibles et de différentes installations de cycle, sont complexes et en constante évolution. Grâce à leur capacité à faire des projections de stratégies industrielles et à évaluer les impacts associés sur le système du cycle du combustible nucléaire, les scénarios électronucléaires sont considérés comme un outil puissant d'aide à la décision. Les études de scénarios aident les décideurs à identifier les forces et les faiblesses des différentes stratégies d'évolution d’un parc nucléaire et puis à proposer des trajectoires d'évolution possibles pour l'industrie nucléaire en fonction des contraintes de la physique, de l'économie, de l'industrie, etc.Cependant, les études de scénarios sont généralement soumises à différents types d'incertitudes, en particulier la soi-disant « incertitude profonde » (« deep uncertainty » en anglais). Ce concept fait référence à des « inconnues inconnues », auxquelles les résultats de l'étude de scénarios ne conviennent pas. En effet, sous l'impact des incertitudes profondes, c'est-à-dire les disruptions, les trajectoires proposées par les études de scénarios peuvent devenir invalides : elles ne satisfont plus aux contraintes du scénario.Afin de rendre les trajectoires valides à nouveau après une disruption due à l'incertitude, la première possibilité est d'étudier la stratégie de résistance. La stratégie de résistance consiste à trouver des trajectoires qui restent valides sous l'impact de l'incertitude sans réajustements exogènes des trajectoires. Cependant, les capacités de résistance des scénarios sont limitées : la résistance n'est adaptée qu'aux incertitudes à faible impact, alors que l'impact d'une incertitude profonde est généralement fort.Comme solution complémentaire à la stratégie de résistance, nous proposons d'utiliser les stratégies de résilience. Les stratégies de résilience consistent à utiliser des mesures préconçues, appelées « leviers », pour réajuster la trajectoire lorsque la stratégie de résistance est insuffisante. Nous cherchons à utiliser l'effet des réajustements exogènes des trajectoires, qui sont introduits à travers les leviers, pour contrebalancer l'impact de la disruption et garder la trajectoire valide. Pour évaluer la résilience des scénarios, nous avons développé un cadre d'analyse de résilience, basé sur l'algorithme SUR (Stepwise Uncertainty Reduction).Nous avons appliqué la stratégie de résilience développée à deux problèmes de scénario dans lesquels un parc nucléaire français simplifié avec une réduction de puissance incertaine est considéré. Pour définir la validité des trajectoires, nous avons imposé cinq contraintes sur le taux d'utilisation des usines de retraitement, la séparation du plutonium, la teneur de plutonium dans le combustible MOX et le stockage du combustible usé. Dans chaque problème, nous avons donné une trajectoire préalable supposée à la suite d'une étude de scénarios avec une hypothèse pour maintenir la puissance installée constante à l’avenir. Nous avons supposé qu'à la suite de la disruption du contexte de l'étude, la puissance électrique totale est disruptée et réduite à l’avenir. Les résultats ont montré que les trajectoires préalables dans les deux problèmes sont résilientes vis-à-vis des disruptions supposées : il est possible de maintenir les trajectoires préalables valides en réajustant le retraitement et les charges de combustible MOX dans les réacteurs. 
Ces résultats démontrent que les évolutions du parc nucléaire dans les trajectoires préalables sont flexibles face à la disruption de la puissance électrique totale
Nuclear fuel cycle systems, composed of reactors, various fuels, and different cycle facilities, are complex and in constant evolution. Thanks to their ability to make projections of industrial strategies and to assess the associated impacts on nuclear fuel cycle systems, nuclear fuel cycle scenarios are considered a powerful tool for decision-making analyses. Scenario studies assist decision-makers in identifying the strengths and weaknesses of different strategies for a nuclear fleet evolution and then proposing possible evolution trajectories for the nuclear industry according to constraints from physics, economics, industry, etc. However, scenario studies are usually subject to different kinds of uncertainties, especially the so-called “deep uncertainty.” This concept refers to “unknown unknowns,” which scenario study results are unsuited to address. Indeed, under the impact of deep uncertainty, i.e., disruptions, the trajectories proposed by the scenario studies can become invalid: they no longer satisfy the scenario constraints. In order to make the trajectories valid again after a disruption due to uncertainty, the first possibility is to study the resistance strategy. The resistance strategy consists of finding scenario trajectories that remain valid under the impact of uncertainty without exogenous readjustments of the trajectories. However, the resistance capabilities of scenarios are limited: resistance is only adapted to uncertainties with small impact, while the impact of deep uncertainty is usually strong. As a complementary solution to the resistance strategy, we propose using resilience strategies. The resilience strategies consist of using predesigned measures, called “levers,” to readjust the scenario trajectory when the resistance strategy is insufficient. We aim to use the effect of the exogenous readjustments of trajectories, which are introduced through the levers, to counterbalance the impact of disruption and keep the trajectory valid. To evaluate the resilience of scenarios, we developed a resilience analysis framework, based on the state-of-the-art SUR (Stepwise Uncertainty Reduction) algorithm. We applied the developed resilience strategy to two scenario problems in which a simplified French nuclear fleet with uncertain power reduction is considered. To define the validity of a trajectory, we imposed five constraints concerning the reprocessing plant utilization ratio, plutonium separation, plutonium content in MOX fuel, and spent fuel storage. In each problem, we gave a prior trajectory supposed to result from a scenario study with a hypothesis of keeping the installed power constant in the future. We assumed that, following the disruption of the study context, the total electricity power is disrupted and reduced in the future. The results showed that the prior trajectories in both problems are resilient to the assumed disruptions: it is possible to keep the prior trajectories valid by readjusting the reprocessing and the MOX fuel loadings in reactors. Such results demonstrate that the evolutions of the nuclear fleet in the prior trajectories are flexible in the face of the disruption of the total electricity power.
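As a loose illustration of the resilience question (not the SUR-based framework or the fuel-cycle simulator used in the thesis), the toy sketch below checks whether a disrupted trajectory satisfies simple plutonium-balance and storage constraints and searches a reprocessing "lever" value that restores validity. Every coefficient is an assumption.

```python
import numpy as np

def trajectory_valid(reprocessing_tpy, mox_share, years=30, storage_cap=15000.0):
    """Toy validity check: a crude yearly mass balance on spent-fuel storage and
    separated plutonium.  The constraints in the thesis are evaluated with a full
    fuel-cycle simulator; every coefficient below is an assumption."""
    power_gw = np.full(years, 60.0)
    power_gw[10:] = 45.0                              # disruption: installed power cut after year 10
    pu_stock, storage = 0.0, 8000.0                   # tonnes of Pu, tHM of stored spent UOX fuel
    for p in power_gw:
        uox_unloaded = 20.0 * p * (1.0 - mox_share)   # spent UOX discharged (tHM/yr, assumed)
        mox_loaded   = 20.0 * p * mox_share           # fresh MOX needed (tHM/yr, assumed)
        reproc = min(reprocessing_tpy, storage + uox_unloaded)
        pu_stock += 0.01 * reproc - 0.09 * mox_loaded # ~1% Pu in spent UOX, ~9% Pu in MOX (assumed)
        storage  += uox_unloaded - reproc
        if pu_stock < 0.0 or storage > storage_cap:   # Pu balance and storage-cap constraints
            return False
    return True

# Resilience question: can readjusting the reprocessing "lever" keep the disrupted
# trajectory valid?
for lever in range(600, 1401, 100):
    if trajectory_valid(reprocessing_tpy=float(lever), mox_share=0.11):
        print(f"Valid readjusted trajectory found with reprocessing = {lever} tHM/yr")
        break
else:
    print("No valid readjustment found in the explored lever range")
```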
34

Panichprecha, Sorot. "Abstracting and correlating heterogeneous events to detect complex scenarios". Thesis, Queensland University of Technology, 2009. https://eprints.qut.edu.au/26737/1/Sorot_Panichprecha_Thesis.pdf.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The research presented in this thesis addresses inherent problems in signature-based intrusion detection systems (IDSs) operating in heterogeneous environments. The research proposes a solution to address the difficulties associated with multi-step attack scenario specification and detection for such environments. The research has focused on two distinct problems: the representation of events derived from heterogeneous sources and multi-step attack specification and detection. The first part of the research investigates the application of an event abstraction model to event logs collected from a heterogeneous environment. The event abstraction model comprises a hierarchy of events derived from different log sources such as system audit data, application logs, captured network traffic, and intrusion detection system alerts. Unlike existing event abstraction models where low-level information may be discarded during the abstraction process, the event abstraction model presented in this work preserves all low-level information as well as providing high-level information in the form of abstract events. The event abstraction model presented in this work was designed independently of any particular IDS and thus may be used by any IDS, intrusion forensic tools, or monitoring tools. The second part of the research investigates the use of unification for multi-step attack scenario specification and detection. Multi-step attack scenarios are hard to specify and detect as they often involve the correlation of events from multiple sources which may be affected by time uncertainty. The unification algorithm provides a simple and straightforward scenario matching mechanism by using variable instantiation, where variables represent events as defined in the event abstraction model. The third part of the research looks into solutions to address time uncertainty. Clock synchronisation is crucial for detecting multi-step attack scenarios which involve logs from multiple hosts. Issues involving time uncertainty have been largely neglected by intrusion detection research. The system presented in this research introduces two techniques for addressing time uncertainty issues: clock skew compensation and clock drift modelling using linear regression. An off-line IDS prototype for detecting multi-step attacks has been implemented. The prototype comprises two modules: an implementation of the abstract event system architecture (AESA) and a scenario detection module. The scenario detection module implements our signature language, developed based on the Python programming language syntax, and the unification-based scenario detection engine. The prototype has been evaluated using a publicly available dataset of real attack traffic and event logs and a synthetic dataset. The distinct feature of the public dataset is that it contains multi-step attacks which involve multiple hosts with clock skew and clock drift. These features allow us to demonstrate the application and the advantages of the contributions of this research. All instances of multi-step attacks in the dataset have been correctly identified even though there is significant clock skew and drift in the dataset. Future work identified by this research would be to develop a refined unification algorithm suitable for processing streams of events to enable on-line detection. In terms of time uncertainty, identified future work would be to develop mechanisms which allow automatic clock skew and clock drift identification and correction. The immediate application of the research presented in this thesis is the framework of an off-line IDS which processes events from heterogeneous sources using abstraction and which can detect multi-step attack scenarios which may involve time uncertainty.
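The clock-drift technique named in the abstract, linear-regression modelling of a drifting clock, can be sketched as follows; the synthetic skew and drift values are assumptions, and the thesis' exact formulation may differ.

```python
import numpy as np

# Paired timestamps (seconds): when the reference host logged t_ref, the remote
# host's clock read t_local.  Synthetic data with a 2.5 s skew and 50 ppm drift.
t_ref = np.arange(0.0, 86400.0, 3600.0)
rng = np.random.default_rng(7)
t_local = 2.5 + (1.0 + 50e-6) * t_ref + rng.normal(0.0, 0.05, t_ref.size)

# Model t_local = skew + (1 + drift) * t_ref and fit by least squares.
slope, intercept = np.polyfit(t_ref, t_local, 1)
skew, drift = intercept, slope - 1.0
print(f"estimated skew {skew:.3f} s, drift {drift * 1e6:.1f} ppm")

def to_reference_time(t_local_event):
    """Map an event timestamp from the drifting local clock back onto the
    reference timeline before correlating it with events from other hosts."""
    return (t_local_event - skew) / (1.0 + drift)

print("corrected:", to_reference_time(t_local[5]), "vs true", t_ref[5])
```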
35

Panichprecha, Sorot. "Abstracting and correlating heterogeneous events to detect complex scenarios". Queensland University of Technology, 2009. http://eprints.qut.edu.au/26737/.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
36

Cornelio, Cristina. "Preference reasoning and aggregation over combinatorial domains in uncertain and multi-agent scenarios". Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3427129.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Preferences, intended as user opinions over items, are conspicuously present in our lives and have recently become widely studied in Artificial Intelligence. In many contexts of our life, we do not consider items only as entire entities, but as a set of features/attributes that characterize them and that interact with each other. Therefore, we are particularly interested in this kind of scenario, representing conditional preferences over combinatorial and multi-attribute domains. The ability to represent preferences in a compact way is essential, especially in the context of multi-attribute preference modelling and reasoning, which otherwise causes a combinatorial explosion of information and a high computational cost. For this purpose, a number of compact representation languages have been developed in the literature. We initially focus on a recently developed framework for representing conditional preferences over a graphical structure: conditional preference networks (CP-nets). We analyse the advantages and the drawbacks of CP-nets, focusing on uncertain and multi-agent scenarios. Real-life scenarios are often dynamic and uncertain: a user can change her mind over time and her preferences could be affected by errors or noise. We propose the new PCP-net structure (probabilistic conditional preference networks) as a generalisation of the CP-net framework, able to respond to change through updates and to support probabilistic information. PCP-nets can also be used to represent a multi-agent scenario where each agent is represented by a CP-net and probabilities are used to reconcile conflicts between users. We analyse another compact representation language, similar to CP-nets, namely soft constraints. Soft constraints are less restrictive than CP-nets, but the computational complexity remains almost the same for the main tasks. For this reason, we rethink the multi-agent context by using a profile of agents expressing their preferences via soft constraints, instead of via a collection of CP-nets aggregated into a PCP-net. The CP-net literature also presents many other generalisations, since CP-nets are restrictive and limited in expressiveness. For example, CP-nets have been extended with constraints, with local inconsistency and incompleteness (GCP-nets), with utility functions (UCP-nets), etc. Therefore, several different frameworks have been developed to describe conditional preferences. Each of these formalisms has ad hoc syntax, semantics and specialised algorithms. In this thesis, we specify a new framework with the purpose of unifying all these models. The strength of our formulation is the direct expression of the model in standard first-order logic, as constrained Datalog theories. This formulation is rich enough to express CP-nets and all their extensions. We conclude this work by studying an application of preferences in a real-life scenario. We analyse how to improve kidney exchange algorithms, increasing the number of transplantations and the expected life duration, and provide some encouraging results. We then also provide some preliminary ideas about how to incorporate preferences in the matching procedure currently used.
Le preferenze, intese come opinioni di utenti su un insieme di oggetti, sono ampiamente presenti nelle nostre vite e recentemente sono diventate molto studiate in Intelligenza Artificiale. In molti contesti della nostra vita, non consideriamo gli oggetti come entità atomiche, ma consideriamo un insieme di caratteristiche/attributi che le caratterizzano e che interagiscono tra di loro. Siamo dunque particolarmente interessati in questi tipi di scenario: preferenze condizionali su domini combinatori e multi-attributo. L’abilità di rappresentare le preferenze in maniera compatta è essenziale, in particolare nel contesto di modellazione e ragionamento con preferenze multi-attributo, in presente un esplosione combinatoria di informazione che causa un alto costo computazionale. A questo scopo sono state sviluppate in letteratura una serie di linguaggi di rappresentazione compatta. In questa tesi ci focalizziamo inizialmente su un modello grafico di rappresentazione delle preferenze condizionali: le conditional preference networks (CP-nets). Analizziamo quindi i vantaggi e gli svantaggi delle CP-net concentrandoci su scenari incerti e multi-agente. Gli scenari reali sono spesso dinamici e incerti: un utente può cambiare le sue idee nel tempo e le sue preferenze potrebbero essere affette da errori o rumore. In questa tesi proponiamo una nuova strutture chiamata PCP-net (probabilistic conditional preference networks), generalizzazione delle CP-net, capace di rispondere ai cambiamenti attraverso aggiornamenti e di supportare informazione probabilistica. Le PCP-net possono essere usate anche per rappresentare i contesti multi-agente dove ogni agente è rappresentato da una CP-net e le probabilità vengono usate per riconciliare i conflitti tra gli utenti. In questa tesi analizziamo anche un altro linguaggio di rappresentazione compatta, simile alle CP-net: i vincoli soft. I vincoli soft sono meno restrittivi rispetto alle CP-net, ma le complessità computazionali rimangono le stesse per i task principali. Per questo motivo, ripensiamo lo scenario multi-agente usando un profilo di agenti che esprimono le loro preferenze attraverso vincoli soft, invece che tramite una collezione di CP-net poi aggregate in una PCP-net. La letteratura riguardante le CP-net presenta anche molte altre generalizzazioni, poichè le CP-net sono per certi versi restrittive e limitate nell’espressività. Ad esempio, le CP-net sono state estese con vincoli, con inconsistenza locale e incompletezza (GCP-net), con funzioni di utilità (UCP-net), etc. Sono dunque stati sviluppati molti formalismi differenti per descrivere le preferenze condizionali e ognuno di essi ha una sintassi e una semantica ah hoc e algoritmi specifici. In questa tesi specifichiamo un nuovo framework con lo scopo di unificare tutti questi modelli. La forza della nostra formulazione è la diretta espressione del modello in logica standard al primo ordine, come una teoria Datalog vincolata. Questa formulazione è ricca abbastanza da esprimere le CP-nets e tutte le sue estensioni. Concludiamo la tesi, studiando un applicazione delle preferenze in un scenario reale. Analizziamo come migliorare gli algoritmi di scambi di reni aumentando il numero di trapianti e di durata di vita aspettata, fornendo alcuni risultati incoraggianti. Quindi forniamo anche alcune idee preliminari riguardo a come incorporare le preferenze nelle procedure di matching attualmente utilizzate.
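A minimal sketch of the CP-net building block underlying this line of work (plain CP-nets only; PCP-nets, soft constraints and the Datalog encoding are beyond a few lines): conditional preference tables decide which of two outcomes differing on a single variable is preferred. The example variables and values are the usual textbook-style illustration, not taken from the thesis.

```python
# A minimal CP-net sketch.  Variables: main course and wine; the wine
# preference is conditional on the main course.
cpt = {
    "main": {(): ["fish", "meat"]},                   # fish > meat unconditionally
    "wine": {("fish",): ["white", "red"],             # if fish: white > red
             ("meat",): ["red", "white"]},            # if meat: red > white
}
parents = {"main": (), "wine": ("main",)}

def prefers(outcome_a, outcome_b):
    """Return True if outcome_a is preferred to outcome_b when they differ on a
    single variable (one 'flip'), according to the conditional preference tables."""
    diff = [v for v in outcome_a if outcome_a[v] != outcome_b[v]]
    if len(diff) != 1:
        raise ValueError("only single-flip comparisons are decided directly by a CP-net")
    var = diff[0]
    context = tuple(outcome_a[p] for p in parents[var])   # parent values, identical in both outcomes
    order = cpt[var][context]
    return order.index(outcome_a[var]) < order.index(outcome_b[var])

print(prefers({"main": "fish", "wine": "white"}, {"main": "fish", "wine": "red"}))   # True
print(prefers({"main": "meat", "wine": "white"}, {"main": "meat", "wine": "red"}))   # False
```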
37

Salas, Bravo Pablo Andres. "The effects of uncertainty in the technological transitions of the power sector : endogenous emissions scenarios up to 2050". Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/265885.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
By August 2016, 180 countries had signed the Paris Agreement and committed to holding the increase in the global average temperature to well below 2°C above pre-industrial levels. Abiding by the agreement will require a substantial reduction of emissions over the next few decades and near-zero emissions of CO2 and other long-lived greenhouse gases by the end of this century. In this context, the decarbonisation of the global power sector is of strategic importance, because low-carbon electricity has system-wide benefits that go beyond the electricity sector, enabling significant reductions of CO2 emissions in the industry, transport and buildings sectors. Making the necessary changes depends partly on improving the analysis and estimates of the economics of climate change, and for that there is an urgent need for a new generation of models that give a more accurate picture of potential decarbonisation pathways. The technological transition towards a low-carbon power sector depends on many uncertain factors, such as policy efficiency, renewable energy investment and availability of energy resources. Knowledge of how these uncertain factors interact, and of their impacts on the technological evolution of the energy sector, is the key to creating successful policies for driving the economy towards a cleaner, low-carbon society. In this context, the work presented here provides decarbonisation scenarios of the global power sector under uncertain drivers of technological change, and in doing so enables a better understanding of the technology diffusion process in the power sector. The scenarios are created using the FTT:Power model, a representation of global power systems based on market competition, induced technological change and natural resource use and depletion. The scenarios analysed in this dissertation focus on four drivers of technological change: energy policy, energy resource availability, learning and investment. The influence of uncertainty on each of these drivers is analysed in detail, through endogenous emission scenarios of the global power sector between 2016 and 2050. Emission pathways with uncertainty ranges, as well as policy recommendations, are presented as a result of the modelling exercise.
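As a loosely related illustration (not FTT:Power, which models technology-level market competition), the sketch below propagates uncertainty in two assumed diffusion drivers, a growth rate and a policy start year, into a fan of power-sector emission pathways to 2050. All parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def emissions_pathway(growth_rate, policy_year, years=np.arange(2016, 2051)):
    """Toy pathway: power-sector emissions fall as a logistic low-carbon share
    displaces fossil generation once policy support starts.  Purely illustrative."""
    t0 = policy_year + 15.0                               # assumed midpoint of the transition
    share_low_carbon = 1.0 / (1.0 + np.exp(-growth_rate * (years - t0)))
    demand = 25_000.0 * 1.015 ** (years - 2016)           # TWh/yr, assumed demand growth
    return 0.65e-3 * demand * (1.0 - share_low_carbon)    # GtCO2/yr at an assumed fossil intensity

paths = np.array([emissions_pathway(rng.uniform(0.1, 0.35), rng.uniform(2016, 2030))
                  for _ in range(2000)])
lo, mid, hi = np.percentile(paths[:, -1], [5, 50, 95])
print(f"2050 power-sector emissions: median {mid:.1f} GtCO2, 90% range [{lo:.1f}, {hi:.1f}]")
```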
38

Eisenreich, Katrin [Verfasser], e Volker [Akademischer Betreuer] Markl. "Database Support for Uncertain Data Analysis and Correlation Handling in Scenario Planning / Katrin Eisenreich. Betreuer: Volker Markl". Berlin : Universitätsbibliothek der Technischen Universität Berlin, 2013. http://d-nb.info/1033640387/34.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
39

Lind, Mårten. "Opportunities and uncertainties in the early stages of development of CO2 capture and storage". Doctoral thesis, KTH, Energiprocesser, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-10985.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The topic of this thesis is carbon dioxide (CO2) capture and storage (CCS), which is a technology that is currently being promoted by industries, scientists and governments, among others, in order to mitigate climate change despite a continued use of fossil fuels. Because of the complex nature of CCS and the risks it entails, it is controversial. The aim of this thesis is to analyse how the technology may be further developed in a responsible manner. In the first part of the thesis different methods for capturing CO2 from industrial processes as well as power plants are analysed. The aim is to identify early opportunities for CO2 capture, which is considered important because of the urgency of the climate change problem. Three potential early opportunities are studied: i) capturing CO2 from calcining processes such as cement industries by using the oxyfuel process, ii) capturing CO2 from pressurised flue gas, and iii) capturing CO2 from hybrid combined cycles. Each opportunity has properties that may make it competitive in comparison to the more common alternatives if CCS is realised. However, there are also drawbacks. For example, while capturing CO2 from pressurised flue gas enables the use of more compact capture plant designs as well as less expensive and less toxic absorbents, the concept is neither suitable for retrofitting nor has it been promoted by the large and influential corporations. The second part of the thesis has a broader scope than the first and is multidisciplinary in its nature with inspiration from the research field of Science and Technology Studies (STS). The approach is to critically analyse stakeholder perceptions regarding CCS, with a specific focus on the CCS experts. The thesis sheds new light on the complexity and scientific uncertainty of CCS as well as on the optimism among many of its proponents. Because of the uncertain development when it comes to climate change, fossil fuel use and greenhouse gas emissions, the conclusion is that CCS has to be further developed and demonstrated. A responsible strategy for a future development of CCS would benefit from: i) a search for win-win strategies, ii) increasing use of appropriate analytical tools such as life-cycle analysis, iii) a consideration of fossil fuel scarcity and increasing price volatility, iv) funding of unbiased research and v) increasing simultaneous investments in long-term solutions such as renewable energy alternatives and efficiency improvements.
40

Lim, Dongwook. "A systematic approach to design for lifelong aircraft evolution". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28280.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Thesis (M. S.)--Aerospace Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Mavris, Dimitri; Committee Member: Bishop, Carlee; Committee Member: Costello, Mark; Committee Member: Nam, Taewoo; Committee Member: Schrage, Daniel.
41

Zurell, Damaris. "Integrating dynamic and statistical modelling approaches in order to improve predictions for scenarios of environmental change". Phd thesis, Universität Potsdam, 2011. http://opus.kobv.de/ubp/volltexte/2011/5684/.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Species respond to environmental change by dynamically adjusting their geographical ranges. Robust predictions of these changes are prerequisites to inform dynamic and sustainable conservation strategies. Correlative species distribution models (SDMs) relate species' occurrence records to prevailing environmental factors to describe the environmental niche. They have been widely applied in a global change context as they have comparatively low data requirements and allow for rapid assessments of potential future species' distributions. However, due to their static nature, transient responses to environmental change are essentially ignored in SDMs. Furthermore, neither dispersal nor demographic processes nor biotic interactions are explicitly incorporated. Therefore, it has often been suggested to link statistical and mechanistic modelling approaches in order to make more realistic predictions of species' distributions for scenarios of environmental change. In this thesis, I present two different ways of such linkage. (i) Mechanistic modelling can act as a virtual playground for testing statistical models and allows extensive exploration of specific questions. I promote this 'virtual ecologist' approach as a powerful evaluation framework for testing sampling protocols, analyses and modelling tools. Also, I employ such an approach to systematically assess the effects of transient dynamics and ecological properties and processes on the prediction accuracy of SDMs for climate change projections. That way, relevant mechanisms are identified that shape the species' response to altered environmental conditions and which should hence be considered when trying to project species' distributions through time. (ii) I supplement SDM projections of potential future habitat for black grouse in Switzerland with an individual-based population model. By explicitly considering complex interactions between habitat availability and demographic processes, this allows for a more direct assessment of expected population response to environmental change and associated extinction risks. However, predictions were highly variable across simulations, emphasising the need for principled evaluation tools like sensitivity analysis to assess uncertainty and robustness in dynamic range predictions. Furthermore, I identify data coverage of the environmental niche as a likely cause of contrasting range predictions between SDM algorithms. SDMs may fail to make reliable predictions for truncated and edge niches, meaning that portions of the niche are not represented in the data or niche edges coincide with data limits. Overall, my thesis contributes to an improved understanding of uncertainty factors in predictions of range dynamics and presents ways to deal with them. Finally, I provide preliminary guidelines for predictive modelling of dynamic species' response to environmental change, identify key challenges for future research and discuss emerging developments.
Das Vorkommen von Arten wird zunehmend bedroht durch Klima- und Landnutzungswandel. Robuste Vorhersagen der damit verbundenen Arealveränderungen sind ausschlaggebend für die Erarbeitung dynamischer und nachhaltiger Naturschutzstrategien. Habitateignungsmodelle erstellen statistische Zusammenhänge zwischen dem Vorkommen einer Art und relevanten Umweltvariablen und erlauben zügige Einschätzungen potentieller Arealveränderungen. Dabei werden jedoch transiente Dynamiken weitgehend ignoriert sowie demographische Prozesse und biotische Interaktionen. Daher wurden Vorschläge laut, diese statistischen Modelle mit mechanistischeren Ansätzen zu koppeln. In der vorliegenden Arbeit zeige ich zwei verschiedene Möglichkeiten solcher Kopplung auf. (i) Ich beschreibe den sogenannten ‚Virtuellen Ökologen’-Ansatz als mächtiges Validierungswerkzeug, in dem mechanistische Modelle virtuelle Testflächen bieten zur Erforschung verschiedener Probenahmedesigns oder statistischer Methoden sowie spezifischer Fragestellungen. Auch verwende ich diesen Ansatz, um systematisch zu untersuchen wie sich transiente Dynamiken sowie Arteigenschaften und ökologische Prozesse auf die Vorhersagegüte von Habitateignungsmodellen auswirken. So kann ich entscheidende Prozesse identifizieren welche in zukünftigen Modellen Berücksichtigung finden sollten. (ii) Darauf aufbauend koppele ich Vorhersagen von Habitateignungsmodellen mit einem individuen-basierten Populationsmodell, um die Entwicklung des Schweizer Birkhuhnbestandes unter Klimawandel vorherzusagen. Durch die explizite Berücksichtigung der Wechselwirkungen zwischen Habitat und demographischer Prozesse lassen sich direktere Aussagen über Populationsentwicklung und damit verbundener Extinktionsrisiken treffen. Allerdings führen verschiedene Simulationen auch zu hoher Variabilität zwischen Vorhersagen, was die Bedeutung von Sensitivitätsanalysen unterstreicht, um Unsicherheiten und Robustheit von Vorhersagen einzuschätzen. Außerdem identifiziere ich Restriktionen in der Datenabdeckung des Umweltraumes als möglichen Grund für kontrastierende Vorhersagen verschiedener Habitateignungsmodelle. Wenn die Nische einer Art nicht vollständig durch Daten beschrieben ist, kann dies zu unrealistischen Vorhersagen der Art-Habitat-Beziehung führen. Insgesamt trägt meine Arbeit erheblich bei zu einem besseren Verständnis der Auswirkung verschiedenster Unsicherheitsfaktoren auf Vorhersagen von Arealveränderungen und zeigt Wege auf, mit diesen umzugehen. Abschließend erstelle ich einen vorläufigen Leitfaden für Vorhersagemodelle und identifiziere Kernpunkte für weitere Forschung auf diesem Gebiet.
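As a rough illustration of the 'virtual ecologist' idea described above (and not of the thesis's actual experiments), the following sketch simulates a species with a known environmental response, fits a correlative SDM with scikit-learn to sampled survey data, and checks the prediction error under a warmed climate. All data and the bell-shaped niche are synthetic assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # 'Virtual ecologist' sketch: the true niche is known, so the SDM's
    # performance under climate change can be evaluated exactly.
    rng = np.random.default_rng(1)

    def true_occurrence_prob(temp):
        # Known (virtual) niche: bell-shaped response centred at 10 degC.
        return np.exp(-0.5 * ((temp - 10.0) / 3.0) ** 2)

    # Current-climate survey: 500 sites with temperatures between 0 and 20 degC.
    temp_now = rng.uniform(0, 20, 500)
    occ_now = rng.random(500) < true_occurrence_prob(temp_now)

    # Fit a correlative SDM (logistic regression on temperature and temperature^2).
    X_now = np.column_stack([temp_now, temp_now ** 2])
    sdm = LogisticRegression(max_iter=1000).fit(X_now, occ_now)

    # Project to a +3 degC climate and compare against the known truth.
    temp_future = temp_now + 3.0
    X_future = np.column_stack([temp_future, temp_future ** 2])
    pred = sdm.predict_proba(X_future)[:, 1]
    truth = true_occurrence_prob(temp_future)
    print("mean absolute error under climate change:", np.abs(pred - truth).mean())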
42

Künzner, Florian [Verfasser], Hans-Joachim [Akademischer Betreuer] Bungartz, Hans-Joachim [Gutachter] Bungartz e Ernst [Gutachter] Rank. "Efficient non-intrusive uncertainty quantification for large-scale simulation scenarios / Florian Künzner ; Gutachter: Hans-Joachim Bungartz, Ernst Rank ; Betreuer: Hans-Joachim Bungartz". München : Universitätsbibliothek der TU München, 2021. http://d-nb.info/1227580576/34.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
43

Sierra, Gonzalez David. "Towards Human-Like Prediction and Decision-Making for Automated Vehicles in Highway Scenarios". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM012/document.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Au cours des dernières décennies, les constructeurs automobiles ont constamment introduit des innovations technologiques visant à rendre les véhicules plus sûrs. Le niveau de sophistication de ces systèmes avancés d’aide à la conduite s’est accru parallèlement aux progrès de la technologie des capteurs et de la puissance informatique intégrée. Plus récemment, une grande partie de la recherche effectuée par l'industrie et les institutions s'est concentrée sur l'obtention d'une conduite entièrement automatisée. Les avantages sociétaux potentiels de cette technologie sont nombreux, notamment des routes plus sûres, des flux de trafic améliorés et une mobilité accrue pour les personnes âgées et les handicapés. Toutefois, avant que les véhicules autonomes puissent être commercialisés, ils doivent pouvoir partager la route en toute sécurité avec d’autres véhicules conduits par des conducteurs humains. En d'autres termes, ils doivent pouvoir déduire l'état et les intentions du trafic environnant à partir des données brutes fournies par divers capteurs embarqués, et les utiliser afin de pouvoir prendre les bonnes décisions de conduite sécurisée. Malgré la complexité apparente de cette tâche, les conducteurs humains ont la capacité de prédire correctement l’évolution du trafic environnant dans la plupart des situations. Cette capacité de prédiction est rendu plus simple grâce aux règles imposées par le code de la route qui limitent le nombre d’hypothèses; elle repose aussi sur l’expérience du conducteur en matière d’évaluation et de réduction du risque. L'absence de cette capacité à comprendre naturellement une scène de trafic constitue peut-être, le principal défi qui freine le déploiement à grande échelle de véhicules véritablement autonomes sur les routes.Dans cette thèse, nous abordons les problèmes de modélisation du comportement du conducteur, d'inférence sur le comportement des autres véhicules, et de la prise de décision pour la navigation sûre. En premier lieu, nous modélisons automatiquement le comportement d'un conducteur générique à partir de données de conduite démontrées, évitant ainsi le réglage manuel traditionnel des paramètres du modèle. Ce modèle codant les préférences d’un conducteur par rapport au réseau routier (par exemple, voie ou vitesse préférées) et aux autres usagers de la route (par exemple, distance préférée au véhicule de devant). Deuxièmement, nous décrivons une méthode qui utilise le modèle appris pour prédire la séquence des actions à long terme de tout conducteur dans une scène de trafic. Cette méthode de prédiction suppose que tous les acteurs du trafic se comportent de manière aversive au risque, et donc ne peut pas prévoir les manœuvres dangereux ou les accidents. Pour pouvoir traiter de tels cas, nous proposons un modèle probabiliste plus sophistiqué, qui estime l'état et les intentions du trafic environnant en combinant la prédiction basée sur le modèle avec les preuves dynamiques fournies par les capteurs. Le modèle proposé imite ainsi en quelque sorte le processus de raisonnement des humains. Nous humains, savons ce qu’un véhicule est susceptible de faire compte tenu de la situation (ceci est donné par le modèle), mais nous surveillerons sa dynamique pour en détecter les écarts par rapport au comportement attendu. En pratique, la combinaison de ces deux sources d’informations se traduit par une robustesse accrue des estimations de l’intention par rapport aux approches reposant uniquement sur des preuves dynamiques. 
En dernière partie, les deux modèles présentés (comportemental et prédictif) sont intégrés dans le cadre d'une approche décisionnelle probabiliste. Les méthodes proposées se sont vues évaluées avec des données réelles collectées avec un véhicule instrumenté, attestant de leur efficacité dans le cadre de la conduite autonome sur autoroute. Bien que centré sur les autoroutes, ce travail pourrait être facilement adapté pour gérer des scénarios de trafic alternatifs.
During the past few decades automakers have consistently introduced technological innovations aimed to make road vehicles safer. The level of sophistication of these advanced driver assistance systems has increased parallel to developments in sensor technology and embedded computing power. More recently, a lot of the research made both by industry and institutions has concentrated on achieving fully automated driving. The potential societal benefits of this technology are numerous, including safer roads, improved traffic flows, increased mobility for the elderly and the disabled, and optimized human productivity. However, before autonomous vehicles can be commercialized they should be able to safely share the road with human drivers. In other words, they should be capable of inferring the state and intentions of surrounding traffic from the raw data provided by a variety of onboard sensors, and to use this information to make safe navigation decisions. Moreover, in order to truly navigate safely they should also consider potential obstacles not observed by the sensors (such as occluded vehicles or pedestrians). Despite the apparent complexity of the task, humans are extremely good at predicting the development of traffic situations. After all, the actions of any traffic participant are constrained by the road network, by the traffic rules, and by a risk-aversive common sense. The lack of this ability to naturally understand a traffic scene constitutes perhaps the major challenge holding back the large-scale deployment of truly autonomous vehicles in the roads.In this thesis, we address the full pipeline from driver behavior modeling and inference to decision-making for navigation. In the first place, we model the behavior of a generic driver automatically from demonstrated driving data, avoiding thus the traditional hand-tuning of the model parameters. This model encodes the preferences of a driver with respect to the road network (e.g. preferred lane or speed) and also with respect to other road users (e.g. preferred distance to the leading vehicle). Secondly, we describe a method that exploits the learned model to predict the future sequence of actions of any driver in a traffic scene up to the distant future. This model-based prediction method assumes that all traffic participants behave in a risk-aware manner and can therefore fail to predict dangerous maneuvers or accidents. To be able to handle such cases, we propose a more sophisticated probabilistic model that estimates the state and intentions of surrounding traffic by combining the model-based prediction with the dynamic evidence provided by the sensors. In a way, the proposed model mimics the reasoning process of human drivers: we know what a given vehicle is likely to do given the situation (this is given by the model), but we closely monitor its dynamics to detect deviations from the expected behavior. In practice, combining both sources of information results in an increased robustness of the intention estimates in comparison with approaches relying only on dynamic evidence. Finally, the learned driver behavioral model and the prediction model are integrated within a probabilistic decision-making framework. The proposed methods are validated with real-world data collected with an instrumented vehicle. Although focused on highway environments, this work could be easily adapted to handle alternative traffic scenarios
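The fusion of a model-based prediction with dynamic evidence can be illustrated with a toy discrete Bayesian update. This is only a sketch in the spirit of the abstract, not the thesis's actual model: the manoeuvre set, the prior values and the Gaussian likelihood parameters are all invented.

    import numpy as np

    # Toy fusion of a model-based prior with dynamic evidence: the behavioural
    # model supplies a prior over manoeuvres for the current traffic situation,
    # observed lateral velocity supplies a likelihood, and Bayes' rule combines
    # them into an intention estimate.
    maneuvers = ["keep_lane", "change_left", "change_right"]

    # Prior from the (learned) driver model -- e.g. a slower leading vehicle
    # makes an overtake to the left somewhat likely (illustrative values).
    prior = np.array([0.6, 0.35, 0.05])

    def likelihood(lat_vel):
        """Gaussian likelihood of the measured lateral velocity under each manoeuvre."""
        means = np.array([0.0, +0.8, -0.8])   # m/s, illustrative
        sigma = 0.3
        return np.exp(-0.5 * ((lat_vel - means) / sigma) ** 2)

    def posterior(lat_vel):
        unnorm = prior * likelihood(lat_vel)
        return unnorm / unnorm.sum()

    for v in [0.0, 0.4, 0.9]:
        print(v, dict(zip(maneuvers, np.round(posterior(v), 3))))

As the observed lateral velocity grows, the posterior shifts from the model prior (keep lane) towards a left lane change, which is the qualitative behaviour the abstract describes: the model says what is likely, the dynamics reveal deviations from it.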
44

Гушло, Юлія Юріївна. "Науково-методичні засади стратегічного управління фінансами банку в умовах невизначеності". Dissertazione di PhD, Сумський державний університет, 2021. https://essuir.sumdu.edu.ua/handle/123456789/83803.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Дисертаційна робота присвячена розв’язанню наукової проблеми, що полягає в удосконаленні теоретичних засад та науково-методичних підходів до стратегічного управління фінансів банку в умовах невизначеності. У роботі здійснено грунтовний аналіз понятійно-категоріального апарату стратегічного управління фінансами банку, у результаті чого запропоновано розглядати фінанси банку як сукупність зовнішніх та внутрішніх економічних відносин з приводу формування, розподілу та використання фінансових ресурсів банку, що, як очікується, приведуть до збільшення економічних вигід у майбутньому. На цій основі визначено, що об’єктами управління фінансами банку є відносини, що виникають у процедурі організації бізнес-процесів та операцій, формують та розподіляють фінансові ресурси, регулюють фінансові ризики та ліквідність, визначають фінансові результати, прибутковість та ефективність діяльності банку, тобто управління фінансами охоплює всю сукупність форм та методів організації фінансових відносин в банку. Обґрунтовано, що стратегічне управління фінансами банків здійснюється в умовах невизначеності, пов’язаній як з об’єктивними, так і суб’єктивними аспектами. Встановлено, що невизначеність, пов’язана з об’єктивними аспектами, зумовлена впливом факторів мікрорівня (невизначеність в економічних відносинах у сфері фінансів банку), макрорівня (невизначеність політичного, макроекономічного, соціального та технологічного характерів) та мегарівнів (невизначеність, спричинена геополітичними кризами, змінами монетарного та банківського регулювання, економічними процесами у світовій економічній та фінансовій системах, турбулентністю на світовому фінансовому ринку тощо). Доведено, що невизначеності внутрішнього походження мають суб’єктивний характер, є наслідком ефективності внутрішньобанківських аспектів управління і включають: невизначеність цільового блоку (невизначеність цілей, невизначеність критеріїв), невизначеність прийняття рішення (структурна невизначеність, невизначеність вибору, невизначеність наслідків прийнятих рішень). З’ясовано, що основу стратегічного управління фінансами банків в умовах невизначеності формує управління величиною (обсягами), збалансованістю та стабільністю фінансових ресурсів, а ефективність цього управління має розглядатись як цільова оптимізована величина параметрів прибутковості при обмеженні рівня фінансових ризиків та забезпеченні необхідного рівня ліквідності. При цьому між результативними показниками управління є тісний взаємозв’язок, оскільки ліквідність визначається якістю та стабільністю активів та пасивів, які, своєю чергою, генерують ризики, що, за умови значного їх підвищення, можуть негативно вплинути на ліквідність. У роботі розроблено науково-методичний підхід до визначення впливу індикаторів невизначеності на фундаментальні показники функціонування банків, що передбачає комплексне використання статистичних тестів, властивостей закону нормального розподілу та множинного регресійного аналізу. На основі його застосування емпірично встановлено, що такі індикатори рівня невизначеності, як загальний індекс невизначеності та індекс геополітичної невизначеності в Україні здійснюють статистично значущий вплив (з різними рівнями значущості) на фундаментальні показники діяльності банків (достатність капіталу, ефективність, зміни активів та пасивів). Розроблений підхід значно розширює аналітичний потенціал щодо оцінки функціонування банків в умовах невизначеності операційного середовища. 
Стратегічне управління фінансами банку запропоновано трактувати як динамічно-адаптивну систему, взаємозв’язки та взаємодія субсистем якої забезпечують цілеспрямований багаторівневий вплив на процес формування та подальшого регулювання параметрів об’єктів управління для досягнення поставлених цілей у межах заданих величин ризиків з урахуванням впливу недетермінованого середовища. Визначено, що за компонентним складом система стратегічного управління фінансами банку являє ієрархічну сукупність цільової, функціональної, організаційно-структурної субсистем, а також способів їх взаємодії, інтеграції та дезінтеграції на основі сукупності принципів, що забезпечують досягнення стратегій та цілей управління у межах заданих величин ризиків, з урахуванням впливу недетермінованого середовища. У роботі доведено, що з огляду на особливості впливу невизначеності, стратегічне управління фінансами банку має реалізуватись на основі динамічно-адаптивної моделі, що передбачає класичний склад компонентів відповідно до системного підходу та послідовності реалізації відповідно до процесного підходу, однак кожна його функція, метод та інструмент набувають специфічного наповнення для досягнення поставлених цілей та реалізуються через функціонально-адаптивне планування, функціонально-адаптивне діагностування та функціонально-адаптивний аналіз й моніторинг. Для зниження невизначеності цільового блоку в аспектах невизначеності цілей управління та критеріїв їх досягнення, в роботі сформований комплекс заходів щодо вдосконалення методичного забезпечення стратегічного планування фінансів банку. Удосконалено та запропоновано до практичного використання науково-методичний підхід до стратегічного сценарного планування фінансів банку, який на відміну від наявних враховує не лише наявність основних компонентів, а й включає процедури сценарного аналізу та розробки сценаріїв. Його запровадження в діяльність банків України дозволяє зменшити рівень негативного впливу невизначеності та підвищить їх адаптованість до недетермінованих умов операційного середовища. У роботі розроблено науково-методичний підхід до аналізу якості стратегічних фінансових планів банку. Аналіз якості стратегічних фінансових планів банку запропоновано визначати як систему комплексного вивчення та аналізу якості їх розробки та реалізації, результатом чого є формалізована та неформалізована оцінка того, в якій мірі банк протягом планового періоду буде здатним досягти визначених цільових таргетів. Відповідно до цього, може бути визначено необхідність внесення змін або коригування цільових параметрів стратегічних фінансових планів банку. Система комплексного вивчення та аналізу якості розробки та реалізації стратегічних фінансових планів банку має включати: визначення мети та завдань проведення оцінки якості на кожному етапі; сукупність кількісних та якісних показників оцінки впливу факторів на якість стратегічних фінансових планів банку; інструментарій та методи їх оцінювання, що дають змогу їх дослідити; технологію інтерпретації отриманих за результатами оцінювання даних та механізми вжиття необхідних управлінських рішень, що дозволять підвищити якість стратегічного фінансового планування в банку. У роботі досліджено та удосконалено методичний інструментарій моделювання та прогнозування прибутковості як цільової стратегії стратегічного управління фінансами банку для забезпечення його життєздатності. 
Він полягає у комплексному та багатоетапному аналізі з використанням статистичних тестів та формування на цій основі мультирегресійних рівнянь з необхідними показниками статистичної значущості, що базуються на врахуванні внутрішніх факторів впливу на фінансові результати банків та є аналітичною основою при трансформації стратегічного управління фінансами банку в умовах невизначеності. Запропонований науково-методичний підхід дає змогу зробити прогнозні висновки щодо зміни рівня прибутковості, виявити слабкі місця, в тому числі шляхом порівняння з "peer-group" визначених банком напрямів розміщення коштів та джерел фінансування. Це сформує аналітичне підгрунтя для розробки комплексу рекомендацій щодо підвищення ефективності стратегічного управління фінансами банку в умовах невизначеності, зокрема на основі коригування стратегії, бізнес-моделі та бізнес-плану. Результати статистичних тестів та мультирегресійного аналізу за сформованою вибіркою банків України дозволили зробити висновок про негативний вплив невизначеності на рівень їх життєздатності через постійне зниження показників їх прибутковості внаслідок вищих темпів приросту витрат порівняно з доходами. Для банків з консервативнішими бізнес-моделями зниження життєздатності буде незначним. З огляду на ідентифікацію негативних тенденцій, що відбивають ранні, початкові ознаки появи загрози зниження рівня життєздатності банків, суб’єкти стратегічного управління фінансами банку мають запровадити відповідні превентивні управлінські впливи, що мають адаптувати діяльність банку до функціонування в умовах зростання рівня невизначеності. У роботі розроблено методичний підхід до формування оптимально-збалансованої структури активів та пасивів банку на основі їх оптимізації з використанням нелінійного зменшеного градієнта GRG. Він передбачає послідовну реалізацію наступних етапів: формування інформаційного забезпечення; розрахунок періоду формування оптимально-збалансованої структури активів та пасивів (горизонту моделювання); математичний опис цільової функції та умов-обмежень; визначення верхніх та нижніх меж змін залишків за активами та пасивами; перевірку інформаційного забезпечення та математичної моделі; розрахунок оптимально-збалансованої структури активів та пасивів на основі застосування нелінійного алгоритму зменшеного градієнта GRG; аналіз результатів оптимізації на основі аналізу чутливості та забезпечення підтримання оптимально-збалансованої структури активів та пасивів банку. Результатом розрахунку є оптимально-збалансована структура активів та пасивів у межах індивідуально визначеного горизонту планування. Запропонований підхід інтегрує можливості математичного апарату опису економічних процесів з принципами стратегічного управління фінансами банку та вимогами НБУ щодо забезпечення стійкості та життєздатності бізнес-моделі. Його принциповою особливістю є здатність до модифікації в частині набору функціональних обмежень залежно від індивідуальних особливостей діяльності конкретного банку, а також здатність адаптації до реальних можливостей банку в частині меж змін сумарних залишків груп активів та пасивів. За результатами апробації на основі даних АТ "Райффайзен Банк Аваль" емпірично доведено, що банку для підвищення ефективності, більшого рівня фінансової стійкості, включаючи капіталізацію та ліквідність, необхідно переглянути структуру активів та зобов’язань з переорієнтацією на роботу з фізичними особами. 
У довгостроковій перспективі це виявляється кращою стратегією як у нормальних умовах функціонування, так і з точки зору протистояння умовам невизначеності та нестабільності. За умови змін у монетарній політиці НБУ щодо розміру облікової ставки банк отримає більше переваг при її збільшенні через подальше зростання рівня фінансових результатів та показника ROA. Основні положення дисертації приведено до рівня методичних розробок і практичних рекомендацій, що можна застосовувати банківськими установами в процесі прийняття управлінських рішень щодо формування та використання фінансових ресурсів банку в умовах невизначеності.
The dissertation is devoted to solving a scientific problem, which is to improve the theoretical foundations and scientific and methodological approaches to the strategic management of bank finances in conditions of uncertainty. The dissertation provides a thorough analysis of the bank's conceptual and categorical apparatus of strategic financial management. As a result, it is proposed to consider the bank's finances as a set of external and internal economic relations regarding the formation, distribution, and use of the bank's financial resources, which is expected to increase economic benefits in the future. On this basis, it is determined that the objects of financial management of the bank are the relationships that arise in the organization of business processes and operations, form and allocate financial resources, regulate financial risks and liquidity, determine financial results, profitability, and efficiency of the bank. Bank finance management covers the entire set of forms and methods of organizing economic relations in the bank. It is substantiated that banks' strategic management is carried out in conditions of uncertainty related to both objective and subjective aspects. It is established that the uncertainty is related to the objective aspects. They are due to the influence of micro-level factors (uncertainty in economic relations in the field of bank finance), macro-level (uncertainty of political, macroeconomic, social, and technological nature) and mega-levels (uncertainty caused by geopolitical crises, changes in monetary and banking regulation, economic processes in the global economic and financial systems, turbulence in the global financial market, etc.). It is proved that uncertainties of internal origin are subjective. They are a consequence of the effectiveness of internal banking aspects of management. They include uncertainty of the goal setting (uncertainty of goals, the uncertainty of criteria), the uncertainty of decision-making (structural uncertainty, uncertainty of choice, uncertainty of consequences). It was found that the basis of strategic management of bank finances in conditions of uncertainty is the management of size (volume), balance and stability of financial resources. The effectiveness of strategic management of banks should be considered a target optimized value of profitability parameters while limiting financial risks and providing the required level of liquidity. At the same time, there is a close relationship between key performance indicators, as liquidity is determined by the quality and stability of assets and liabilities, which, in turn, generate risks that, if significantly increased, can adversely affect liquidity. The dissertation develops a scientific and methodological approach to determining the impact of uncertainty indicators on the fundamental performance of banks. It involves the integrated use of statistical tests, properties of the law of normal distribution, and multiple regression analysis. Based on its application, it is empirically established that such indicators of uncertainty as to the World Uncertainty Index in Ukraine and the Geopolitical Risk Index in Ukraine have a statistically significant impact (with different levels of significance) on the fundamental performance of banks (capital adequacy, efficiency, assets and liabilities). The developed approach significantly expands the analytical potential for assessing the functioning of banks in conditions of uncertainty of the operating environment. 
The bank's strategic financial management is proposed to be interpreted as a dynamic-adaptive system, interconnections and interaction of subsystems provide purposeful multilevel influence on the formation and further regulation of parameters of management objects to achieve goals within the given values of risks account the indeterminate environment. It is determined that the component composition of the strategic financial management system of the bank is a hierarchical set of target, functional, organizational and structural subsystems, as well as ways of their interaction, integration and disintegration based on a set of principles to achieve management goals within specified risk values, taking into account exposure to the nondeterministic environment. The dissertation proves that given the peculiarities of the impact of uncertainty, strategic financial management of the bank should be implemented based on the dynamic-adaptive model, which provides a classic composition of components according to the system approach and implementation sequence according to the process approach. Still, each of its functions, methods, and tools acquire specific content to achieve the goals and are implemented through functional-adaptive planning, functional-adaptive diagnosis, and functional-adaptive analysis and monitoring. To reduce the uncertainty of management objectives and criteria for their achievement, the work formed a set of measures to improve the methodological support of strategic planning of bank finances. The scientific and methodological approach to the strategic scenario planning of the bank's finances has been improved and proposed for practical use. Its introduction in the activity of Ukrainian banks allows reducing the level of the negative impact of uncertainty and increasing their adaptability to non-deterministic conditions of the operating environment. The dissertation develops a scientific and methodological approach to analyzing the quality of the bank's strategic financial plans. The analysis of the quality of the bank's strategic financial plans is proposed to be defined as a system of comprehensive study and research of the quality of their development and implementation, resulting in a formalized and informal assessment of the extent to which the bank will be able to achieve specific targets. Accordingly, it may be necessary to amend or adjust the target parameters of the bank's strategic financial plans. The system of comprehensive study and analysis of the quality of development and implementation of strategic financial plans of the bank should include: defining the purpose and objectives of quality assessment at each stage; a set of quantitative and qualitative indicators for assessing the impact of factors on the quality of the bank's strategic financial plans; tools and methods of their evaluation, which allow to investigate them; technology of interpretation of the data obtained as a result of assessment and mechanisms for making the necessary management decisions that will improve the quality of strategic financial planning in the bank. The methodological tools of modeling and forecasting profitability as a targeted strategy of strategic financial management of the bank to ensure its viability are studied and improved in the work. 
It consists of a comprehensive and multi-stage analysis using statistical tests and the formation on this basis of multi-regression equations with the necessary indicators of statistical significance, based on internal factors influencing banks' financial results and is an analytical basis for transforming strategic financial management of the bank in uncertainty. The proposed scientific and methodological approach allows making predictive conclusions about changes in profitability and identifying weaknesses, including by comparing with the "peer-group" identified by the bank areas of placement of funds and sources of funding. It will form an analytical basis for developing a set of recommendations for improving the effectiveness of strategic management of the bank's finances in conditions of uncertainty, mainly through the adjustment of strategy, business model, and business plan. The results of statistical tests and multi-regression analysis of the sample of Ukrainian banks allowed us to conclude that uncertainty hurts their viability due to the constant decline in their profitability due to higher growth rates of costs compared to income. For banks with more conservative business models, the decrease in viability will be insignificant. Given the identification of negative trends that reflect the early, initial signs of the threat of declining viability of banks, the bank's strategic financial management should introduce appropriate preventive management influences to adapt the bank to operate in conditions of growing uncertainty. The systematic approach to forming an optimally balanced structure of assets and liabilities of the bank based on their optimization using nonlinear reduced gradient GRG is developed in work. It provides for the consistent implementation of the following stages: the formation of information support; calculation of the period of construction of the optimally balanced structure of assets and liabilities (modeling horizon); mathematical description of the objective function and conditions-constraints; determination of upper and lower limits of changes in balances on assets and liabilities; verification of information support and mathematical model; calculation of the optimally balanced structure of assets and liabilities based on the use of nonlinear algorithm of reduced GRG gradient; analysis of optimization results based on sensitivity analysis and ensuring the maintenance of an optimally balanced structure of the bank's assets and liabilities. The calculation result is an optimally balanced structure of assets and liabilities within an individually defined planning horizon. The proposed approach integrates the capabilities of the mathematical apparatus of the description of economic processes with the principles of strategic management of bank finances and the requirements of the NBU to ensure the stability and viability of the business model. Its main feature is the ability to modify the set of functional constraints depending on the individual characteristics of a particular bank and the ability to adapt to the real capabilities of the bank in terms of changes in the total balances of groups of assets and liabilities. Based on the results of approbation based on the data of Raiffeisen Bank Aval JSC, it is empirically proved that the bank needs to reconsider the structure of assets and liabilities with reorientation to work with individuals to increase efficiency, the greater level of financial stability, including capitalization and liquidity. 
In the end, this is a better strategy both under normal operating conditions and in terms of confronting conditions of uncertainty and instability. Subject to changes in the monetary policy of the NBU regarding the size of the discount rate, the bank will receive more benefits when it increases due to a further increase in the level of efficiency and ROA. The main provisions of the dissertation are reduced to the level of methodological developments and practical recommendations that can be applied by banking institutions in the process of making management decisions on the formation and use of financial resources of the bank in conditions of uncertainty.
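As an illustration of the balance-sheet optimisation described above: the thesis uses a GRG nonlinear solver, while the sketch below uses scipy's SLSQP as a readily available stand-in, and every balance, yield, bound and constraint value is invented for the example.

    import numpy as np
    from scipy.optimize import minimize

    # Illustrative optimisation of an asset/liability structure: choose volumes
    # of balance-sheet groups to maximise net interest income subject to a
    # balance identity, a liquidity floor and a crude capital-style limit.
    # x = [loans_retail, loans_corporate, securities, deposits_retail, deposits_corporate]
    yields = np.array([0.16, 0.12, 0.10, -0.07, -0.09])   # income (+) / cost (-) rates
    x0 = np.array([400.0, 600.0, 300.0, 650.0, 450.0])    # current balances, illustrative
    equity = 200.0

    def neg_net_income(x):
        return -float(yields @ x)

    constraints = [
        # Assets must equal liabilities plus equity.
        {"type": "eq", "fun": lambda x: (x[0] + x[1] + x[2]) - (x[3] + x[4] + equity)},
        # Liquidity: securities at least 15% of total assets.
        {"type": "ineq", "fun": lambda x: x[2] - 0.15 * (x[0] + x[1] + x[2])},
        # Capital-style limit: risk-weighted loans capped at 10x equity.
        {"type": "ineq", "fun": lambda x: 10 * equity - (1.0 * x[0] + 1.2 * x[1])},
    ]
    bounds = [(0.8 * v, 1.2 * v) for v in x0]             # feasible change per planning horizon

    res = minimize(neg_net_income, x0, bounds=bounds, constraints=constraints, method="SLSQP")
    print(res.x.round(1), round(-res.fun, 1))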
45

Fluixá, Sanmartín Javier. "Adaptation strategies of dam safety management to new climate change scenarios informed by risk indicators". Doctoral thesis, Universitat Politècnica de València, 2020. http://hdl.handle.net/10251/157634.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
[ES] Las grandes presas, así como los diques de protección, son infraestructuras críticas cuyo fallo puede conllevar importantes consecuencias económicas y sociales. Tradicionalmente, la gestión del riesgo y la definición de estrategias de adaptación en la toma de decisiones han asumido la invariabilidad de las condiciones climáticas, incluida la persistencia de patrones históricos de variabilidad natural y la frecuencia de eventos extremos. Sin embargo, se espera que el cambio climático afecte de forma importante a los sistemas hídricos y comprometa la seguridad de las presas, lo que puede acarrear posibles impactos negativos en términos de costes económicos, sociales y ambientales. Los propietarios y operadores de presas deben por tanto adaptar sus estrategias de gestión y adaptación a medio y largo plazo a los nuevos escenarios climáticos. En la presente tesis se ha desarrollado una metodología integral para incorporar los impactos del cambio climático en la gestión de la seguridad de presas y en el apoyo a la toma de decisiones. El objetivo es plantear estrategias de adaptación que incorporen la variabilidad de los futuros riesgos, así como la incertidumbre asociada a los nuevos escenarios climáticos. El impacto del cambio climático en la seguridad de presas se ha estructurado utilizando modelos de riesgo y mediante una revisión bibliográfica interdisciplinaria sobre sus potenciales efectos. Esto ha permitido establecer un enfoque dependiente del tiempo que incorpore la evolución futura del riesgo, para lo cual se ha definido un nuevo indicador que evalúa cuantitativamente la eficiencia a largo plazo de las medidas de reducción de riesgo. Además, para integrar la incertidumbre de los escenarios futuros en la toma de decisiones, la metodología propone una estrategia robusta que permite establecer secuencias optimizadas de implementación de medidas correctoras para la adaptación al cambio climático. A pesar de las dificultades para asignar probabilidades a eventos específicos, esta metodología permite un análisis sistemático y objetivo, reduciendo considerablemente la subjetividad. Esta metodología se ha aplicado al caso real de una presa española susceptible a los efectos del cambio climático. El análisis se centra en el escenario hidrológico, donde las avenidas son la principal carga a la que está sometida la presa. Respecto de análisis previos de la presa, los resultados obtenidos proporcionan nueva y valiosa información sobre la evolución de los riesgos futuros y sobre cómo abordarlos. En general, se espera un aumento del riesgo con el tiempo; esto ha llevado a plantear nuevas medidas de adaptación que no están justificadas en la situación actual. Esta es la primera aplicación documentada de un análisis exhaustivo de los impactos del cambio climático sobre el riesgo de rotura de una presa que sirve como marco de referencia para la definición de estrategias de adaptación a largo plazo y la evaluación de su eficiencia.
[CAT] Les grans preses, així com els dics de protecció, són infraestructures crítiques que si fallen poden produir importants conseqüències econòmiques i socials. Tradicionalment, la gestió del risc i la definició d'estratègies d'adaptació en la presa de decisions han assumit la invariabilitat de les condicions climàtiques, inclosa la persistència de patrons històrics de variabilitat natural i la probabilitat d'esdeveniments extrems. No obstant això, s'espera que el canvi climàtic afecte de manera important als sistemes hídrics i comprometi la seguretat de les preses, la qual cosa pot implicar possibles impactes negatius en termes de costos econòmics, socials i ambientals. Els propietaris i operadors de preses deuen per tant adaptar les seues estratègies de gestió i adaptació a mitjà i llarg termini als nous escenaris climàtics. En la present tesi s'ha desenvolupat una metodologia integral per a incorporar els impactes del canvi climàtic en la gestió de la seguretat de preses i en el suport a la presa de decisions. L'objectiu és plantejar estratègies d'adaptació que incorporen la variabilitat dels futurs riscos, així com la incertesa associada als nous escenaris climàtics. L'impacte del canvi climàtic en la seguretat de preses s'ha estructurat utilitzant models de risc i mitjançant una revisió bibliogràfica interdisciplinària sobre els seus potencials efectes. Això ha permès establir un enfocament dependent del temps que incorpori l'evolució futura del risc, per a això s'ha definit un nou indicador que avalua quantitativament l'eficiència a llarg termini de les mesures de reducció de risc. A més, per a integrar la incertesa dels escenaris futurs en la presa de decisions, la metodologia proposa una estratègia robusta que permet establir seqüències optimitzades d'implementació de mesures correctores per a l'adaptació al canvi climàtic. A pesar de les dificultats per a assignar probabilitats a esdeveniments específics, esta metodologia permet una anàlisi sistemàtica i objectiva, reduint considerablement la subjectivitat. Aquesta metodologia s'ha aplicat al cas real d'una presa espanyola susceptible a l'efecte del canvi climàtic. L'anàlisi se centra en l'escenari hidrològic, on les avingudes són la principal càrrega a la qual està sotmesa la presa. Respecte d'anàlisis prèvies de la presa, els resultats obtinguts proporcionen nova i valuosa informació sobre l'evolució dels riscos futurs i sobre com abordar-los. En general, s'espera un augment del risc amb el temps; això ha portat a plantejar noves mesures d'adaptació que no estarien justificades en la situació actual. Aquesta és la primera aplicació documentada d'una anàlisi exhaustiva dels impactes del canvi climàtic sobre el risc de trencament d'una presa que serveix com a marc de referència per a la definició d'estratègies d'adaptació a llarg termini i l'avaluació de la seua eficiencia.
[EN] Large dams as well as protective dikes and levees are critical infrastructures whose failure has major economic and social consequences. Risk assessment approaches and decision-making strategies have traditionally assumed the stationarity of climatic conditions, including the persistence of historical patterns of natural variability and the likelihood of extreme events. However, climate change has a major impact on the world's water systems and is endangering dam safety, leading to potentially damaging impacts in terms of economic, social and environmental costs. Owners and operators of dams must adapt their mid- and long-term management and adaptation strategies to new climate scenarios. This thesis proposes a comprehensive approach to incorporate climate change impacts on dam safety management and decision-making support. The goal is to design adaptation strategies that incorporate the non-stationarity of future risks as well as the uncertainties associated with new climate scenarios. Based on an interdisciplinary review of the state-of-the-art research on its potential effects, the global impact of climate change on dam safety is structured using risk models. This allows a time-dependent approach to be established to consider the potential evolution of risk with time. Consequently, a new indicator is defined to support the quantitative assessment of the long-term efficiency of risk reduction measures. Additionally, in order to integrate the uncertainty of future scenarios, the approach is enhanced with a robust decision-making strategy that helps establish the consensus sequence of measures to be implemented for climate change adaptation. Despite the difficulty of allocating probabilities to specific events, such a framework allows for a systematic and objective analysis, considerably reducing subjectivity. The methodology is applied to a real case study of a Spanish dam subjected to the effects of climate change. The analysis focuses on hydrological scenarios, where floods are the main load to which the dam is subjected. The results provide valuable new information with respect to the previously existing analysis of the dam regarding the evolution of future risks and how to cope with them. In general, risks are expected to increase with time and, as a result, new adaptation measures that are not justifiable for the present situation are recommended. This is the first documented application of a comprehensive analysis of climate change impacts on dam failure risk and serves as a reference benchmark for the definition of long-term adaptation strategies and the evaluation of their efficiency.
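A hedged sketch of the kind of time-dependent efficiency indicator described above: the discounted annual risk reduction achieved by a measure is aggregated over the planning horizon and the measure's cost is divided by it (cost per unit of discounted risk reduction, lower is better). The risk trajectories, measure names and costs below are purely illustrative and are not taken from the thesis.

    import numpy as np

    # Time-dependent efficiency indicator sketch for risk-reduction measures.
    years = np.arange(2025, 2101)
    discount = 1.03 ** -(years - years[0])

    # Baseline societal risk (e.g. expected annualised loss) rising with climate
    # change, and the residual risk after implementing each candidate measure.
    baseline_risk = 2.0 * 1.02 ** (years - years[0])
    residual_risk = {"raise_crest": 0.6 * baseline_risk,
                     "enlarge_spillway": 0.4 * baseline_risk}
    cost = {"raise_crest": 15.0, "enlarge_spillway": 40.0}

    for measure, residual in residual_risk.items():
        discounted_reduction = np.sum((baseline_risk - residual) * discount)
        # Cost per unit of discounted long-term risk reduction (lower is better).
        print(measure, round(cost[measure] / discounted_reduction, 3))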
Fluixá Sanmartín, J. (2020). Adaptation strategies of dam safety management to new climate change scenarios informed by risk indicators [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/157634
46

Hudec, Martin. "Klimatická změna a její vliv na vodohospodářské řešení zásobního objemu nádrže". Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2018. http://www.nusl.cz/ntk/nusl-372277.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The diploma thesis describes climate change and its impacts on the water management analysis of reservoir storage capacity. The development of the climate change influence on reservoir storage capacity is presented up to 2100. A detailed description of online downscaling is also given.
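A minimal sketch of how a climate-driven change in inflows propagates into the reservoir storage capacity required for a given yield, using the classical sequent-peak calculation on synthetic monthly inflows. This is illustrative only and not the method or data of the thesis.

    import numpy as np

    # Sequent-peak sketch: required storage to sustain a constant draft, repeated
    # for inflow series scaled to mimic climate scenarios (synthetic data).
    rng = np.random.default_rng(7)
    inflow = rng.gamma(shape=2.0, scale=5.0, size=12 * 50)   # 50 years of monthly inflow

    def required_storage(inflow, draft):
        """Sequent-peak algorithm: maximum cumulative deficit of draft over inflow."""
        deficit, worst = 0.0, 0.0
        for q in inflow:
            deficit = max(0.0, deficit + draft - q)
            worst = max(worst, deficit)
        return worst

    draft = 0.8 * inflow.mean()                              # target yield: 80% of mean inflow
    for scale in [1.0, 0.9, 0.8]:                            # scenarios scaling the inflows
        print(scale, round(required_storage(scale * inflow, draft), 1))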
47

Ostberg, Sebastian. "Joint impacts of climate and land use change on the terrestrial biosphere". Doctoral thesis, Humboldt-Universität zu Berlin, 2018. http://dx.doi.org/10.18452/19319.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Es gibt zwei Hauptpfade, über die der Mensch die terrestrische Biosphäre verändert: 1) direkt durch Landnutzungswandel (LNW) und 2) indirekt durch Klimawandel (KW), welcher seinerseits zu Ökosystemveränderungen führt. Die vorliegende Dissertation unternimmt den Versuch, die vom Menschen über beide diese Pfade verursachten Veränderungen konsistent und quantitativ zu bestimmen. Die Analyse basiert auf einem integrierten Indikator für makro-skalige Veränderungen der biogeochemikalischen Eigenschaften und der Ökosystemstruktur. Große Verschiebungen bei diesen grundlegenden Bausteinen der Biosphäre bedeuten ein Risiko für komplexere Ökosystemeigenschaften, da sie möglicherweise lange bestehende biotische Interaktionen unterbrechen. Die Arbeit stützt sich auf Simulationen mit dem dynamischen globalen Vegetations-, Agrar- und Hydrologiemodell LPJmL, um zu bestimmen, wie biogeochemische Eigenschaften und die Ökosystemstruktur auf historischen LNW und KW reagiert haben. Für die Zukunftsprojektionen wird LPJmL mit einer großen Anzahl an Klima- und Landnutzungsszenarien angetrieben. Laut den Simulationsergebnissen haben sich schwere Ökosystemveränderungen durch LNW und KW von lediglich 0,5% um 1700 auf 25-31% der Landoberfläche heute ausgedehnt. Landnutzung war in der Vergangenheit der wichtigste anthropogene Treiber schwerer Ökosystemveränderungen. Für das 21. Jahrhundert zeigen die Ergebnisse, dass KW voraussichtlich in allen außer den ambitioniertesten Mitigationsszenarien den Platz als Haupttreiber schwerer Ökosystemveränderungen übernehmen wird. Einige Landnutzungsszenarien nehmen an, dass zukünftige Effizienzsteigerungen trotz Bevölkerungswachtum eine Verringerung der landwirtschaftlichen Fläche ermöglichen. Doch auch verminderte LNW-Auswirkungen werden wahrscheinlich nicht ausreichen, um die Zunahme von Klimafolgen zu kompensieren, so dass die vom Menschen verursachte Transformation der Biosphäre in diesem Jahrhundert wahrscheinlich unabhängig vom Szenario wachsen wird.
There are two major pathways of human interference with the terrestrial biosphere: 1) directly through land use change (LUC) and 2) indirectly through anthropogenic climate change (CC) which in turn drives ecosystem change. This dissertation presents an attempt to assess human-induced biosphere change through both these pathways in a consistent and quantitative way. The analysis is based on an integrated indicator of macro-scale changes in biogeochemical characteristics and ecosystem structure. Large shifts in these basic building blocks of the biosphere are taken to indicate a risk to more complex ecosystem properties as they potentially disrupt long-standing biotic interactions. This dissertation relies on simulations with the dynamic global vegetation, agriculture and hydrology model LPJmL to quantify how biogeochemical characteristics and ecosystem structure have responded to historical LUC and CC. For future projections LPJmL is driven by a large number of CC and LUC scenarios, using the same indicator to measure the impact on the biosphere. Simulation results show that major impacts on the biosphere from CC and LUC have expanded from merely 0.5% of the land surface in 1700 to 25-31% of the land surface today. Land use has been the main anthropogenic driver causing major ecosystem change in the past. For the future, results show that CC is expected to take over as the main anthropogenic driver of major ecosystem change during this century in all but the most ambitious climate mitigation scenarios. Despite a growing world population, some land use scenarios project that future efficiency improvements will allow for a reduction of agricultural land and hence a reduction of the impact of LUC on the terrestrial biosphere. Yet, results also show that reduced LUC impacts will likely not be able to compensate for the increase in CC impacts, and human-induced transformation of the biosphere is likely to grow during this century regardless of the considered scenario.
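As a rough illustration of an integrated ecosystem-change indicator of the kind described above (not the thesis's actual metric), the sketch below averages normalised absolute changes of a few biogeochemical and structural variables per grid cell; variable names and values are synthetic.

    import numpy as np

    # Sketch of an aggregated ecosystem-change indicator: per grid cell, the
    # normalised absolute changes of a few state variables relative to the
    # reference state are averaged into one score (0 = unchanged, 1 = major change).
    rng = np.random.default_rng(3)
    n_cells = 5
    reference = {"carbon_stock": rng.uniform(5, 20, n_cells),   # kgC/m2
                 "npp": rng.uniform(0.2, 1.0, n_cells),         # kgC/m2/yr
                 "tree_cover": rng.uniform(0.1, 0.9, n_cells)}  # fraction
    scenario = {k: v * rng.uniform(0.6, 1.4, n_cells) for k, v in reference.items()}

    def change_indicator(ref, scen):
        rel_changes = [np.abs(scen[k] - ref[k]) / ref[k] for k in ref]
        return np.clip(np.mean(rel_changes, axis=0), 0, 1)

    print(change_indicator(reference, scenario).round(2))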
48

Nosjean, Nicolas. "Management et intégration des risques et incertitudes pour le calcul de volumes de roches et de fluides au sein d’un réservoir, zoom sur quelques techniques clés d’exploration Integrated Post-stack Acoustic Inversion Case Study to Enhance Geological Model Description of Upper Ordovicien Statics : from imaging to interpretation pitfalls and an efficient way to overcome them Improving Upper Ordovician reservoir characterization - an Algerian case study Tracking Fracture Corridors in Tight Gas Reservoirs : An Algerian Case Study Integrated sedimentological case study of glacial Ordovician reservoirs in the Illizi Basin, Algeria A Case Study of a New Time-Depth Conversion Workflow Designed for Optimizing Recovery Proper Systemic Knowledge of Reservoir Volume Uncertainties in Depth Conversion Integration of Fault Location Uncertainty in Time to Depth Conversion Emergence of edge scenarios in uncertainty studies for reservoir trap analysis Enhancing geological model with the use of Spectral Decomposition - A case study of a prolific stratigraphic play in North Viking Graben, Norway Fracture corridor identification through 3D multifocusing to improve well deliverability, an Algerian tight reservoir case study Geological Probability Of Success Assessment for Amplitude-Driven Prospects, A Nile Delta Case Study". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASS085.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
En tant que géoscientifique dans le domaine de l'Exploration pétrolière et gazière depuis une vingtaine d'années, mes fonctions professionnelles m'ont permis d'effectuer différents travaux de recherche sur la thématique de la gestion des risques et des incertitudes. Ces travaux de recherche se situent sur l'ensemble de la chaîne d'analyse Exploration, traitant de problématiques liées à l'acquisition et au traitement sismique, jusqu'au placement optimal de forages d'exploration. Un volet plus poussé de mes travaux s'est orienté sur la gestion des incertitudes géophysiques en Exploration pétrolière, là où l'incertitude est la plus importante et paradoxalement la moins travaillée. On peut regrouper mes travaux de recherche en trois grands domaines qui suivent les grandes étapes du processus Exploration : le traitement sismique, leur interprétation, et enfin l'analyse et l'extraction des différentes incertitudes qui vont nous permettre de calculer les volumes d'hydrocarbures en place et récupérables, ainsi que l'analyse de ses risques associés. L'ensemble des travaux de recherche ont été appliqués avec succès sur des cas d'études opérationnelles. Après avoir introduit quelques notions générales et détaillé les grandes étapes du processus Exploration et leur lien direct avec ces problématiques, je présenterai quatre grands projets de recherche sur un cas d'étude algérien.
Over the last 20 years, I have conducted various research projects focused on the management of risks and uncertainties in the petroleum exploration domain. The research projects detailed in this thesis address problems located throughout the whole Exploration and Production chain, from seismic acquisition and processing to the optimal placement of exploration and development wells. The focus is on geophysical risks and uncertainties, where these problems are most pronounced and, paradoxically, least worked on in the industry. My research projects can be subdivided into three main axes that follow the hydrocarbon exploration process: seismic processing; seismic interpretation, through integration with various well information; and finally the analysis and extraction of key uncertainties, which form the basis for the calculation of in-place and recoverable volumes and for the associated risk analysis on a given target structure. The research projects detailed in this thesis have been applied successfully to operational North Africa and North Sea projects. After introducing the notions of risk and uncertainty, I detail the exploration process and its key links with these issues, and then present four major research projects with their theoretical aspects and an applied case study on an Algerian asset.
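As an illustration of how extracted uncertainties feed into in-place and recoverable volume estimates, the following minimal Monte Carlo sketch propagates hypothetical input distributions through the standard volumetric formula STOIIP = GRV x N/G x porosity x (1 - Sw) / Bo, with recoverable volume = STOIIP x RF. It is a generic textbook calculation under assumed distributions, not the workflow developed in the thesis.

# Illustrative Monte Carlo volumetrics sketch; all distributions are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

grv = rng.lognormal(mean=np.log(500e6), sigma=0.3, size=n)  # gross rock volume, m^3
ntg = rng.triangular(0.4, 0.6, 0.8, size=n)                 # net-to-gross ratio
phi = rng.normal(0.18, 0.02, size=n).clip(0.05, 0.30)       # porosity
sw = rng.normal(0.35, 0.05, size=n).clip(0.10, 0.80)        # water saturation
bo = rng.normal(1.25, 0.05, size=n)                         # formation volume factor
rf = rng.triangular(0.20, 0.30, 0.40, size=n)               # recovery factor

stoiip = grv * ntg * phi * (1.0 - sw) / bo                  # in-place oil, m^3
recoverable = stoiip * rf

# Exploration convention: P90 = 90% chance of exceedance = 10th percentile, etc.
p90, p50, p10 = np.percentile(recoverable, [10, 50, 90])
print(f"Recoverable volume  P90={p90:.3e}  P50={p50:.3e}  P10={p10:.3e} m^3")

The value of such a propagation is that the spread between P90 and P10 makes explicit how much each uncertain input contributes to the risk carried by a given target structure.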
49

Vestin, Albin, e Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms". Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Today, the main research field for the automotive industry is finding solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of tracking performance is often done in staged traffic scenarios, where additional sensors mounted on the vehicles are used to obtain their true positions and velocities. The difficulty of evaluating tracking performance complicates its development. An alternative approach, studied in this thesis, is to record sequences and use non-causal algorithms, such as smoothing instead of filtering, to estimate the true target states. With this method, validation data for online, causal target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect tracking performance using multiple sensors and dynamic models of different complexity, in order to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single-object scenarios where ground truth is available and in three multi-object scenarios without ground truth. Results from the two single-object scenarios show that tracking using only a monocular camera performs poorly, since it is unable to measure the distance to objects; a complementary LIDAR sensor improves tracking performance significantly. The dynamic models are shown to have a small impact on tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from the more certain states obtained when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving tracking performance with non-causal algorithms.
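The core idea of replacing extra reference sensors with offline post-processing can be illustrated with a generic textbook causal/non-causal pair: a Kalman filter run forward over a recorded sequence, followed by a Rauch-Tung-Striebel (RTS) smoother run backward. The sketch below uses a simple constant-velocity model with position-only measurements; the model, noise levels and simulated data are assumptions for illustration, not the thesis implementation.

# Minimal causal-vs-non-causal sketch: forward Kalman filter, backward RTS smoother.
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.1, 200
F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity motion model
H = np.array([[1.0, 0.0]])                   # position-only measurement
Q = 0.1 * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
R = np.array([[0.5]])

# Simulate a target trajectory and noisy measurements
x_true = np.zeros((n, 2)); x_true[0] = [0.0, 1.0]
for k in range(1, n):
    x_true[k] = F @ x_true[k - 1] + rng.multivariate_normal([0.0, 0.0], Q)
z = x_true[:, :1] + rng.normal(0.0, np.sqrt(R[0, 0]), size=(n, 1))

# Forward (causal) Kalman filter, storing predictions for the backward pass
m = np.zeros((n, 2)); P = np.zeros((n, 2, 2))
m_pred = np.zeros((n, 2)); P_pred = np.zeros((n, 2, 2))
m[0], P[0] = np.array([z[0, 0], 0.0]), np.eye(2)
for k in range(1, n):
    m_pred[k] = F @ m[k - 1]
    P_pred[k] = F @ P[k - 1] @ F.T + Q
    S = H @ P_pred[k] @ H.T + R
    K = P_pred[k] @ H.T @ np.linalg.inv(S)
    m[k] = m_pred[k] + K @ (z[k] - H @ m_pred[k])
    P[k] = (np.eye(2) - K @ H) @ P_pred[k]

# Backward (non-causal) RTS smoother over the whole recorded sequence
ms, Ps = m.copy(), P.copy()
for k in range(n - 2, -1, -1):
    G = P[k] @ F.T @ np.linalg.inv(P_pred[k + 1])
    ms[k] = m[k] + G @ (ms[k + 1] - m_pred[k + 1])
    Ps[k] = P[k] + G @ (Ps[k + 1] - P_pred[k + 1]) @ G.T

rmse = lambda est: np.sqrt(np.mean((est[:, 0] - x_true[:, 0]) ** 2))
print(f"position RMSE  filter: {rmse(m):.3f}   smoother: {rmse(ms):.3f}")

On such a recorded sequence the smoothed estimates typically have lower error than the causal filter, which is what makes offline smoothing usable as pseudo ground truth when validating an online tracker.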
50

Shaw, Michael Patrick. "The use of scenario planning for managing environmental uncertainty". Thesis, 2003. http://hdl.handle.net/10413/4221.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
There were two main objectives for this research. The first was to understand how organisations think strategically and formulate strategy for the current and future environments in which they operate; the second was to determine what organisations are doing to manage complexity and uncertainty in these environments. This necessitated a review of "traditional" or "rationalist" strategy, the "resource-based view" of strategy, and whether and how organisations use scenario planning as a means to reduce environmental uncertainty, develop strategic options, improve the quality of strategic decisions, and facilitate organisational learning. The research methodology was qualitative in nature and involved a literature review and three case studies of organisations in two industries. Primary data was acquired from semi-structured interviews and workshops; secondary data came from annual reports, analysts' reports, books, journals and periodicals, and documents made available by the subjects of the study. The workshops were also used to confirm the veracity of the data and to explore emerging information, themes and concepts. The research led to the development of a framework for the analysis of strategy formulation in organisations, and it surfaced three predominant themes:
1. The strategy process and strategic response of an organisation develop in accordance with its market dynamics and environmental drivers. The primary drivers shaping the strategy process and strategic response are the nature of demand in the market and the market context; secondary drivers are the political, economic, social, technological and regulatory environments.
2. The market context is determined by the industry structure, which can be monopolistic, oligopolistic or open and competitive, and by the profile and characteristics of the competition.
3. There are three organisational determinants of the strategic response: the political and cultural systems metaphors, the mental models that develop as a result of these systems, and the type and nature of individual and organisational learning.
Thesis (M.Sc.)-University of Natal, Durban, 2003.
