
Dissertations / Theses on the topic 'Calibration of climate model'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Calibration of climate model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Raoult, Nina. "Calibration of plant functional type parameters using the adJULES system." Thesis, University of Exeter, 2017. http://hdl.handle.net/10871/29837.

Abstract:
Land-surface models (LSMs) are crucial components of the Earth system models (ESMs) that are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. JULES is also extensively used offline as a land-surface impacts tool, forced with climatologies into the future. In this study, JULES is automatically differentiated with respect to JULES parameters using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameters by calibrating against observations. This thesis describes adJULES in a data assimilation framework and demonstrates its ability to improve the model-data fit using eddy-covariance measurements of gross primary productivity (GPP) and latent heat (LE) fluxes. The adJULES system is extended to have the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the five plant functional types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 85% of the sites used in the study, at both the calibration and evaluation stages. The new improved parameters for JULES are presented along with the associated uncertainties for each parameter. The results of the calibrations are compared to structural changes and used in a cluster analysis in order to challenge the PFT definitions in JULES. This thesis concludes with simple sensitivity studies which assess how the calibration of JULES has affected the sensitivity of the model to CO2-induced climate change.
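The core operation in an adjoint-based system like adJULES is minimising a model-data misfit cost function with the help of the analytic gradient. A minimal sketch of that idea follows, with a hypothetical toy flux model and made-up parameters standing in for JULES; it is not the adJULES code.

```python
# Illustrative sketch: gradient-based calibration of model parameters against
# flux observations, as in variational data assimilation. All names are hypothetical.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
forcing = rng.uniform(0.0, 1.0, size=200)            # e.g. normalised radiation
p_true = np.array([2.0, 0.5])                        # "true" PFT parameters
obs = p_true[0] * forcing / (p_true[1] + forcing)    # synthetic GPP-like flux
obs += rng.normal(0.0, 0.05, size=forcing.size)      # observation noise

p_prior = np.array([1.5, 0.8])                       # prior parameter guess
B_inv = np.diag([1.0, 1.0])                          # inverse prior covariance
R_inv = 1.0 / 0.05**2                                # inverse obs error variance

def cost_and_grad(p):
    """Variational-style cost: background term + observation misfit term."""
    model = p[0] * forcing / (p[1] + forcing)
    resid = model - obs
    dp = p - p_prior
    J = 0.5 * dp @ B_inv @ dp + 0.5 * R_inv * resid @ resid
    # Analytic gradient -- the role played by the adjoint in adJULES.
    dm_dp0 = forcing / (p[1] + forcing)
    dm_dp1 = -p[0] * forcing / (p[1] + forcing) ** 2
    grad = B_inv @ dp + R_inv * np.array([resid @ dm_dp0, resid @ dm_dp1])
    return J, grad

result = minimize(cost_and_grad, p_prior, jac=True, method="L-BFGS-B")
print("optimised parameters:", result.x)             # should approach p_true
```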
2

Niraula, Rewati. "Understanding the Hydrological Response of Changed Environmental Boundary Conditions in Semi-Arid Regions: Role of Model Choice and Model Calibration." Diss., The University of Arizona, 2015. http://hdl.handle.net/10150/594961.

Abstract:
Arid and semi-arid basins in the Western United States (US) have been significantly impacted by human alterations to the water cycle and are among the most susceptible to water stress from urbanization and climate change. The climate of the Western US is projected to change in response to rising greenhouse gas concentrations. Combined with land use/land cover (LULC) change, this can influence both surface water and groundwater resources, both of which are a significant source of water in the US. Responding to this challenge requires an improved understanding of how we are vulnerable and the development of strategies for managing future risk. In this dissertation, I explored how the hydrology of semi-arid regions responds to LULC and climate change and how hydrologic projections are influenced by the choice and calibration of models. The three main questions addressed in this dissertation are: 1. Is it important to calibrate models for forecasting absolute/relative changes in streamflow from LULC and climate changes? 2. Do land surface models (LSMs) make reasonable estimates of groundwater recharge in the Western US? 3. How might recharge change under projected climate change in the Western US? Results from this study suggested that it is important to calibrate the model spatially to analyze the effect of LULC change, but less important for analyzing the relative change in streamflow due to climate change. Our results also highlighted that LSMs have the potential to capture the spatial and temporal patterns as well as the seasonality of recharge at large scales. Therefore, LSMs (specifically VIC and Noah) can be used as tools for estimating current and future recharge in data-limited regions. Average annual recharge is projected to increase in about 62% of the region and decrease in about 38% of the Western US, varying significantly by location (-50% to +94% for the near future and -90% to >100% for the far future). Recharge is expected to decrease significantly (-13%) in the South region in the far future, while the Northern Rockies region is expected to receive more recharge in both the near (+5.1%) and far (+9.0%) future. Overall, this study suggested that LULC change and climate change significantly impact hydrology in semi-arid regions, and that model choice and model calibration also influence hydrological predictions. Hydrological projections from models have associated uncertainty, but still provide valuable information for water managers in long-term water management planning.
3

Davies, Nicholas William. "The climate impacts of atmospheric aerosols using in-situ measurements, satellite retrievals and global climate model simulations." Thesis, University of Exeter, 2018. http://hdl.handle.net/10871/34544.

Abstract:
Aerosols contribute the largest uncertainty to estimates of radiative forcing of the Earth’s atmosphere, which are thought to exert a net negative radiative forcing, offsetting a potentially significant but poorly constrained fraction of the positive radiative forcing associated with greenhouse gases. Aerosols perturb the Earth’s radiative balance directly by absorbing and scattering radiation and indirectly by acting as cloud condensation nuclei, altering cloud albedo and potentially cloud lifetime. One of the major factors governing the uncertainty in estimates of aerosol direct radiative forcing is the poorly constrained aerosol single scattering albedo, which is the ratio of the aerosol scattering to extinction. In this thesis, I describe a new instrument for the measurement of aerosol optical properties using photoacoustic and cavity ring-down spectroscopy. Characterisation is performed by assessing the instrument minimum sensitivity and accuracy as well as verifying the accuracy of its calibration procedure. The instrument and calibration accuracies are assessed by comparing modelled to measured optical properties of well-characterised laboratory-generated aerosol. I then examine biases in traditional, filter-based absorption measurements by comparing to photoacoustic spectrometer absorption measurements for a range of aerosol sources at multiple wavelengths. Filter-based measurements consistently overestimate absorption although the bias magnitude is strongly source-dependent. Biases are consistently lowest when an advanced correction scheme is applied, irrespective of wavelength or aerosol source. Lastly, I assess the sensitivity of the direct radiative effect of biomass burning aerosols to aerosol and cloud optical properties over the Southeast Atlantic Ocean using a combination of offline radiative transfer modelling, satellite observations and global climate model simulations. Although the direct radiative effect depends on aerosol and cloud optical properties in a non-linear way, it appears to be only weakly dependent on sub-grid variability.
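For reference, the single scattering albedo discussed above is a simple ratio; a worked example with hypothetical scattering and absorption coefficients:

```python
# Single scattering albedo (SSA) = scattering / extinction, where
# extinction = scattering + absorption. Values below are hypothetical.
scattering = 80.0   # scattering coefficient, Mm^-1
absorption = 20.0   # absorption coefficient, Mm^-1
ssa = scattering / (scattering + absorption)
print(f"SSA = {ssa:.2f}")  # 0.80; lower SSA means a more absorbing aerosol
```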
4

Bensouda, Nabil. "Extending and formalizing the energy signature method for calibrating simulations and illustrating with application for three California climates." Texas A&M University, 2004. http://hdl.handle.net/1969.1/1080.

Abstract:
This thesis extends and formalizes the energy signature method developed by Wei et al. (1998) for the rapid calibration of cooling and heating energy consumption simulations for commercial buildings. This method is based on the use of "calibration signatures" which characterize the difference between measured and simulated performance. By creating a library of shapes for certain known errors, clues can be provided to the analyst to use in identifying what simulation input errors may be causing the discrepancies. These are referred to as "characteristic signatures". In this thesis, sets of characteristic signatures are produced for the climates typified by Pasadena, Sacramento and Oakland, California for each of the four major system types: single-duct variable-air-volume, single-duct constant-volume, dual-duct variable-air-volume and dual-duct constant-volume. A detailed step-by-step description is given for the proposed methodology, and two examples and a real-world case study serve to illustrate the use of the signature method.
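The central object of the method is the calibration signature: the difference between measured and simulated consumption expressed as a function of outdoor temperature. A minimal sketch with hypothetical data, not the thesis's simulation output:

```python
import numpy as np

t_out = np.linspace(0.0, 35.0, 36)                       # daily mean outdoor temp, deg C
measured = 120.0 + 8.0 * np.maximum(t_out - 18.0, 0.0)   # measured cooling energy, kWh
simulated = 110.0 + 9.5 * np.maximum(t_out - 20.0, 0.0)  # simulated cooling energy, kWh

# Calibration signature: the measured-minus-simulated difference, normalised
# and expressed as a function of outdoor temperature.
signature = 100.0 * (measured - simulated) / measured.max()

for t, s in zip(t_out[::7], signature[::7]):
    print(f"{t:5.1f} C -> {s:+6.2f} %")
# The shape of this curve is matched against a library of "characteristic
# signatures" for known input errors to suggest which input to correct.
```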
5

Andersson, Sara. "Mapping Uncertainties – A case study on a hydraulic model of the river Voxnan." Thesis, KTH, Mark- och vattenteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-173848.

Abstract:
This master thesis gives an account of the numerous uncertainties that prevail in one-dimensional hydraulic models and flood inundation maps, as well as suitable assessment methods for different types of uncertainties. An uncertainty assessment of the river Voxnan in Sweden was performed. The case study included the calibration uncertainty in the spatially varying roughness coefficient and the boundary condition uncertainty in the magnitude of a 100-year flood, under present and future climate conditions. By combining a scenario analysis, the GLUE calibration method and Monte Carlo analysis, the included uncertainties of different natures could be assessed. Significant uncertainties regarding the magnitude of a 100-year flood derived from frequency analysis were found. The largest contribution to the overall uncertainty came from the variance between the nine global climate models, emphasizing the importance of including projections from an ensemble of models in climate change studies. Furthermore, the study gives a methodological example of how to present uncertainty estimates visually in probabilistic flood inundation maps. The method by which the climate change uncertainties, scenarios and models were handled in the frequency analysis is also suggested to be a relevant result of the study.
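For readers unfamiliar with GLUE (Generalised Likelihood Uncertainty Estimation), a minimal sketch of the idea, with a hypothetical one-parameter model standing in for the hydraulic model:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(10, dtype=float)
obs = 2.0 * x + rng.normal(0.0, 0.5, x.size)      # synthetic observations

samples = rng.uniform(1.0, 3.0, 5000)             # Monte Carlo parameter draws
sse = np.array([np.sum((k * x - obs) ** 2) for k in samples])
likelihood = 1.0 / sse                            # an informal GLUE likelihood

behavioural = likelihood > np.quantile(likelihood, 0.95)   # keep the best 5 %
weights = likelihood[behavioural] / likelihood[behavioural].sum()

preds = np.outer(samples[behavioural], x)         # predictions of retained sets
mean_pred = weights @ preds                       # likelihood-weighted prediction
print("behavioural parameter range:",
      samples[behavioural].min().round(3), "to", samples[behavioural].max().round(3))
print("weighted prediction at x=9:", mean_pred[-1].round(2))
```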
6

Liang, Dong. "Issues in Bayesian Gaussian Markov random field models with application to intersensor calibration." Supervised by Mary Kathryn Cowles. Iowa City: University of Iowa, 2009. http://ir.uiowa.edu/etd/400.

7

Ben, Touhami Haythem. "Calibration Bayésienne d'un modèle d'étude d'écosystème prairial : outils et applications à l'échelle de l'Europe." Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22444/document.

Abstract:
Grasslands cover 45% of the agricultural area in France and 40% in Europe. Grassland ecosystems have a central role in the climate change context, not only because they are impacted by climate change but also because they contribute to greenhouse gas emissions. The aim of this thesis was to contribute to the assessment of uncertainties in the outputs of grassland simulation models, which are used in impact studies, with a focus on model parameterization. In particular, we used Bayesian statistical methods, based on Bayes' theorem, to calibrate the parameters of a reference model, and thus improve performance by reducing the uncertainty in the parameters and, consequently, in the outputs provided by the model. Our approach is essentially based on the grassland ecosystem model PaSim (Pasture Simulation model), already applied in a variety of international projects to simulate the impact of climate change on grassland systems. The originality of this thesis was to adapt the Bayesian method to a complex ecosystem model such as PaSim (applied in the context of an altered climate and across the European territory) and to show its potential benefits in reducing uncertainty and improving the quality of model outputs. This was obtained by combining statistical methods (Bayesian techniques and sensitivity analysis with the method of Morris) and computing tools (R-PaSim code coupling and the use of cluster computing resources). We first produced a new parameterization for grassland sites under drought conditions, and then a common parameterization for European grasslands. We also provided a generic calibration software tool that can be reused with other models and sites. Finally, we evaluated the performance of the calibrated model, obtained through the Bayesian technique, against data from validation sites; the results confirmed the efficiency of this technique for reducing uncertainty and improving the reliability of simulation outputs.
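A minimal sketch of Bayesian calibration by Markov chain Monte Carlo, the kind of machinery the thesis applies to PaSim; the one-parameter model and the prior here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)
obs = 3.0 * x + rng.normal(0.0, 0.2, x.size)     # synthetic observations

def log_post(theta):
    resid = obs - theta * x
    log_lik = -0.5 * np.sum(resid ** 2) / 0.2 ** 2
    log_prior = -0.5 * (theta - 2.0) ** 2        # N(2, 1) prior on the parameter
    return log_lik + log_prior

theta, lp = 2.0, log_post(2.0)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.05)         # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis acceptance rule
        theta, lp = prop, lp_prop
    chain.append(theta)

posterior = np.array(chain[5000:])               # discard burn-in
print(f"posterior mean: {posterior.mean():.3f} +/- {posterior.std():.3f}")
```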
8

Battisti, Rafael. "Calibration, uncertainties and use of soybean crop simulation models for evaluating strategies to mitigate the effects of climate change in Southern Brazil." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/11/11152/tde-03102016-162340/.

Abstract:
Water deficit is the major factor responsible for the soybean yield gap in Southern Brazil and tends to increase under climate change. Crop models are tools that differ in complexity and performance and can be used to evaluate strategies for managing crops according to climate conditions. Based on that, the aims of this study were: to assess five soybean crop models and their ensemble; to evaluate the sensitivity of these models to systematic changes in climate; to assess soybean adaptive traits to water deficit for current and future climates; and to evaluate how crop management contributes to soybean yields under current and future climates. The crop models FAO - Agroecological Zone, AQUACROP, DSSAT CSM-CROPGRO-Soybean, APSIM Soybean, and MONICA were assessed. These crop models were calibrated using experimental data obtained during the 2013/2014 growing season at different sites, sowing dates and crop conditions (rainfed and irrigated). The sensitivity analysis considered climate changes in air temperature, [CO2], rainfall and solar radiation. For traits adapting to drought, the soybean traits manipulated (only in DSSAT CSM-CROPGRO-Soybean) were: deeper root depth; maximum fraction of shoot dry matter diverted to root growth under water stress; early reduction of transpiration; transpiration limited as a function of vapour pressure deficit; N2-fixation drought tolerance; and reduced acceleration of the grain-filling period in response to water deficit. The crop management strategies evaluated were irrigation, sowing date, cultivar maturity group and planting density. The estimated yield had a root mean square error (RMSE) varying between 553 kg ha-1 and 650 kg ha-1, with d indices always higher than 0.90 for all models. The best performance was obtained when an ensemble of all models was considered, reducing the yield RMSE to 262 kg ha-1. The crop models showed different levels of sensitivity to the climate scenarios: yield decreased with increasing temperature; the rate of yield loss under reduced rainfall was higher than the rate of yield gain under increased rainfall; responses to changes in solar radiation differed with the baseline climate and the model; and the soybean response to increasing [CO2] was asymptotic. Combining the climate scenarios, yield was affected mainly by the reduction of rainfall (and the associated increase in solar radiation), while the interaction of temperature and [CO2] showed compensating effects on yield losses and gains. The deeper rooting profile trait gave the greatest improvement in total production for Southern Brazil, with increases of 3.3% and 4.0% for the current and future climates, respectively. For soybean management, in most cases the models showed no crop management strategy with a clear tendency to yield better in the future when shifted away from the best management for the current climate. Thus, the crop models showed different performance against observed data, and model parametrization and structure affected the response of the management alternatives to climate change. Despite these uncertainties, crop models and their ensemble are an important tool for evaluating the impact of climate change and mitigation alternatives.
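The benefit of the model ensemble reported above (single-model RMSEs of 553-650 kg ha-1 versus 262 kg ha-1 for the ensemble) reflects a general property: averaging models with independent errors reduces RMSE. A sketch with synthetic yields:

```python
import numpy as np

rng = np.random.default_rng(3)
observed = rng.normal(3000.0, 400.0, 50)                   # kg/ha, synthetic yields
models = [observed + rng.normal(0.0, 600.0, 50) for _ in range(5)]

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

for i, m in enumerate(models, 1):
    print(f"model {i}: RMSE = {rmse(m, observed):.0f} kg/ha")
ensemble = np.mean(models, axis=0)                         # ensemble mean
# Independent errors partially cancel, so the ensemble RMSE is lower.
print(f"ensemble: RMSE = {rmse(ensemble, observed):.0f} kg/ha")
```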
9

Dinh, Thi Lan Anh. "Crop yield simulation using statistical and machine learning models. From the monitoring to the seasonal and climate forecasting." Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS425.

Abstract:
Weather and climate strongly impact crop yields. Many studies based on different techniques have been done to measure this impact. This thesis focuses on statistical models to measure the sensitivity of crops to weather conditions based on historical records. When using a statistical model, a critical difficulty arises when data is scarce, which is often the case with statistical crop modelling: there is a high risk of overfitting if the model development is not done carefully. Thus, careful validation and selection of statistical models are major concerns of this thesis. Two statistical approaches are developed. The first one uses linear regression with regularization and leave-one-out cross-validation (or LOO), applied to Robusta coffee in the main coffee-producing area of Vietnam (i.e. the Central Highlands). Coffee is a valuable commodity crop, sensitive to weather, and has a very complex phenology due to its perennial nature. Results suggest that precipitation and temperature information can be used to forecast the yield anomaly with 3-6 months' anticipation depending on the location. Estimates of Robusta yield at the end of the season show that weather explains up to 36% of historical yield anomalies. The first approach using LOO is widely used in the literature; however, it can be misused for many reasons: it is technical, often misinterpreted, and requires experience. As an alternative, the "leave-two-out nested cross-validation" (or LTO) approach is proposed to choose the suitable model and assess its true generalization ability. This method is sophisticated but straightforward; its benefits are demonstrated for Robusta coffee in Vietnam and grain maize in France. In both cases, a simpler model with fewer potential predictors and inputs is more appropriate. Using only the LOO method, without any regularization, can be highly misleading as it indirectly encourages choosing a model that overfits the data. The LTO approach is also useful in seasonal forecasting applications. The end-of-season grain maize yield estimates suggest that weather can account for more than 40% of the variability in yield anomaly. Climate change impacts on coffee production in Brazil and Vietnam are also studied using climate simulations and suitability models. Climate data are, however, biased compared to the real-world climate; therefore, many "bias correction" methods (called here instead "calibration") have been introduced to correct these biases. An up-to-date review of the available methods is provided to better understand each method's assumptions, properties, and applicative purposes. The climate simulations are then calibrated by a quantile-based method before being used in the suitability models. The suitability models are developed based on census data of coffee areas, and potential climate variables are based on a review of previous studies using impact models for coffee and on expert recommendations. Results show that suitable Arabica areas in Brazil could decrease by about 26% by mid-century in the high-emissions scenario, while the decrease is surprisingly high for Vietnamese Robusta coffee (≈ 60%). Impacts are significant at low elevations for both coffee types, suggesting potential shifts in production to higher locations. The statistical approaches used, especially the LTO technique, can contribute to the development of crop modelling. They can be applied to a complex perennial crop like coffee or to more industrialized annual crops like grain maize. They can be used in seasonal forecasts or end-of-season estimations, which are helpful in crop management and monitoring. Estimating future crop suitability helps to anticipate the consequences of climate change on the agricultural system and to define adaptation or mitigation strategies. The methodologies used in this thesis can easily be generalized to other crops and regions worldwide.
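A minimal sketch of the quantile-based calibration (bias correction) mentioned above, using empirical quantile mapping on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 5.0, 3000)           # "observed" climate variable
sim = rng.gamma(2.0, 6.5, 3000) + 2.0     # biased model simulation

q = np.linspace(0.01, 0.99, 99)
sim_q = np.quantile(sim, q)               # simulated quantiles
obs_q = np.quantile(obs, q)               # observed quantiles

def calibrate(x):
    """Map each simulated value onto the observed distribution."""
    return np.interp(x, sim_q, obs_q)

corrected = calibrate(sim)
print("mean bias before:", round(float(sim.mean() - obs.mean()), 2))
print("mean bias after: ", round(float(corrected.mean() - obs.mean()), 2))
```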
10

Martínez Asensio, Adrián. "Impact of large-scale atmospheric variability on sea level and wave climate." Doctoral thesis, Universitat de les Illes Balears, 2015. http://hdl.handle.net/10803/371456.

Abstract:
This thesis aims at quantitatively characterizing the recent (last few decades) and future variability of the marine climate in the Western Mediterranean Sea and the North Atlantic Ocean. It focuses on sea level and wind-waves, as these are the variables with the largest potential impact on coastal ecosystems and infrastructures. We first use buoy and altimetry data to calibrate a 50-year wind-wave hindcast over the Western Mediterranean in order to obtain the best characterization of the wave climate over that region. The minimization of the differences with respect to observations, through a non-linear transformation of the Empirical Orthogonal Functions of the modelled fields, results in an improvement of the hindcast, according to a validation test carried out with independent observations. We then focus on the relationship between the large-scale atmospheric forcing and our target variables. Namely, we quantify and explore the cause-effect relations between the major modes of atmospheric variability over the North Atlantic and Europe (the North Atlantic Oscillation, the East Atlantic pattern, the East Atlantic/Western Russian pattern and the Scandinavian pattern) and both Mediterranean sea level and the North Atlantic wave climate. To do so, we use data from different sets of observations and numerical models, including tide gauges, wave buoys, altimetry, hydrography and numerical simulations. Our results point to the North Atlantic Oscillation as the mode with the largest impact on both Mediterranean sea level (due to its local and remote influence on the atmospheric component) and the North Atlantic wave climate (due to its effect on both the wind-sea and swell components). Other climate indices have smaller but still meaningful contributions; e.g. the East Atlantic pattern plays a significant role in wave climate variability through its impact on the swell component. Finally, we explore the performance of statistical models to project the future wave climate over the North Atlantic under global warming scenarios, including the large-scale climate modes as predictors together with other variables such as atmospheric pressure and wind speed. Notably, we highlight that the use of wind speed as a statistical predictor is essential to reproduce the dynamically projected long-term trends.
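A sketch of the EOF machinery behind such a hindcast calibration: decompose the modelled fields into EOFs and adjust the principal components against observations. The simple linear rescaling below is a placeholder for the thesis's non-linear transformation, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
field = rng.normal(0.0, 1.0, (600, 40))          # time x grid, synthetic hindcast
mean = field.mean(axis=0)
anom = field - mean                              # anomalies

u, s, vt = np.linalg.svd(anom, full_matrices=False)
pcs = u * s                                      # principal components
eofs = vt                                        # spatial patterns (EOFs)

pcs_adj = pcs.copy()
pcs_adj[:, 0] *= 1.15                            # hypothetical correction of mode 1
calibrated = pcs_adj @ eofs + mean               # rebuild the corrected field
print("leading-mode variance scaled by", round(1.15 ** 2, 3))
```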
11

Huang, Jian. "Assessing predictive performance and transferability of species distribution models for freshwater fish in the United States." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/73477.

Abstract:
Rigorous modeling of spatial species distributions is critical in biogeography, conservation, resource management, and the assessment of climate change. The goal of chapter 2 of this dissertation was to evaluate the potential of using historical samples to develop high-resolution species distribution models (SDMs) of stream fishes of the United States. I explored the spatial and temporal transferability of stream-fish distribution models in chapters 3 and 4, respectively. Chapter 2 showed that the discrimination power of SDMs for 76 non-game fish species depended on data quality, species' rarity, statistical modeling technique, and the incorporation of spatial autocorrelation. The area under the Receiver-Operating-Characteristic curve (AUC) in cross validation tended to be higher for logistic regression and boosted regression trees (BRT) than for the presence-only MaxEnt models. AUC in cross validation was also higher for species with large geographic ranges and small local populations. Species prevalence affected discrimination power in model training but not in validation. In chapter 3, spatial transferability of SDMs was low for over 70% of the 21 species examined: only 24% of logistic regression, 12% of BRT, and 16% of MaxEnt models had AUC > 0.6 in the spatial transfers. Friedman's rank sum test showed no significant difference in the performance of the three modeling techniques. Spatial transferability could be improved by using spatial logistic regression under Lasso regularization in the training of SDMs and by matching the range and location of predictor variables between training and transfer regions. In chapter 4, testing temporal SDM transfer on independent samples resulted in discrimination power in the moderate-to-good range, with AUC > 0.6 for 80% of species in all three types of models. Most cool-water species had good temporal transferability. However, biases and misspecified spread occurred frequently in the temporal model transfers. To reduce under- or over-estimation bias, I suggest rescaling the predicted probability of species presence to ordinal ranks. To mitigate inappropriate spread of predictions under climate change scenarios, I recommend using large training datasets with good coverage of environmental gradients, and fine-tuning predictor variables with regularization and cross validation.
Ph. D.
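A minimal sketch of the presence/absence modelling and cross-validated AUC assessment described above, using synthetic data in place of the fish survey records (the L1 penalty echoes the Lasso regularisation mentioned for chapter 3):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 4))                      # environmental predictors
logit = 1.2 * X[:, 0] - 0.8 * X[:, 2]              # two informative predictors
presence = rng.uniform(size=500) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(penalty="l1", solver="liblinear")   # Lasso-style penalty
auc = cross_val_score(model, X, presence, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean().round(3))  # here, AUC > 0.6 counts as usable
```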
12

Kalný, Richard. "Vliv změny klimatu na energetickou náročnost a vnitřní prostředí budov." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2020. http://www.nusl.cz/ntk/nusl-409868.

Abstract:
This thesis examines the impacts of possible climate change on selected buildings. For simulations in the program BSim, the author uses climatic data from the SRES scenarios, specifically B1, A1B and A2. The thesis also includes research on global warming, the design and optimization of the measurement and control system for a production hall, and part of an energy audit for an office building.
13

Haussamer, Nicolai. "Model Calibration with Machine Learning." Master's thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29451.

Abstract:
This dissertation focuses on the application of neural networks to financial model calibration. It provides an introduction to the mathematics of basic neural networks and training algorithms. Two simplified experiments based on the Black-Scholes and constant elasticity of variance models are used to demonstrate the potential usefulness of neural networks in calibration. In addition, the main experiment features the calibration of the Heston model using model-generated data. In the experiment, we show that the calibrated model parameters reprice a set of options to a mean relative implied volatility error of less than one per cent. The limitations and shortcomings of neural networks in model calibration are also investigated and discussed.
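A sketch of the dissertation's idea in its simplest setting (the Black-Scholes experiment): train a network on model-generated prices so that it learns the inverse map from option prices back to the volatility parameter. The network architecture and parameter ranges below are hypothetical:

```python
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor

def bs_call(sigma, s=100.0, k=100.0, t=1.0, r=0.01):
    """Black-Scholes call price as a function of volatility."""
    d1 = (np.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return s * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d2)

rng = np.random.default_rng(7)
sigmas = rng.uniform(0.05, 0.60, 5000)             # model-generated training set
prices = bs_call(sigmas)

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(prices.reshape(-1, 1), sigmas)             # learn the map price -> sigma

test_price = bs_call(np.array([0.20]))
print("recovered sigma:", net.predict(test_price.reshape(-1, 1)))  # close to 0.20
```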
14

Coelho, Caio Augusto dos Santos. "Forecast calibration and combination : Bayesian assimilation of seasonal climate predictions." Thesis, University of Reading, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.417353.

15

Tachet des Combes, Rémi. "Non-parametric model calibration in finance." PhD thesis, Ecole Centrale Paris, 2011. http://tel.archives-ouvertes.fr/tel-00658766.

Abstract:
Consistently fitting vanilla option surfaces is an important issue when it comes to modelling in finance. In three different models (local and stochastic volatility, local correlation, and hybrid local volatility with stochastic rates), this calibration boils down to the resolution of a nonlinear partial integro-differential equation. In a first part, we give existence results for solutions of the calibration equation. These are based upon fixed-point methods in Hölder spaces and short-time a priori estimates. We then apply those existence results to the three models previously mentioned and present the calibration obtained when solving the PDE numerically. Finally, we focus on the algorithm used for the resolution: an ADI predictor/corrector scheme that needs to be modified to take the nonlinear term into account. We also study an instability phenomenon that occurs in certain cases for the local and stochastic volatility model. Using Hadamard's theory, we try to offer a theoretical explanation for the instability.
16

Rackauckas, Christopher V. "The Jormungand Climate Model." Oberlin College Honors Theses / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=oberlin1368151558.

17

Karim, Ali Abdul Jabbar, Johan Lessner, and Mehrdad Moridnejad. "Model calibration of a wooden building block." Thesis, Linnéuniversitetet, Institutionen för bygg- och energiteknik (BE), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-26271.

Abstract:
The construction of multi-storey buildings from lightweight materials has increased recently. Using lightweight materials such as wood has many advantages, for instance for the environment. However, one of the deficiencies of lightweight materials is their acoustic performance: the transmission of sound and vibration through floors in multi-storey wooden buildings is a drawback to be considered. Many studies have addressed this issue, most commonly by building finite element (FE) models as well as performing laboratory experiments. In such studies the material properties in the FE model are often adjusted to correlate with the laboratory experiments, since there is a large spread in the material properties for wood found in the literature. This thesis, however, tries to elaborate on the actual material properties of the included wooden elements. Dynamic testing is performed to determine the spread (here, spread means the variation in material properties) of the wooden elements. The materials tested are chipboards and two types of wooden beams: normal wooden beams and laminated veneer lumber beams. Once the dynamic behaviour of the wooden parts is known, they are assembled into two small floor systems, each consisting of four beams and one wooden board. The assembly is dynamically tested in the laboratory and in FE software, with the FE model using the known material properties for each individual building part. The results from the FE model correlate well with the laboratory tests, showing that when material properties are known an FE model can predict the real behaviour. However, the examined material properties show a large spread from beam to beam, and better knowledge of the material properties of the wooden parts used is needed.
18

Tsujimoto, Tsunehiro. "Calibration of the chaotic interest rate model." Thesis, University of St Andrews, 2010. http://hdl.handle.net/10023/2568.

Abstract:
In this thesis we establish a relationship between the Potential Approach to interest rates and the Market Models. This relationship allows us to derive the dynamics of forward LIBOR rates and forward swap rates by modelling the state price density. It means that we are able to secure the arbitrage-free condition and the positive interest rate feature when we model the volatility drifts of those dynamics. On the other hand, we develop the Potential Approach, particularly the Hughston-Rafailidis Chaotic Interest Rate Model. The preceding argument enables us to infer that the Chaos Models belong to the Stochastic Volatility Market Models. In particular, we propose One-variable Chaos Models with the application of exponential polynomials. This maintains the generality of the Chaos Models and performs well for yield curves compared with the Nelson-Siegel Form and the Svensson Form. Moreover, we calibrate the One-variable Chaos Model to European caplets and European swaptions. We show that the One-variable Chaos Models can reproduce the humped shape of the term structure of caplet volatility and also the volatility smile/skew curve. The calibration errors are small compared with the Lognormal Forward LIBOR Model, the SABR Model, traditional Short Rate Models, and other models under the Potential Approach. After the calibration, we introduce some new interest rate models under the Potential Approach. In particular, we suggest a new framework where the volatility drifts can be indirectly modelled from the short rate via the state price density.
19

Fadikar, Arindam. "Stochastic Computer Model Calibration and Uncertainty Quantification." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91985.

Abstract:
This dissertation presents novel methodologies in the field of stochastic computer model calibration and uncertainty quantification. Simulation models are widely used in studying physical systems, which are often represented by a set of mathematical equations. Inference on the true physical system (unobserved or partially observed) is drawn based on observations from the corresponding computer simulation model. These computer models are calibrated based on limited ground-truth observations in order to produce realistic predictions and associated uncertainties. A stochastic computer model differs from a traditional computer model in that repeated executions of a stochastic simulation result in different outcomes. This additional uncertainty in the simulation model must be handled accordingly in any calibration set-up. A Gaussian process (GP) emulator replaces the actual computer simulation when it is expensive to run and the budget is limited. However, a traditional GP interpolator models the mean and/or variance of the simulation output as a function of the input. For a simulation where the marginal Gaussianity assumption is not appropriate, it does not suffice to emulate only the mean and/or variance. We present two different approaches addressing the non-Gaussian behaviour of an emulator: (1) incorporating quantile regression in GP for multivariate output; (2) approximating using a finite mixture of Gaussians. These emulators are also used to calibrate and make forward predictions in the context of an agent-based disease model of the 2014 Ebola epidemic outbreak in West Africa. The third approach employs a sequential scheme which periodically updates the uncertainty in the computer model input as data become available in an online fashion. Unlike the other two methods, which use an emulator in place of the actual simulation, the sequential approach relies on repeated runs of the actual, potentially expensive simulation.
Doctor of Philosophy
Mathematical models are versatile and often provide accurate descriptions of physical events. Scientific models are used to study such events in order to gain understanding of the true underlying system. These models are often complex in nature and require advanced algorithms to solve their governing equations. Outputs from these models depend on external information (also called model input) supplied by the user. Model inputs may or may not have a physical meaning, and can sometimes be specific to the scientific model. More often than not, optimal values of these inputs are unknown and need to be estimated from a few actual observations. This process is known as the inverse problem, i.e. inferring the input from the output. The inverse problem becomes challenging when the mathematical model is stochastic in nature, i.e. when multiple executions of the model result in different outcomes. In this dissertation, three methodologies are proposed that address the calibration and prediction of a stochastic disease simulation model which simulates the contagion of an infectious disease through human-human contact. The motivating examples are taken from the Ebola epidemic in West Africa in 2014 and seasonal flu in New York City, USA.
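A minimal sketch of emulating a stochastic simulator with a Gaussian process, where a white-noise kernel absorbs the run-to-run variability; the toy simulator below is hypothetical, standing in for the epidemic model:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(8)

def simulator(x):
    """Stochastic toy model: repeated runs give different outcomes."""
    return np.sin(3.0 * x) + 0.1 * x + rng.normal(0.0, 0.15, np.shape(x))

X_train = np.linspace(0.0, 3.0, 25).reshape(-1, 1)   # design points
y_train = simulator(X_train[:, 0])                   # one noisy run per point

# The WhiteKernel term absorbs the simulator's run-to-run variability.
gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.1), random_state=0)
gp.fit(X_train, y_train)

mean, std = gp.predict(np.array([[1.5]]), return_std=True)
print(f"emulated output at x=1.5: {mean[0]:.3f} +/- {std[0]:.3f}")
```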
20

Lage Rodrigues, Luis Ricardo. "Calibration and combination of seasonal climate predictions in tropical and extratropical regions." Doctoral thesis, Universitat de Barcelona, 2016. http://hdl.handle.net/10803/395193.

Abstract:
Current technology allows the proliferation of multiple forecast systems developed by different research institutions all over the world. However, most decision makers need a single reliable probabilistic prediction, rather than a set of predictions, to take action given the probability that an event will occur. Several studies have shown that merging predictions derived from several forecast systems with equal weights yields, on average, better predictions than the best single forecast system; this approach has been referred to as the simple multimodel (SMM). Nevertheless, none of these studies has shown the existence of a combination method that systematically produces the best predictions. Therefore, this thesis aims at applying different statistical techniques to combine predictions derived from different statistical and dynamical forecast systems, to assess whether the performance of the SMM can be improved. These techniques combine the predictions by assigning unequal weights to the different forecast systems based on their past performance. A unique feature of this study is the broad nature of the forecast quality assessment, performed using multiple deterministic and probabilistic verification measures and the same verifying observations. This allows comparing the predictions produced by the different combination methods and forecast systems in a coherent way. Besides, most of the forecast systems used in this study are either publicly available or could easily be implemented by the user. This thesis focuses on seasonal prediction of sea surface temperature (SST), near-surface temperature and precipitation in tropical and extratropical regions. It is shown that the predictions of the SMM are often better than those of the combination methods that assign unequal weights. The difficulty of robustly estimating the weights from the small samples available is one of the reasons limiting the potential benefit of the combination methods that assign unequal weights. However, some of the results illustrate under which conditions combination methods with unequal weights improve upon the SMM predictions; for instance, they do so when only a fraction of the single forecast systems have skill, as shown for some of the SST predictions. On the other hand, there are cases where combining many forecast systems does not lead to improved forecasts compared to the best single forecast system. This suggests that a multimodel approach is not necessarily better than a highly skilful forecast system, which highlights the importance of continuously assessing forecast quality for the specific application of the user.
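A sketch contrasting the simple multimodel (equal weights) with skill-based unequal weights fitted on a past period; the three synthetic forecast systems below deliberately differ in skill, the situation in which unequal weights pay off:

```python
import numpy as np

rng = np.random.default_rng(9)
truth = rng.normal(0.0, 1.0, 200)
systems = [truth + rng.normal(0.0, s, 200) for s in (0.5, 1.0, 2.0)]

past, future = slice(0, 100), slice(100, 200)   # weight-fitting / verification periods

mse_past = np.array([np.mean((f[past] - truth[past]) ** 2) for f in systems])
w = (1.0 / mse_past) / np.sum(1.0 / mse_past)   # inverse-MSE (skill-based) weights

smm = np.mean([f[future] for f in systems], axis=0)          # equal weights
weighted = sum(wi * f[future] for wi, f in zip(w, systems))   # unequal weights

for name, pred in (("SMM", smm), ("weighted", weighted)):
    print(name, "MSE:", round(float(np.mean((pred - truth[future]) ** 2)), 3))
```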
21

Armani, Silvia. "High-enthalpy geothermal reservoir model calibration using PEST." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13293/.

Abstract:
The main purpose of this thesis work is the use of PEST (Parameter Estimation) to calibrate numerical models of high-enthalpy geothermal reservoirs (HEGR). PEST is a tool for parameter estimation and uncertainty analysis of complex numerical models that can be instructed to work with a standalone simulator. Here, T2Well-EWASG was used as the coupled wellbore-reservoir simulator for multiphase, multicomponent HEGR. The idea behind this work is that introducing some degree of automation into the wellbore-reservoir model calibration task would substantially improve the reservoir engineer's work. Becoming familiar with PEST required preliminary training: learning to manage its input files and keywords, and the utility programs that verify the correctness and consistency of the created files. Then, one of the examples from the PEST manual (whose Fortran source code is supplied) was reproduced, analysed and subsequently modified. In particular, starting from this example, a simple linear model with two free parameters, several changes were performed: "fixing" a parameter to inhibit its change during the calibration; reading a more complex model output file than in the original example; inserting dummy data that should not be processed and instructing PEST to consider only the data of interest; and extending the model with additional parameters to be calibrated, including them in the analysis by changing the PEST input files. Finally, these skills were applied to use PEST with T2Well-EWASG to calibrate a numerical model of a real HEGR, previously calibrated via a trial-and-error approach in a PhD thesis. The real data used included short production tests carried out in a geothermal field located in the Commonwealth of Dominica. The preliminary results show that the PEST-T2Well-EWASG calibration system works well and is a useful tool that can improve reservoir engineering work.
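What PEST automates is, in essence, an iterative least-squares loop around an external simulator: perturb parameters, run the model, compare outputs with observations, update. A sketch of that loop with a hypothetical decay model standing in for T2Well-EWASG (in PEST itself the simulator is an external executable driven by template and instruction files):

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 10.0, 40)
rng = np.random.default_rng(10)
observed = 5.0 * np.exp(-0.3 * t) + rng.normal(0.0, 0.05, t.size)  # "field data"

def residuals(params):
    """Misfit between the stand-in 'simulator' output and the observations."""
    amplitude, decay = params
    simulated = amplitude * np.exp(-decay * t)
    return simulated - observed

# Levenberg-Marquardt least squares, the family of algorithm PEST performs.
fit = least_squares(residuals, x0=[1.0, 0.1], method="lm")
print("estimated parameters:", fit.x)   # approximately [5.0, 0.3]
```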
APA, Harvard, Vancouver, ISO, and other styles
22

Sjöholm, Daniel. "Calibration using a general homogeneous depth camera model." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-204614.

Full text
Abstract:
Being able to accurately measure distances in depth images is important for accurately reconstructing objects. But the measurement of depth is a noisy process, and depth sensors benefit from additional correction even after factory calibration. We regard the pair of depth sensor and image sensor as a single unit returning complete 3D information, combined by relying on the more accurate image sensor for everything except the depth measurement. We present a new linear method of correcting depth distortion, using an empirical model built around the constraint of modifying only the depth data while keeping planes planar. The depth distortion model is implemented and tested on the Intel RealSense SR300 camera. The results show that the model is viable and generally decreases depth measurement errors after calibration, with an average improvement in the 50 percent range on the tested data sets.
Being able to accurately measure distances in depth images is important for good reconstructions of objects. But this measurement process is noisy, and today's depth sensors benefit from further correction after factory calibration. We regard the pair of a depth sensor and an image sensor as a single unit that returns complete 3D information. The 3D information is built up from the two sensors by relying on the more precise image sensor for everything except the depth measurement. We present a new linear method for correcting depth distortion using an empirical model, based on modifying only the depth data while planar surfaces are kept planar. The depth distortion model was implemented and tested on the Intel RealSense SR300 camera. The results show that the model works and generally reduces the depth measurement error after calibration, with an average improvement of around 50 percent for the tested data sets.
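A minimal sketch of the underlying idea — fitting a linear depth correction against reference depths — is given below. The thesis model is per-pixel and plane-constrained; this toy version fits a single global gain and offset on synthetic data, so all values are illustrative.

    import numpy as np

    # Linear depth-correction model d_corr = a*d + b, fitted against reference
    # depths (e.g. from a calibration plane of known pose).
    rng = np.random.default_rng(1)
    d_true = rng.uniform(0.4, 2.0, size=500)                 # ground-truth depths (m)
    d_meas = 1.02 * d_true + 0.008 + rng.normal(0, 0.003, 500)  # distorted readings

    A = np.column_stack([d_meas, np.ones_like(d_meas)])
    (a, b), *_ = np.linalg.lstsq(A, d_true, rcond=None)
    d_corr = a * d_meas + b
    print(a, b, np.std(d_corr - d_true))                     # residual after correction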
APA, Harvard, Vancouver, ISO, and other styles
23

Salinas, Daniel Villa. "Calibration of a mixing model for sublevel caving." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/43804.

Full text
Abstract:
Sublevel caving (SLC) is an underground mass mining method in which the orebody is divided into a regular network of tunnels and the ore is extracted level by level, working downwards through the orebody. Caved waste from the overlying rock mass fills the void created by ore extraction, generating a dynamic mixing situation between the broken ore and the waste (dilution) from upper levels. This dynamic mixing process makes it a significant challenge for an SLC project to estimate grades reliably. PCSLC is an application developed by Gemcom Software specifically for the mine planning of sublevel caving projects and operations. It incorporates a rich set of tools to assist with the whole design and planning process, including a sophisticated mixing model that can simulate the material flow observed in caving mines using a technique known as template mixing; but given the complexity this represents, it is essential to calibrate its results against real data. The main purpose of this study was to calibrate the mixing model implemented in PCSLC using real data from Newcrest's Ridgeway Gold Mine, providing guidelines for SLC projects to forecast grade reliably. The methodology was to collect historical information provided by Ridgeway, reproduce its design and results in PCSLC, and thereby understand the complexity of gravity flow in SLC. Key to this purpose were the marker trial experiments carried out at the mine, which give rise to the concept of a material recovery curve per level. This was fundamental for building a PCSLC model able to replicate the tonnage extracted and the grades reported at the mine. The main results of this thesis are an improved understanding of gravity flow in the SLC method and a demonstration of the benefit of using a recovery curve per level as the main driver for mixing modelling. The calibration of the mixing model in PCSLC was successful, and its most important product is the guideline for using PCSLC to obtain reliable predictions of grades and dilution for production scheduling purposes.
APA, Harvard, Vancouver, ISO, and other styles
24

Aurell, Alexander. "The SVI implied volatility model and its calibration." Thesis, KTH, Matematisk statistik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-150575.

Full text
Abstract:
The SVI implied volatility model is a parametric model for stochastic implied volatility. The SVI is interesting because of the possibility of stating explicit conditions on its parameters so that the model does not generate prices admitting static arbitrage opportunities. Calibration of the SVI model to real market data requires non-linear optimization algorithms and can be quite time-consuming. In recent years, methods that use the model's inherent structure to reduce the dimensions of the optimization problem have been devised in order to speed up the calibration. The first aim of this thesis is to justify the use of the model and the no-static-arbitrage conditions from a theoretical point of view. Important theorems by Kellerer and Lee and their proofs are discussed in detail, and the conditions are carefully derived. The second aim is to implement the model so that it can be calibrated to real market implied volatility data. A calibration method is presented, and the outcomes of two numerical experiments validate it. The performance of the calibration method introduced in this thesis is measured by how big a fraction of the total market volume the method manages to fit within the market spread. Tests show that the model manages to fit most of the market volume inside the spread, even for options with short time to maturity. Further tests show that the model is capable of recalibrating an SVI parameter set that allows static arbitrage opportunities into an SVI parameter set that does not.
The SVI model is a parametric model for stochastic implied volatility. The model is interesting because it has proved possible to state conditions on its parameters such that the prices it generates for call and put options are free from static arbitrage. Calibrating the SVI model to market data requires non-linear optimization, which can be time-consuming in an implementation. Recently, calibration methods that use the inherent structure of the SVI parameterization to reduce the dimension of the optimization have been developed. This thesis has two aims. The first is to justify the SVI model and the conditions for eliminating static arbitrage, through a review of the underlying theory; important theorems by Kellerer and Lee are presented, proved, and discussed. The second is to construct and implement a calibration method that fits the SVI model to market data. The outcome of two numerical experiments validates the calibration method. The performance of the calibration method is measured by how large a share of the market volume the SVI model manages to fit within the spread of market prices. Tests show that the calibration method fits most of the prices inside the spread. Further tests show that the method can recalibrate the SVI model so that a parameter set that initially admits static arbitrage is revalued into a parameter set that is free from static arbitrage.
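For reference, the raw SVI parameterisation and a basic least-squares calibration can be sketched as follows. The data below are synthetic, and a full calibration would add the dimension-reduction step and the no-static-arbitrage checks discussed in the thesis.

    import numpy as np
    from scipy.optimize import least_squares

    def svi_total_variance(k, a, b, rho, m, sigma):
        # Raw SVI parameterisation of total implied variance w(k) = vol_imp^2 * T
        return a + b * (rho * (k - m) + np.sqrt((k - m) ** 2 + sigma ** 2))

    # Hypothetical market data: log-moneyness and total implied variances
    k_mkt = np.linspace(-0.4, 0.4, 17)
    w_mkt = svi_total_variance(k_mkt, 0.02, 0.12, -0.4, 0.0, 0.15)

    def residuals(p):
        return svi_total_variance(k_mkt, *p) - w_mkt

    fit = least_squares(residuals, x0=[0.01, 0.1, 0.0, 0.0, 0.1],
                        bounds=([-1, 0, -0.999, -1, 1e-4], [1, 10, 0.999, 1, 5]))
    print(fit.x)    # recovers (a, b, rho, m, sigma)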
APA, Harvard, Vancouver, ISO, and other styles
25

PALMIERI, VIVIAN. "CALIBRATION OF QUAL2E MODEL FOR CORUMBATAÍ RIVER (SP)." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2003. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=4408@1.

Full text
Abstract:
Predicting the effects of pollution on a river basin is a constant need for environmental management agencies and for decision makers in this field. In this context, mathematical models that can predict damage or improvement in the water quality of river basins are important tools. Qual2E stands out as one such tool: freely available and applicable to one-dimensional, well-mixed river systems with constant flow. This work applies Qual2E to the Corumbataí River, located in the interior of the State of São Paulo, with the aim of obtaining a representative curve for the river's water quality. The methodology suggested by the model was followed: the system is divided into reaches of similar hydraulic characteristics, and these reaches are subdivided into elements of equal length, the computational elements (ECs). Data on flow, depth, DO (dissolved oxygen) concentration, BOD (biochemical oxygen demand), temperature, and the load and location of pollution sources were obtained from a partnership project between CETESB and USP that produced a water-quality database for the basin. This georeferenced database was used both for the model's initial conditions and to build the reference curve of real field data against which the curve calculated by the model is compared. The reaction coefficients, which result from the interaction between the DO and BOD variables and appear in the system of differential equations solved by the model, were estimated by trial and error until agreement with the observed data was reached. The calibration proved efficient in reproducing the observed data, and the estimated parameters were validated against a second data set. A sensitivity analysis was carried out for the parameters k1, k2, k3 and k4, and the calculated curve was most sensitive to the BOD decay coefficient. Limitations inherent to the model, to data collection, and to the statistical treatment of the available data prevented a better agreement between the calculated and observed curves.
The forecast of polluting effects over a river basin is a constant need for official environmental managers and for those responsible for decisions in this area. Therefore, mathematical models that can forecast damage or improvement in river basin water quality are important tools for this purpose. Qual2E appears as one of these tools: freely available and suitable for one-dimensional, well-mixed rivers with constant flow. The objective of this work is to apply the Qual2E model to the Corumbataí River, located in São Paulo State, in order to obtain a representative curve of the river's water quality. The method suggested by the model requires the division of the river into reaches with similar hydraulic features and the subdivision of these reaches into computational elements (CEs) of the same extension. The flow data, depth, DO (dissolved oxygen) concentration, BOD (biochemical oxygen demand), temperature, and load and location of pollution sources were obtained from a joint project between CETESB and USP that resulted in a database for the water quality of the basin. This geo-referenced database was used both for the initial conditions of the model and for the determination of a reference curve from the real field data, for comparison with the calculated curve of the model. The reaction coefficients, resulting from the interaction between DO and BOD, are the constants of the differential equation system solved by the model; they were estimated through trial and error until agreement with the observed data. The calibration was efficient in reproducing the field data, as validated with another group of data. A sensitivity analysis was executed for the parameters k1, k2, k3 and k4, and the calculated curve proved most sensitive to the BOD decay coefficient. The limitations inherent to the model, the data collection, and the statistical treatment of the available data did not allow a better agreement between the calculated and the observed curves.
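The DO-BOD interaction at the core of Qual2E descends from the classical Streeter-Phelps oxygen-sag solution, which can be sketched as follows. Coefficient values are illustrative, not the Corumbataí calibration.

    import numpy as np

    # Streeter-Phelps DO sag: k1 = BOD decay, k2 = reaeration (1/day);
    # L0 = initial BOD, D0 = initial oxygen deficit (mg/L).
    k1, k2 = 0.35, 0.60
    L0, D0, DO_sat = 12.0, 1.5, 8.2

    t = np.linspace(0.0, 10.0, 101)   # travel time (days)
    D = k1 * L0 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t)) + D0 * np.exp(-k2 * t)
    DO = DO_sat - D
    t_crit = np.log((k2 / k1) * (1 - D0 * (k2 - k1) / (k1 * L0))) / (k2 - k1)
    print(f"critical point at t = {t_crit:.2f} d, min DO = {DO.min():.2f} mg/L")

Trial-and-error calibration of the kind described in the abstract amounts to adjusting k1 and k2 (and the model's other coefficients) until the computed DO profile matches the observed one.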
APA, Harvard, Vancouver, ISO, and other styles
26

Mbongo, Nkounga Jeffrey Ted Johnattan. "Building Interest Rate Curves and SABR Model Calibration." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/96965.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University
ENGLISH ABSTRACT : In this thesis, we first review the traditional pre-credit-crunch approach that uses a single curve to consistently price all instruments. We review the theoretical pricing framework and introduce pricing formulas for plain vanilla interest rate derivatives. We then review the curve construction methodologies (bootstrapping and global methods) used to build an interest rate curve from the instruments described previously. Second, we extend this work to the modern post-credit-crunch framework. Third, we review the calibration of the SABR model. Finally, we present applications that use interest rate curves and the SABR model: stripping implied volatilities, transforming the market-observed smile (given quotes for standard tenors) to non-standard tenors (or inversely), and calibrating the market volatility smile consistently with the new market evidence.
AFRIKAANSE OPSOMMING : No Afrikaans summary is available.
APA, Harvard, Vancouver, ISO, and other styles
27

Fonseca, Aaron James. "State-Space Randles Cell Model for Instrument Calibration." Thesis, North Dakota State University, 2020. https://hdl.handle.net/10365/31790.

Full text
Abstract:
It is desirable to calibrate electrochemical impedance spectroscopy (EIS) instrumentation using a Randles circuit. This presents a challenge, as realistic loads simulated by this circuit contain theoretical components (Warburg elements) that are difficult to model. This thesis proposes a state-space solution to this problem and explores the process of realizing a digital high-accuracy approximation of a Randles circuit for the purposes of verifying and calibrating EIS instrumentation. Using Valsa, Dvořák, and Friedl's network approximation of a Warburg element, a collection of state-space relations describing the impedance of a Randles circuit is derived. From these equations, the process of realizing a digital system is explored; this includes a discussion of methods of discretization, an overview of the challenges of realizing digital filters, and an analysis of the effects that finite word-length has on the accuracy of the model when using fixed-point hardware.
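For orientation, the target impedance of an ideal Randles cell with a semi-infinite Warburg element can be computed directly, as in the sketch below. The thesis's contribution is a state-space realization built on an RC-network approximation of the Warburg term, which this sketch does not implement; component values are illustrative.

    import numpy as np

    # Randles cell: solution resistance Rs in series with the parallel pair of
    # double-layer capacitance Cdl and (charge-transfer resistance Rct + Warburg).
    Rs, Rct, Cdl, sigma_w = 100.0, 250.0, 40e-9, 4000.0   # ohm, ohm, F, ohm*s^-0.5

    f = np.logspace(-1, 5, 200)                  # Hz
    w = 2 * np.pi * f
    Z_w = sigma_w * (1 - 1j) / np.sqrt(w)        # semi-infinite Warburg impedance
    Z = Rs + 1.0 / (1j * w * Cdl + 1.0 / (Rct + Z_w))
    print(abs(Z[0]), np.degrees(np.angle(Z[0]))) # magnitude/phase at the lowest frequency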
APA, Harvard, Vancouver, ISO, and other styles
28

Vaidyanathan, Sivaranjani. "Bayesian Models for Computer Model Calibration and Prediction." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1435527468.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Ramalingam, Srikumar. "Generic imaging model : calibration and 3D reconstruction algorithms." Grenoble INPG, 2006. http://www.theses.fr/2006INPG0155.

Full text
Abstract:
Computer vision applications use a wide variety of cameras: fish-eye lenses, catadioptric systems, multi-camera setups, etc. These cameras have interesting properties, above all a large field of view. Calibration and 3D reconstruction are two fundamental problems in computer vision. Models and algorithms for these problems are usually parametric in nature, apply to a single type of camera, and are rarely able to handle heterogeneous camera networks. To address these problems, we introduce a generic imaging model in which any camera is modelled by its set of pixels and the set of associated viewing rays. We propose generic calibration methods for this model, which compute all these viewing rays and make it possible to calibrate any camera with the same approach. We also propose generic algorithms for 3D reconstruction and self-calibration.
Vision applications have been using cameras that go beyond the pinhole model: stereo, fish-eye cameras, catadioptric systems, multi-camera setups, etc. These novel cameras have interesting properties, especially a large field of view. Camera calibration and 3D reconstruction algorithms are fundamental building blocks for computer vision. Models and algorithms for these two problems are usually parametric, camera dependent, and seldom capable of handling heterogeneous camera networks, which are useful for their complementary advantages. To solve these problems, a generic imaging model is introduced, where every camera is modeled as a set of pixels and their associated projection rays. We propose generic methods for calibrating this model, i.e., for computing all these projection rays, and these are thus able to calibrate whatever camera using the same approach. We also propose generic algorithms for structure-from-motion (3D reconstruction, motion and pose estimation, bundle adjustment) and self-calibration.
APA, Harvard, Vancouver, ISO, and other styles
30

Li, Zongyan. "Model structure selection in powertrain calibration and control." Thesis, University of Liverpool, 2012. http://livrepository.liverpool.ac.uk/12013/.

Full text
Abstract:
This thesis develops and investigates the application of novel identification and structure identification techniques for I.C. engine systems. The legislated demand for reduced vehicle fuel consumption and emissions indicates that improved model-based dynamical engine calibration and control methods are required in place of the existing static set-point based mapping methods currently used in industry. The choice of structure of any dynamical engine model has significant consequences for the accuracy and the calibration/optimization time of engine systems. This thesis primarily addresses the issue of this structure selection. Linear models are well understood and relatively easy to implement; however, the modern I.C. engine is a highly nonlinear system, which restricts the use of linear structures. Further, the newer technologies required to achieve demanding fuel consumption and emission targets are increasingly complex and nonlinear. The selection of appropriate nonlinear model regressor terms presents a combinatorial explosion problem which must be solved for accurate engine system modelling. In this thesis, two systematic nonlinear model structure selection techniques, namely stepwise regression with F-statistics and the orthogonal least squares method with the error reduction ratio, are accordingly investigated (a sketch of the latter follows below). SISO algebraic NARMAX engine models are then established in simulation studies with these methods and demonstrate the effectiveness of the approach. The thesis also investigates the development and application of multi-modelling techniques and the expansion of the model structure selection techniques to the identification of the local model terms within the multi-model structures for the engine. Based on the engine operating regions, novel multi-model networks can be established, and several alternative multi-modelling techniques for multi-model engine system identification, such as LOLIMOT, neural network, Gaussian, and log-sigmoid function weighted multi-models, are explored and compared. An experimental validation of the methods is given by a black-box identification of SISO engine models developed purely from experimental engine test data sets. The results demonstrate that the multi-model structure selection techniques can be successfully applied to engine systems, that the multi-modelling techniques give good model accuracy, and that good modelling efficiency can also be achieved. The outcome is a set of techniques for the efficient development of accurate nonlinear black-box models which can be acquired from experimental dynamometer test-bed data, and which should assist in the dynamic control of future advanced technology engine systems.
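A compact sketch of forward orthogonal least squares selection with the error reduction ratio (ERR) is given below, on synthetic data with illustrative variable names; a full NARMAX implementation would build the candidate set from lagged inputs, outputs, and noise terms.

    import numpy as np

    def ols_err_select(P, y, n_terms):
        # Greedy forward selection: pick the candidate regressor with the largest
        # error reduction ratio, then orthogonalise the rest against it.
        P = P.astype(float).copy()
        sse = float(y @ y)
        selected, errs = [], []
        for _ in range(n_terms):
            num = (P.T @ y) ** 2
            den = np.einsum("ij,ij->j", P, P) * sse
            den[den == 0.0] = np.inf
            i = int(np.argmax(num / den))
            selected.append(i)
            errs.append(float(num[i] / den[i]))
            w = P[:, i] / np.linalg.norm(P[:, i])
            P -= np.outer(w, w @ P)     # Gram-Schmidt deflation of remaining terms
            P[:, i] = 0.0               # remove the chosen term from the pool
        return selected, errs

    rng = np.random.default_rng(0)
    x = rng.normal(size=400)
    y = 0.8 * x - 0.5 * x**2 + 0.05 * rng.normal(size=400)
    P = np.column_stack([x, x**2, x**3, np.abs(x), np.sin(x)])
    print(ols_err_select(P, y, 2))      # should select the x and x**2 columns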
APA, Harvard, Vancouver, ISO, and other styles
31

Turley, Carole. "Calibration Procedure for a Microscopic Traffic Simulation Model." Diss., Brigham Young University, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1747.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Riga, Candia. "The Libor Market Model: from theory to calibration." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amslaurea.unibo.it/2288/.

Full text
Abstract:
This thesis is focused on the financial model for interest rates called the LIBOR Market Model. In the appendixes, we provide the necessary mathematical theory. In the inner chapters, firstly, we define the main interest rates and financial instruments relevant to interest rate models; then, we set up the LIBOR market model, demonstrate its existence, derive the dynamics of forward LIBOR rates, and justify the pricing of caps according to Black's formula. We also present the Swap Market Model, which models the forward swap rates instead of the LIBOR ones. This model is likewise justified by a theoretical demonstration, and the resulting formula for pricing swaptions coincides with Black's. However, the two models are not compatible from a theoretical point of view. Therefore, we derive various analytical approximating formulae to price swaptions in the LIBOR market model, and we explain how to perform a Monte Carlo simulation. Finally, we present the calibration of the LIBOR market model to the markets of both caps and swaptions, together with various examples of application to the historical correlation matrix and the cascade calibration of the forward volatilities to the matrix of implied swaption volatilities provided by the market.
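Black's caplet formula, which the LIBOR market model is constructed to reproduce, can be stated compactly as in the sketch below; all input values are illustrative.

    import math

    # Black caplet: F = forward LIBOR for [T, T+tau], K = strike, vol = Black
    # volatility, df = discount factor to the payment date T+tau.
    def black_caplet(F, K, vol, T, tau, df):
        d1 = (math.log(F / K) + 0.5 * vol**2 * T) / (vol * math.sqrt(T))
        d2 = d1 - vol * math.sqrt(T)
        N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
        return df * tau * (F * N(d1) - K * N(d2))

    print(black_caplet(F=0.03, K=0.029, vol=0.20, T=1.0, tau=0.5, df=0.97))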
APA, Harvard, Vancouver, ISO, and other styles
33

Pratap, Kadam Poonam. "Radiometric Calibration of a Hybrid RCWT Imaging Model." Thesis, The University of Arizona, 2014. http://hdl.handle.net/10150/339045.

Full text
Abstract:
The applications of low-light imaging are widespread in areas such as biomedical imaging, remote sensing, ratiometric imaging, lithography, etc. The goal of this work is to develop a radiometrically scaled hybrid RCWT calculator to count the photons detected for such applications. The rigorous computation of different imaging models is discussed. An approach to calibrate the radiometry of the hybrid RCWT model for partially coherent illumination is presented. The diffraction from the object is evaluated rigorously using the hybrid RCWT model. A test bench is set up to validate the radiometrically scaled simulations. In all the cases considered, simulation and experiment agree within a 40% difference.
APA, Harvard, Vancouver, ISO, and other styles
34

Dawkins, Christina. "New directions in applied general equilibrium model calibration." Thesis, University of Warwick, 1999. http://wrap.warwick.ac.uk/110874/.

Full text
Abstract:
This thesis develops extensions to current techniques in applied general equilibrium (AGE) model calibration that improve existing practice and expand the use of AGE modelling to economic history applications. Chapter 1 introduces the thesis. Chapter 2 summarizes the origin and practice of calibration in economics, focussing on its role in AGE modelling. Chapter 3 proposes two related sensitivity analysis procedures for AGE models: calibrated parameter sensitivity analysis (CPSA) and extended sensitivity analysis. Existing sensitivity techniques are incomplete because they only capture the robustness of the model's results to uncertainty in a subset of the parameters, the elasticities. The remaining parameters determine the model's static structure, but are ignored in the sensitivity literature. CPSA fills this gap. When combined with an existing elasticity sensitivity technique in 'extended sensitivity analysis,' CPSA permits sensitivity analysis with respect to uncertainty in the values of all of a model's parameters. Chapter 4 examines the significance of the data adjustments required for calibration. It proposes that the measure of this importance should be the effect of the adjustment algorithm on the statistical properties of the model results. Simulations show that the performance of various algorithms differs significantly under such criteria, and illustrate, for a specific policy experiment, the link between algorithm performance and the relative magnitudes of the data. The experiments imply that the choice of data adjustment procedure is an important, if neglected, component of calibration. Chapter 5 shows how AGE techniques can be adapted to explore decompositional issues in economic history. By incorporating information about the combined effect of several shocks to an economy in calibration, AGE models can quantify the relative contributions to change of each shock. Furthermore, the effects of shocks are non-additive, so that the marginal contribution of a shock is conditional on the presence or absence of other shocks. Chapter 6 concludes.
APA, Harvard, Vancouver, ISO, and other styles
35

DeChant, Caleb Matthew. "Hydrologic Data Assimilation: State Estimation and Model Calibration." PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/172.

Full text
Abstract:
This thesis is a combination of two separate studies which examine hydrologic data assimilation techniques: 1) to determine the applicability of assimilation of remotely sensed data in operational models and 2) to compare the effectiveness of assimilation and other calibration techniques. The first study examines the ability of Data Assimilation of remotely sensed microwave radiance data to improve snow water equivalent prediction, and ultimately operational streamflow forecasts. Operational streamflow forecasts in the National Weather Service River Forecast Center are produced with a coupled SNOW17 (snow model) and SACramento Soil Moisture Accounting (SAC-SMA) model. A comparison of two assimilation techniques, the Ensemble Kalman Filter (EnKF) and the Particle Filter (PF), is made using a coupled SNOW17 and the Microwave Emission Model for Layered Snowpack model to assimilate microwave radiance data. Microwave radiance data, in the form of brightness temperature (TB), is gathered from the Advanced Microwave Scanning Radiometer-Earth Observing System at the 36.5GHz channel. SWE prediction is validated in a synthetic experiment. The distribution of snowmelt from an experiment with real data is then used to run the SAC-SMA model. Several scenarios on state or joint state-parameter updating with TB data assimilation to SNOW-17 and SAC-SMA models were analyzed, and the results show potential benefit for operational streamflow forecasting. The second study compares the effectiveness of different calibration techniques in hydrologic modeling. Currently, the most commonly used methods for hydrologic model calibration are global optimization techniques. While these techniques have become very efficient and effective in optimizing the complicated parameter space of hydrologic models, the uncertainty with respect to parameters is ignored. This has led to recent research looking into Bayesian Inference through Monte Carlo methods to analyze the ability to calibrate models and represent the uncertainty in relation to the parameters. Research has recently been performed in filtering and Markov Chain Monte Carlo (MCMC) techniques for optimization of hydrologic models. At this point, a comparison of the effectiveness of global optimization, filtering and MCMC techniques has yet to be reported in the hydrologic modeling community. This study compares global optimization, MCMC, the PF, the Particle Smoother, the EnKF and the Ensemble Kalman Smoother for the purpose of parameter estimation in both the HyMod and SAC-SMA hydrologic models.
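A minimal sketch of the stochastic (perturbed-observation) EnKF analysis step, one of the assimilation techniques compared in the thesis, is shown below on toy states; the real application assimilates brightness temperature through the coupled snow models, so all values here are placeholders.

    import numpy as np

    # X: n_state x n_ens forecast ensemble; H: linear observation operator;
    # y: observation vector; R: observation error covariance.
    def enkf_update(X, H, y, R, rng):
        n_ens = X.shape[1]
        Xp = X - X.mean(axis=1, keepdims=True)         # ensemble anomalies
        P = Xp @ Xp.T / (n_ens - 1)                    # sample forecast covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
        return X + K @ (Y - H @ X)                     # analysis ensemble

    rng = np.random.default_rng(0)
    X = rng.normal(5.0, 1.0, size=(2, 50))             # e.g. SWE states, 50 members
    H = np.array([[1.0, 0.0]])                         # observe the first state only
    Xa = enkf_update(X, H, np.array([6.0]), np.array([[0.25]]), rng)
    print(X.mean(axis=1), Xa.mean(axis=1))

A particle filter replaces the Gaussian update with importance weighting and resampling of the ensemble members, which is the main axis of the comparison described above.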
APA, Harvard, Vancouver, ISO, and other styles
36

Johnson, Nicolas R. "Building Energy Model Calibration for Retrofit Decision Making." PDXScholar, 2017. https://pdxscholar.library.pdx.edu/open_access_etds/3507.

Full text
Abstract:
Accommodating the continued increase in energy demand in the face of global climate change has been a worldwide concern. With buildings in the US consuming nearly 40% of national energy, a concerted effort must be made to reduce building energy consumption. As new buildings continue to improve their efficiency through more restrictive energy codes, the other 76.9 billion square feet of current building stock falls further behind. The rate at which current buildings are being retrofit is not enough, and better tools are needed to assess the benefits of retrofits and the uncertainties associated with them. This study proposes a stochastic method of building energy model calibration coupled with the monthly normative building simulation addressed in ISO 13890. This approach takes advantage of the great efficiency of Latin hypercube sampling and the lightweight normative building simulation method to deliver a set of calibrated solutions for assessing the effectiveness of energy conservation measures, making uncertainty a part of the modeling process. A case study on a mixed-use university building is conducted to show the strength and performance of this simple method. Limitations and future concerns are also addressed.
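A sketch of the Latin hypercube sampling step using SciPy's qmc module (SciPy 1.7+) is given below; the parameter names and ranges are placeholders, not the study's actual calibration set.

    import numpy as np
    from scipy.stats import qmc

    # Space-filling samples of uncertain model inputs for the stochastic calibration.
    sampler = qmc.LatinHypercube(d=3, seed=42)
    unit = sampler.random(n=200)             # 200 samples in the unit cube
    lower = [0.1, 1.0, 0.5]                  # e.g. infiltration (ACH),
    upper = [1.0, 8.0, 0.9]                  # U-value (W/m2K), system efficiency (-)
    samples = qmc.scale(unit, lower, upper)
    print(samples[:3])

Each sampled parameter vector would then drive one run of the monthly normative simulation, and runs whose outputs match the metered data within a tolerance form the set of calibrated solutions.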
APA, Harvard, Vancouver, ISO, and other styles
37

LIU, LEI. "AN AUTOMATIC CALIBRATION STRATEGY FOR 3D FE BRIDGE MODELS." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1092674830.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

MacKay, Robert Malcolm. "The GCRC two-dimensional zonally averaged statistical dynamical climate model : development, model performance, and climate sensitivity /." Full text open access at:, 1994. http://content.ohsu.edu/u?/etd,199.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Esposito, Delia. "Torque Model Calibration of a Motorcycle Internal Combustion Engine." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15964/.

Full text
Abstract:
Optimizing the performance of internal combustion engines increases the complexity of control strategies and makes the calibration process long and expensive. The first activity of this thesis explores the possibility of reducing tests on the bench, saving working hours and operating costs. In particular, MATLAB codes show how to extrapolate the data for minimum and extra-minimum spark advance from the spark advance dynamic sweep. These tests are usually demanded by the ECU supplier, who is responsible for the final calibration, but the calibration engineer still has the task of checking the supplier's work. For this reason, the second activity of this thesis was to create a simplified torque-based model that determines the behavior and response of the engine (R2 = 0.98). Case studies are explained in detail and various simulations are realized.
APA, Harvard, Vancouver, ISO, and other styles
40

Yilmaz, Busra Zeynep. "Completion, Pricing And Calibration In A Levy Market Model." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612598/index.pdf.

Full text
Abstract:
In this thesis, modelling with Lévy processes is considered in three parts. In the first part, the general geometric Lévy market model is examined in detail. As such markets are generally incomplete, it is shown that the market can be completed by enlarging it with a series of new artificial assets called "power-jump assets", based on the power-jump processes of the underlying Lévy process. The second part of the thesis presents two different methods for pricing European options: the martingale pricing approach and the Fourier-based characteristic formula method, which is performed via the fast Fourier transform (FFT). A performance comparison of the pricing methods shows that the fast Fourier transform produces very small pricing errors, so the results of both methods are nearly identical. Throughout the pricing section, jump sizes are assumed to have a particular distribution. The third part contributes to the empirical applications of Lévy processes. In this part, the stochastic volatility extension of the jump-diffusion model is considered, and calibration on Standard & Poor's (S&P) 500 options data is executed for the jump-diffusion model, the stochastic volatility jump-diffusion model of Bates, and the Black-Scholes model. The model parameters are estimated by using an optimization algorithm. Next, the effect of the additional stochastic volatility extension on explaining the implied volatility smile phenomenon is investigated, and it is found that both jumps and stochastic volatility are required. Moreover, the data-fitting performances of the three models are compared, and it is shown that the stochastic volatility jump-diffusion model gives relatively better results.
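As a pointer to the pricing machinery involved, the Merton jump-diffusion call price, a Poisson-weighted sum of Black-Scholes prices, can be sketched as below; the Bates model adds Heston-type stochastic volatility and is typically priced via its characteristic function instead. Parameter values are illustrative.

    import math

    def bs_call(S, K, T, r, sigma):
        d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
        d2 = d1 - sigma * math.sqrt(T)
        N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
        return S * N(d1) - K * math.exp(-r * T) * N(d2)

    # Merton (1976): lam = jump intensity, mu_j/delta = mean/std of log jump size.
    def merton_call(S, K, T, r, sigma, lam, mu_j, delta, n_max=50):
        kappa = math.exp(mu_j + 0.5 * delta**2) - 1.0
        lam_bar = lam * (1.0 + kappa)
        price = 0.0
        for n in range(n_max):
            sigma_n = math.sqrt(sigma**2 + n * delta**2 / T)
            r_n = r - lam * kappa + n * (mu_j + 0.5 * delta**2) / T
            weight = math.exp(-lam_bar * T) * (lam_bar * T) ** n / math.factorial(n)
            price += weight * bs_call(S, K, T, r_n, sigma_n)
        return price

    print(merton_call(S=100, K=100, T=0.5, r=0.02, sigma=0.2,
                      lam=0.5, mu_j=-0.1, delta=0.15))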
APA, Harvard, Vancouver, ISO, and other styles
41

Sandin, Mats, and Magnus Fransson. "Framework for Calibration of a Traffic State Space Model." Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-85342.

Full text
Abstract:
To evaluate the traffic state over time and space, several models can be used. A typical model for estimating the state of the traffic on a stretch of road or a road network is the cell transmission model, a form of state-space model. Such a model typically needs to be calibrated, since different roads have different properties. This thesis presents a calibration framework for the velocity-based cell transmission model, the CTM-v. The CTM-v is a discrete-time dynamical system that can model the evolution of the velocity field on highways. Such a model can be fused with an ensemble Kalman filter update algorithm for the purpose of velocity data assimilation. Indeed, enabling velocity data assimilation was the purpose for developing the model in the first place, and it is an essential part of the Mobile Millennium research project. A systematic methodology for calibrating the cell transmission model is therefore needed. This thesis presents a framework for calibration of the velocity-based cell transmission model combined with the ensemble Kalman filter. The framework consists of two separate methods: one is a statistical approach to calibration of the fundamental diagram; the other is a black-box optimization method, a simplification of the complex method, that can solve inequality-constrained optimization problems with non-differentiable objective functions. Both methods are integrated with the existing system, yielding a calibration framework, in particular for highways where stationary detectors are part of the infrastructure. The output produced by the above-mentioned system is highly dependent on the values of its characterising parameters. Such parameters need to be calibrated so as to make the model a valid representation of reality; model calibration and validation is a process of its own, most often tailored to the researcher's models and purposes. The combination of the two methods is tested in a suite of experiments on two separate highway models of Interstates 880 and 15, CA, which are evaluated against travel time and space mean speed estimates given by Bluetooth detectors, with errors between 7.4 and 13.4% for the validation time periods, depending on the parameter set and model.
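A toy version of the fundamental-diagram calibration idea is sketched below using the simple Greenshields relation; the CTM-v itself uses a more elaborate velocity function, so this is only a stand-in on synthetic detector data.

    import numpy as np
    from scipy.optimize import curve_fit

    # Greenshields fundamental diagram: v = vf * (1 - rho / rho_max).
    def greenshields(rho, vf, rho_max):
        return vf * np.clip(1.0 - rho / rho_max, 0.0, None)

    rng = np.random.default_rng(3)
    rho = rng.uniform(5, 120, 300)                                # veh/km from loops
    v = greenshields(rho, 105.0, 150.0) + rng.normal(0, 4.0, 300) # km/h, noisy
    (vf, rho_max), _ = curve_fit(greenshields, rho, v, p0=[80.0, 120.0])
    print(vf, rho_max)   # recovered free-flow speed and jam density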
APA, Harvard, Vancouver, ISO, and other styles
42

Ndiritu, John G. "An improved genetic algorithm for rainfall-runoff model calibration /." Title page, contents and abstract only, 1998. http://web4.library.adelaide.edu.au/theses/09PH/09phn337.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Blazer, Derek Jason. "Systematic method for steady-state groundwater flow model calibration." Thesis, The University of Arizona, 1999. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu_etd_hy0189_sip1_w.pdf&type=application/pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Seegmiller, Neal A. "Dynamic Model Formulation and Calibration for Wheeled Mobile Robots." Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/460.

Full text
Abstract:
Advances in hardware design have made wheeled mobile robots (WMRs) exceptionally mobile. To fully exploit this mobility, WMR planning, control, and estimation systems require motion models that are fast and accurate. Much of the published theory on WMR modeling is limited to 2D or kinematics, but 3D dynamic (or force-driven) models are required when traversing challenging terrain, executing aggressive maneuvers, and manipulating heavy payloads. This thesis advances the state of the art in both the formulation and calibration of WMR models. We present novel WMR model formulations that are high-fidelity, general, modular, and fast. We provide a general method to derive 3D velocity kinematics for any WMR joint configuration. Using this method, we obtain constraints on wheel-ground contact point velocities for our differential algebraic equation (DAE)-based models. Our "stabilized DAE" kinematics formulation enables constrained, drift-free motion prediction on rough terrain. We also enhance the kinematics to predict nonzero wheel slip in a principled way based on gravitational, inertial, and dissipative forces. Unlike ordinary differential equation (ODE)-based dynamic models, which can be very stiff, our constrained dynamics formulation permits large integration steps without compromising stability. Some alternatives like Open Dynamics Engine also use constraints, but can only approximate Coulomb friction at contacts. In contrast, we can enforce realistic, nonlinear models of wheel-terrain interaction (e.g. empirical models for pneumatic tires, terramechanics-based models) using a novel force-balance optimization technique. Simulation tests show our kinematic and dynamic models to be more functional, stable, and efficient than common alternatives. Simulations run 1K-10K times faster than real time on an ordinary PC, even while predicting articulated motion on rough terrain and enforcing realistic wheel-terrain interaction models. In addition, we present a novel Integrated Prediction Error Minimization (IPEM) method to calibrate model parameters that is general, convenient, online, and evaluative. Ordinarily, system dynamics are calibrated by minimizing the error of instantaneous output predictions; IPEM instead forms predictions by integrating the system dynamics over an interval. Benefits include reduced sensing requirements, better observability, and accuracy over a longer horizon. In addition to calibrating out systematic errors, we simultaneously calibrate a model of stochastic error propagation to quantify the uncertainty of motion predictions. Experimental results on multiple platforms and terrain types show that parameter estimates converge quickly during online calibration and that uncertainty is well characterized. Under normal conditions, our enhanced kinematic model can predict nonzero wheel slip as accurately as a full dynamic model for a fraction of the computation cost. Finally, odometry is greatly improved when using IPEM vs. manual calibration, and when using 3D vs. 2D kinematics. To facilitate their use, we have released open-source MATLAB and C++ libraries implementing the model formulation and calibration methods in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
45

Mao, Jiachen. "Automatic calibration of an urban microclimate model under uncertainty." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120873.

Full text
Abstract:
Thesis: S.M. in Building Technology, Massachusetts Institute of Technology, Department of Architecture, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 79-86).
Simulation models play an important role in the design, analysis, and optimization of modern energy and environmental systems at building or urban scale. However, due to the extreme complexity of built environments and the sheer number of interacting parameters, it is difficult to obtain an accurate representation of real-world systems. Thus, model calibration and uncertainty analysis hold a particular interest, and it is necessary to evaluate to what degree simulation models are imperfect before implementing them during the decision-making process. In contrast to the extensive literature on the calibration of building performance models, little has been reported on how to automatically calibrate physics-based urban microclimate models. This thesis illustrates a general methodology for automatic model calibration and, for the first time, applies it to an urban microclimate system. The study builds upon the previously reported and updated Urban Weather Generator (UWG) to present a deep look into an existing urban district area in downtown Abu Dhabi (UAE) during 2017. Based on 30 candidate inputs covering the meteorological factors, urban characteristics, vegetation variables, and building systems, we performed global sensitivity analysis, Monte Carlo filtering, and optimization-aided calibration on the UWG model. In particular, an online hyper-heuristic evolutionary algorithm (EA) is proposed and developed to accelerate the calibration process. The UWG is a fairly robust simulator for approximating the urban thermal behavior across different seasons. The validation results show that, in single-objective optimization, the online hyper-heuristics can robustly help the EA produce quality solutions with smaller uncertainties at much less computational cost. Finally, the resulting calibrated solutions are able to capture weekly-average and hourly diurnal profiles of the urban outdoor air temperature similar to the measurements for certain periods of the year.
by Jiachen Mao.
S.M. in Building Technology
APA, Harvard, Vancouver, ISO, and other styles
46

Sahni, Abhishek. "Energy –Efficient Solar Model Improvement Using Motor Calibration Preference." Master's thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-187146.

Full text
Abstract:
Energy is the capacity of a system to do work: an amount of force or power that, when applied, can move an object from one position to another. It exists in everything, whether human beings, animals, or non-living things. There are many forms of energy, such as kinetic, potential, light, sound, gravitational, elastic, electromagnetic, or nuclear. According to the law of conservation of energy, any form of energy can be converted into another form while the total energy remains the same. Energy resources can be broadly classified into two main groups: renewable and non-renewable. Many renewable energy technologies have been around for years and, as time goes by, are increasing in efficiency. Keywords: solar panel improvement, motor control, energy efficiency (German: Solarplatten, Motorsteuerung, Energieeffizienz)
APA, Harvard, Vancouver, ISO, and other styles
47

Chen, Yousheng. "Model calibration methods for mechanical systems with local nonlinearities." Doctoral thesis, Linnéuniversitetet, Institutionen för maskinteknik (MT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-57638.

Full text
Abstract:
Most modern product development utilizes computational models. With increasing demands on reducing the product development lead-time, it becomes more important to improve the accuracy and efficiency of simulations. In addition, to improve product performance, a lot of products are designed to be lighter and more flexible, thus more prone to nonlinear behaviour. Linear finite element (FE) models, which still form the basis of numerical models used to represent mechanical structures, may not be able to predict structural behaviour with necessary accuracy when nonlinear effects are significant. Nonlinearities are often localized to joints or boundary conditions. Including nonlinear behaviour in FE-models introduces more sources of uncertainty and it is often necessary to calibrate the models with the use of experimental data. This research work presents a model calibration method that is suitable for mechanical systems with structural nonlinearities. The methodology concerns pre-test planning, parameterization, simulation methods, vibrational testing and optimization. The selection of parameters for the calibration requires physical insights together with analyses of the structure; the latter can be achieved by use of simulations. Traditional simulation methods may be computationally expensive when dealing with nonlinear systems; therefore an efficient fixed-step state-space based simulation method was developed. To gain knowledge of the accuracy of different simulation methods, the bias errors for the proposed method as well as other widespread simulation methods were studied and compared. The proposed method performs well in comparison to other simulation methods. To obtain precise estimates of the parameters, the test data should be informative of the parameters chosen and the parameters should be identifiable. Test data informativeness and parameter identifiability are coupled and they can be assessed by the Fisher information matrix (FIM). To optimize the informativeness of test data, a FIM based pre-test planning method was developed and a multi-sinusoidal excitation was designed. The steady-state responses at the side harmonics were shown to contain valuable information for model calibration of FE-models representing mechanical systems with structural nonlinearities. In this work, model calibration was made by minimizing the difference between predicted and measured multi-harmonic frequency response functions using an efficient optimization routine. The steady-state responses were calculated using the extended multi-harmonic balance method. When the parameters were calibrated, a k-fold cross validation was used to obtain parameter uncertainty. The proposed model calibration method was validated using two test-rigs, one with a geometrical nonlinearity and one with a clearance type of nonlinearity. To attain high quality data efficiently, the amplitude of the forcing harmonics was controlled at each frequency step by an off-line force feedback algorithm. The applied force was then measured and used in the numerical simulations of the responses. It was shown in the validation results that the predictions from the calibrated models agree well with the experimental results. In summary, the presented methodology concerns both theoretical and experimental aspects as it includes methods for pre-test planning, simulations, testing, calibration and validation. As such, this research work offers a complete framework and contributes to more effective and efficient analyses of mechanical systems with structural nonlinearities.
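The Fisher information matrix used for pre-test planning can be illustrated compactly: with Gaussian measurement noise, FIM = J^T R^{-1} J for the sensitivity Jacobian J, and its conditioning indicates parameter identifiability. The toy model below is illustrative, not the thesis test-rigs.

    import numpy as np

    def fim(jacobian, noise_cov):
        # Fisher information for a Gaussian measurement model
        return jacobian.T @ np.linalg.inv(noise_cov) @ jacobian

    theta = np.array([2.0, 0.5])                 # parameters to identify
    t = np.linspace(0, 5, 40)
    # sensitivities of y(t) = theta0 * exp(-theta1 * t) w.r.t. theta0 and theta1
    J = np.column_stack([np.exp(-theta[1] * t),
                         -theta[0] * t * np.exp(-theta[1] * t)])
    F = fim(J, 0.01 * np.eye(len(t)))
    print(np.linalg.cond(F))                     # low condition number -> identifiable
    print(np.sqrt(np.diag(np.linalg.inv(F))))    # Cramer-Rao bounds on the std devs

A near-singular FIM flags unidentifiable parameters or uninformative test data, which is exactly what the FIM-based design of the multi-sinusoidal excitation aims to avoid.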
APA, Harvard, Vancouver, ISO, and other styles
48

Au, Chi Kwong. "Instant calibration to the stochastic volatility LIBOR market model /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?MATH%202008%20AU.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Ali, Rheeda. "Calibration of a personalised model of left atrial electrophysiology." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/48053.

Full text
Abstract:
Patient-specific computer models of the human atria have the potential to aid clinical intervention in the treatment of cardiac arrhythmias if suitably validated. An anatomically accurate, patient-specific map of the electrical conductivity of the left atrial tissue is obtained using data from delayed-enhancement magnetic resonance imaging (MRI), and validated against clinical electroanatomic mapping measurements. The patient-specific intensity maps from two accepted image segmentation and registration techniques are evaluated and compared, and the approach is shown to be highly sensitive to the technique used. The segmentation technique and direction of the maximum intensity projection are both critical in interpreting regions of fibrosis from the patient specific intensity maps. The clinical data suggests a linear relationship between the intensity and the local conduction velocity, which is incorporated into the atrial model through calibration of the conductivity tensor. Simulations of the resulting atrial models produce activation patterns that strongly correlate with clinical recordings. A novel semi-automated algorithm is used for landmark selection for image registration for spatially comparing the electroanatomical and MRI based images. The thesis concludes with a discussion of using the resulting computational model to interrogate the underlying structural and functional substrates of patients with atrial fibrillation.
APA, Harvard, Vancouver, ISO, and other styles
50

Kuenzi, Maribeth. "AN INTEGRATED MODEL OF WORK CLIMATE." Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2530.

Full text
Abstract:
Management scholars have become increasingly interested in the role of organizational context. As part of this trend, research on work climates has thrived. This contemporary climate research differs from traditional approaches by concentrating on facet-specific climate types like service or innovation, rather than general, global conceptualizations of climate. Consequently, the climate literature has become fragmented and disorderly. I seek to remedy this in my dissertation. Specifically, I propose and test an integrated model of work climate that examines both molar and facet-specific climates. Chapter 1 is a review of the organizational work climate literature. This review seeks to review, reorganize, and reintegrate the climate literature. In addition, this review brought to light an issue that hinders the integration of the climate literatures: the literature does not contain a quality instrument for assessing the general characteristics of the molar work climate of an organization. In Chapter 2, I develop a theoretically-driven measure of work climate by drawing on the competing values framework (Quinn & Rohrbaugh, 1983). Preliminary results from three studies suggest that the proposed four-component model of molar work climate appears to be viable. The results indicate the instrument has internal reliability. Further, the results demonstrate discriminant, convergent, and criterion-related validity. In Chapter 3, I propose and test an integrated model of work climate by drawing on bandwidth-fidelity theory (Cronbach & Gleser, 1957). I predict that facet-specific climates will be more strongly related to specific outcomes and molar climates will be more strongly related to global outcomes. Further, I suggest weaker, indirect relationships between molar climate and specific outcomes and between facet-specific climates and global outcomes. The results indicate support for my predictions.
Ph.D.
Department of Management
Business Administration
Business Administration PhD
APA, Harvard, Vancouver, ISO, and other styles
