Academic literature on the topic 'Regression Monte-Carlo scheme'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Regression Monte-Carlo scheme.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Regression Monte-Carlo scheme":

1

Izydorczyk, Lucas, Nadia Oudjane, and Francesco Russo. "A fully backward representation of semilinear PDEs applied to the control of thermostatic loads in power systems." Monte Carlo Methods and Applications 27, no. 4 (October 21, 2021): 347–71. http://dx.doi.org/10.1515/mcma-2021-2095.

Abstract:
We propose a fully backward representation of semilinear PDEs with application to stochastic control. Based on this, we develop a fully backward Monte-Carlo scheme that allows the regression grid to be generated backwardly in time, as the value function is computed. This offers two key advantages in terms of computational efficiency and memory. First, the grid is generated adaptively in the areas of interest, and second, there is no need to store the entire grid. The performance of this technique is compared in simulations to the traditional Monte-Carlo forward-backward approach on a control problem of thermostatic loads.
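
For orientation, the core of any regression Monte-Carlo scheme of this kind is a backward induction in which conditional expectations are replaced by least-squares regressions over simulated paths. The sketch below (with a made-up driver and terminal condition) shows the generic forward-grid version of that idea; the paper's contribution, generating the regression grid backwardly in time, is not reproduced here.

```python
# Minimal sketch of a plain (forward-grid) regression Monte-Carlo scheme for a
# toy BSDE  Y_t = g(X_T) + int_t^T f(Y_s) ds - int_t^T Z_s dW_s,  with dX = dW.
# Illustrative only; the paper's *fully backward* grid generation is not shown.
import numpy as np

rng = np.random.default_rng(0)
M, N, T = 20_000, 50, 1.0          # paths, time steps, horizon
dt = T / N
g = lambda x: np.cos(x)            # hypothetical terminal condition
f = lambda y: -0.5 * y             # hypothetical driver

# Forward simulation of the regression grid X (here simply Brownian motion).
dW = rng.normal(0.0, np.sqrt(dt), size=(M, N))
X = np.concatenate([np.zeros((M, 1)), np.cumsum(dW, axis=1)], axis=1)

# Backward induction: project the continuation value on a polynomial basis.
Y = g(X[:, -1])
for i in range(N - 1, -1, -1):
    basis = np.vander(X[:, i], 4)             # basis {x^3, x^2, x, 1}
    target = Y + f(Y) * dt                    # explicit Euler step of the driver
    coef, *_ = np.linalg.lstsq(basis, target, rcond=None)
    Y = basis @ coef                          # conditional expectation estimate

print("estimated u(0, 0):", Y.mean())
```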
2

Bolarinwa, Folashade Adeola, Olusola Samuel Makinde, and Olusoga Akin Fasoranbaku. "A new Bayesian ridge estimator for logistic regression in the presence of multicollinearity." World Journal of Advanced Research and Reviews 20, no. 3 (December 30, 2023): 458–65. http://dx.doi.org/10.30574/wjarr.2023.20.3.2415.

Abstract:
This research introduces Bayesian schemes for estimating logistic regression parameters in the presence of multicollinearity. The Bayesian schemes involve the introduction of a prior together with the likelihood, which results in a posterior distribution that is not tractable, hence the use of a numerical method, i.e., the Gibbs sampler. Different levels of multicollinearity were chosen, p = 0.80, 0.85, 0.90, 0.95, 0.99 and 0.999, to accommodate severe, very severe, and nearly perfect states of multicollinearity, with sample sizes taken as 10, 20, 30, 50, 100, 200, 300 and 500. Different ridge parameters k were introduced to remedy the effect of multicollinearity. The explanatory variables used were 3 and 7. Model estimation was carried out using the Bayesian approach via the Gibbs sampler of Markov Chain Monte Carlo simulation. The mean square error (MSE) of the Bayesian logistic regression estimation was compared with frequentist methods of estimation. The result shows a minimum mean square error with the Bayesian scheme compared to the frequentist method.
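
As a rough illustration of Bayesian ridge-type estimation for logistic regression under multicollinearity, the sketch below samples the posterior with a random-walk Metropolis algorithm; the paper itself uses a Gibbs sampler, which requires data-augmentation machinery omitted here. The design matrix, true coefficients, and ridge parameter k are all invented for the example.

```python
# Sketch: Bayesian logistic regression with a ridge (Gaussian) prior, sampled
# with random-walk Metropolis. The paper uses a Gibbs sampler; Metropolis is
# used here only because it needs no data-augmentation machinery.
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 200, 3, 1.0                         # sample size, predictors, ridge parameter

# Simulated, deliberately collinear design (correlation ~0.9 between columns).
z = rng.normal(size=(n, 1))
X = 0.9 * z + np.sqrt(1 - 0.9**2) * rng.normal(size=(n, p))
beta_true = np.array([1.0, -0.5, 0.25])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

def log_post(beta):
    eta = X @ beta
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
    logprior = -0.5 * k * beta @ beta         # ridge prior N(0, k^{-1} I)
    return loglik + logprior

beta, draws = np.zeros(p), []
for _ in range(20_000):
    prop = beta + 0.1 * rng.normal(size=p)
    if np.log(rng.uniform()) < log_post(prop) - log_post(beta):
        beta = prop
    draws.append(beta)
draws = np.array(draws[5_000:])               # discard burn-in
print("posterior means:", draws.mean(axis=0))
```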
3

Gobet, E., J. G. López-Salas, P. Turkedjiev, and C. Vázquez. "Stratified Regression Monte-Carlo Scheme for Semilinear PDEs and BSDEs with Large Scale Parallelization on GPUs." SIAM Journal on Scientific Computing 38, no. 6 (January 2016): C652–C677. http://dx.doi.org/10.1137/16m106371x.

4

Trinchero, Riccardo, and Flavio Canavero. "Use of an Active Learning Strategy Based on Gaussian Process Regression for the Uncertainty Quantification of Electronic Devices." Engineering Proceedings 3, no. 1 (October 30, 2020): 3. http://dx.doi.org/10.3390/iec2020-06967.

Abstract:
This paper presents a preliminary version of an active learning (AL) scheme for sample selection aimed at the development of a surrogate model for uncertainty quantification based on Gaussian process regression. The proposed AL strategy iteratively searches for new candidate points to be included in the training set by trying to minimize the relative posterior standard deviation provided by the Gaussian process regression surrogate. The above scheme has been applied to the construction of a surrogate model for the statistical analysis of the efficiency of a switching buck converter as a function of seven uncertain parameters. The performance of the surrogate model constructed via the proposed active learning method is compared with that provided by an equivalent model built via Latin hypercube sampling. The results of a Monte Carlo simulation with the computational model are used as a reference.
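
The AL loop the abstract describes can be summarized in a few lines. The sketch below uses scikit-learn's Gaussian process regression and greedily adds the candidate with the largest posterior standard deviation; note that the paper minimizes the relative posterior standard deviation and its model has seven uncertain parameters, whereas the one-dimensional test function here is a stand-in.

```python
# Sketch of a posterior-standard-deviation-driven active learning loop using
# scikit-learn's Gaussian process regression. The 1-D test function is
# hypothetical (the paper uses a seven-parameter buck-converter model).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
f = lambda x: np.sin(3 * x) + 0.5 * x         # stand-in for the expensive model

X_train = rng.uniform(0, 3, size=(4, 1))      # small initial design
y_train = f(X_train).ravel()
X_cand = np.linspace(0, 3, 400).reshape(-1, 1)

for _ in range(15):                           # active-learning iterations
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X_train, y_train)
    _, std = gp.predict(X_cand, return_std=True)
    x_new = X_cand[np.argmax(std)]            # most uncertain candidate
    X_train = np.vstack([X_train, x_new])
    y_train = np.append(y_train, f(x_new)[0])

print("final training-set size:", len(X_train))
```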
5

Gobet, Emmanuel, José Germán López-Salas, and Carlos Vázquez. "Quasi-Regression Monte-Carlo Scheme for Semi-Linear PDEs and BSDEs with Large Scale Parallelization on GPUs." Archives of Computational Methods in Engineering 27, no. 3 (April 4, 2019): 889–921. http://dx.doi.org/10.1007/s11831-019-09335-x.

6

Khan, Sajid Ali, Sayyad Khurshid, Shabnam Arshad, and Owais Mushtaq. "Bias Estimation of Linear Regression Model with Autoregressive Scheme using Simulation Study." Journal of Mathematical Analysis and Modeling 2, no. 1 (March 29, 2021): 26–39. http://dx.doi.org/10.48185/jmam.v2i1.131.

Abstract:
In regression modeling, first-order autocorrelated errors are often a problem when the data also suffer from problems with the independent variables. Ordinary Least Squares (OLS) estimation is then no longer the best choice, and Generalized Least Squares (GLS) estimation provides an alternative. The Monte Carlo simulation illustrates that regression estimation using data transformed according to the GLS method provides estimates of the regression coefficients which are superior to OLS estimates. In GLS, we observe that for sample size $200$ and $\sigma = 3$ with correlation level $0.90$, the bias of the GLS $\beta_0$ is $-0.1737$, which is smaller than all other bias estimates, while for sample size $200$ and $\sigma = 1$ with correlation level $0.90$, the bias of the GLS $\beta_0$ is $8.6802$, which is the maximum across all levels. Similarly, the minimum and maximum bias values of OLS and GLS for $\beta_1$ are $-0.0816$, $-7.6101$ and $0.1371$, $0.1383$, respectively. The average values of the parameters of the OLS and GLS estimations with different sample sizes and correlation levels are estimated. It is found that for large samples both methods give similar results, but for small sample sizes GLS fits better than OLS.
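
A minimal version of the reported bias experiment might look as follows: simulate a linear model with AR(1) errors, then compare OLS with GLS via the Prais-Winsten transform. The correlation level, sigma, and sample size mirror values quoted above; treating the autocorrelation coefficient as known is a simplification, and the regression coefficients are invented.

```python
# Sketch of a Monte Carlo bias comparison: OLS versus GLS (Prais-Winsten
# transform, AR(1) coefficient treated as known) under autocorrelated errors.
import numpy as np

rng = np.random.default_rng(3)
n, rho, sigma, reps = 200, 0.90, 3.0, 2_000
b0, b1 = 2.0, 0.5
bias_ols, bias_gls = [], []

for _ in range(reps):
    x = rng.normal(size=n)
    e = np.empty(n)                            # AR(1) errors
    e[0] = rng.normal(0, sigma / np.sqrt(1 - rho**2))
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal(0, sigma)
    y = b0 + b1 * x + e

    X = np.column_stack([np.ones(n), x])
    bias_ols.append(np.linalg.lstsq(X, y, rcond=None)[0] - [b0, b1])

    # GLS via the Prais-Winsten transform (rho assumed known here).
    Xs = np.vstack([np.sqrt(1 - rho**2) * X[0], X[1:] - rho * X[:-1]])
    ys = np.append(np.sqrt(1 - rho**2) * y[0], y[1:] - rho * y[:-1])
    bias_gls.append(np.linalg.lstsq(Xs, ys, rcond=None)[0] - [b0, b1])

print("mean OLS bias (b0, b1):", np.mean(bias_ols, axis=0))
print("mean GLS bias (b0, b1):", np.mean(bias_gls, axis=0))
```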
7

Wang, Han, Lingwei Xu, and Xianpeng Wang. "Outage Probability Performance Prediction for Mobile Cooperative Communication Networks Based on Artificial Neural Network." Sensors 19, no. 21 (November 4, 2019): 4789. http://dx.doi.org/10.3390/s19214789.

Abstract:
This paper investigates outage probability (OP) performance predictions using transmit antenna selection (TAS) and derives exact closed-form OP expressions for a TAS scheme. It uses Monte-Carlo simulations to evaluate OP performance and verify the analysis. A back-propagation (BP) neural network-based OP performance prediction algorithm is proposed and compared with extreme learning machine (ELM), locally weighted linear regression (LWLR), support vector machine (SVM), and BP neural network methods. The proposed method was found to produce more accurate OP performance predictions than the other prediction methods.
8

Seo, Jung-In, Young Eun Jeon, and Suk-Bok Kang. "New Approach for a Weibull Distribution under the Progressive Type-II Censoring Scheme." Mathematics 8, no. 10 (October 5, 2020): 1713. http://dx.doi.org/10.3390/math8101713.

Abstract:
This paper proposes a new approach based on the regression framework employing a pivotal quantity to estimate unknown parameters of a Weibull distribution under the progressive Type-II censoring scheme, which provides a closed form solution for the shape parameter, unlike its maximum likelihood estimator counterpart. To resolve serious rounding errors for the exact mean and variance of the pivotal quantity, two different types of Taylor series expansion are applied, and the resulting performance is enhanced in terms of the mean square error and bias obtained through the Monte Carlo simulation. Finally, an actual application example, including a simple goodness-of-fit analysis of the actual test data based on the pivotal quantity, proves the feasibility and applicability of the proposed approach.
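
The paper's pivotal-quantity estimator under progressive Type-II censoring is not reproduced here, but the following sketch conveys the regression flavour of such closed-form Weibull estimation: median-rank regression on a complete sample, where the linearized CDF yields the shape parameter as a regression slope. Sample size and true parameters are invented.

```python
# Simplified illustration of a regression-type Weibull fit: median-rank
# regression on a *complete* sample (the paper treats progressively Type-II
# censored data with a pivotal quantity, which is a different estimator).
import numpy as np

rng = np.random.default_rng(4)
shape_true, scale_true, n = 2.5, 10.0, 100
x = np.sort(scale_true * rng.weibull(shape_true, n))

F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)    # median-rank plotting positions
u = np.log(x)                                  # ln of ordered failure times
v = np.log(-np.log(1.0 - F))                   # ln(-ln(1 - F))

# The Weibull CDF linearizes as  v = shape * u - shape * ln(scale).
A = np.column_stack([u, np.ones(n)])
slope, intercept = np.linalg.lstsq(A, v, rcond=None)[0]
print("shape estimate:", slope)
print("scale estimate:", np.exp(-intercept / slope))
```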
9

Morales, María, Carmelo Rodríguez, and Antonio Salmerón. "Selective Naive Bayes for Regression Based on Mixtures of Truncated Exponentials." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 15, no. 06 (December 2007): 697–716. http://dx.doi.org/10.1142/s0218488507004959.

Abstract:
Naive Bayes models have been successfully used in classification problems where the class variable is discrete. These models have also been applied to regression or prediction problems, i.e., classification problems where the class variable is continuous, but usually under the assumption that the joint distribution of the feature variables and the class is multivariate Gaussian. In this paper we are interested in regression problems where some of the feature variables are discrete while the others are continuous. We propose a Naive Bayes predictor based on the approximation of the joint distribution by a Mixture of Truncated Exponentials (MTE). We have followed a filter-wrapper procedure for selecting the variables to be used in the construction of the model. This scheme is based on the mutual information between each of the candidate variables and the class. Since the mutual information cannot be computed exactly for the MTE distribution, we introduce an unbiased estimator of it, based on Monte Carlo methods. We test the performance of the proposed model on artificial and real-world datasets.
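
A bare-bones version of a Monte Carlo mutual-information estimator of the kind referred to above can be written as follows, with the MTE densities replaced by known Gaussians so the example stays self-contained; only the estimator's structure, averaging log p(x|c) - log p(x) over samples, is the point. The mixture weights and means are invented.

```python
# Sketch of a plain Monte Carlo estimator of the mutual information between a
# discrete class and a continuous feature (the authors' MTE densities are
# replaced here by known Gaussians for self-containment).
import numpy as np

rng = np.random.default_rng(5)
pc = np.array([0.4, 0.6])                      # P(C = 0), P(C = 1)
mu = np.array([0.0, 2.0])                      # X | C = c  ~  N(mu_c, 1)

def norm_pdf(x, m):
    return np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2 * np.pi)

# Sample (C, X) pairs, then average log p(x | c) - log p(x).
M = 200_000
c = rng.choice(2, size=M, p=pc)
x = rng.normal(mu[c], 1.0)
p_cond = norm_pdf(x, mu[c])                    # p(x | c)
p_marg = pc[0] * norm_pdf(x, mu[0]) + pc[1] * norm_pdf(x, mu[1])
print("Monte Carlo MI estimate (nats):", np.mean(np.log(p_cond) - np.log(p_marg)))
```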
10

Ma, Zhi-Sai, Li Liu, Si-Da Zhou, and Lei Yu. "Output-Only Modal Parameter Recursive Estimation of Time-Varying Structures via a Kernel Ridge Regression FS-TARMA Approach." Shock and Vibration 2017 (2017): 1–14. http://dx.doi.org/10.1155/2017/8176593.

Abstract:
Modal parameter estimation plays an important role in vibration-based damage detection and is worth more attention and investigation, as changes in modal parameters are usually being used as damage indicators. This paper focuses on the problem of output-only modal parameter recursive estimation of time-varying structures based upon parameterized representations of the time-dependent autoregressive moving average (TARMA). A kernel ridge regression functional series TARMA (FS-TARMA) recursive identification scheme is proposed and subsequently employed for the modal parameter estimation of a numerical three-degree-of-freedom time-varying structural system and a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudolinear regression FS-TARMA approach via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics in a recursive manner.
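
For context, the regression building block of the proposed identification scheme, kernel ridge regression, looks as follows in its basic batch form; the recursive FS-TARMA formulation and the modal-analysis layer are omitted, and the data and hyperparameters are invented.

```python
# Sketch of kernel ridge regression in its basic batch form, fitting a
# time-varying response whose frequency drifts over time.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 300).reshape(-1, 1)
y = np.sin(2 * np.pi * (5 + 3 * t.ravel()) * t.ravel()) + 0.1 * rng.normal(size=300)

model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=50.0).fit(t, y)
print("training RMSE:", np.sqrt(np.mean((model.predict(t) - y) ** 2)))
```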

Dissertations / Theses on the topic "Regression Monte-Carlo scheme":

1

Min, Ming. "Numerical Methods for European Option Pricing with BSDEs." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-theses/1169.

Abstract:
This paper aims to numerically calculate the all-inclusive European option price based on the XVA model. For European-type options, the XVA can be calculated as the solution of a BSDE with a specific driver function. We use the FT scheme to find a linear approximation of the nonlinear BSDE and then use the linear regression Monte-Carlo method to calculate the option price.
2

Izydorczyk, Lucas. "Probabilistic backward McKean numerical methods for PDEs and one application to energy management." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAE008.

Abstract:
This thesis concerns McKean Stochastic Differential Equations (SDEs) to represent possibly non-linear Partial Differential Equations (PDEs). Those depend not only on the time and position of a given particle, but also on its probability law. In particular, we treat the unusual case of Fokker-Planck type PDEs with prescribed final data. We discuss existence and uniqueness for those equations and provide a probabilistic representation in the form of a McKean type equation, whose unique solution corresponds to the time-reversal dynamics of a diffusion process. We introduce the notion of fully backward representation of a semilinear PDE: that consists, in fact, in the coupling of a classical Backward SDE with an underlying process evolving backwardly in time. We also discuss an application to the representation of the Hamilton-Jacobi-Bellman (HJB) equation in stochastic control. Based on this, we propose a Monte-Carlo algorithm to solve some control problems which has advantages in terms of computational efficiency and memory when compared to traditional forward-backward approaches. We apply this method in the context of demand side management problems occurring in power systems. Finally, we survey the use of generalized McKean SDEs to represent non-linear and non-conservative extensions of Fokker-Planck type PDEs.

Books on the topic "Regression Monte-Carlo scheme":

1

Sobczyk, Eugeniusz Jacek. Uciążliwość eksploatacji złóż węgla kamiennego wynikająca z warunków geologicznych i górniczych. Instytut Gospodarki Surowcami Mineralnymi i Energią PAN, 2022. http://dx.doi.org/10.33223/onermin/0222.

Abstract:
Hard coal mining is characterised by features that pose numerous challenges to its current operations and cause strategic and operational problems in planning its development. The most important of these include the high capital intensity of mining investment projects and the dynamically changing environment in which the sector operates, while the long-term role of the sector depends on factors originating at both national and international levels. At the same time, the conditions for coal mining are deteriorating: the resources more readily available in active mines are being exhausted, mining depths are increasing, temperature levels in pits are rising, transport routes for staff and materials are getting longer, effective working time is decreasing, natural hazards are increasing, and seams with an increasing content of waste rock are being mined. The mining industry is currently in a very difficult situation, both in technical (mining) and economic terms. It cannot be ignored, however, that the difficult financial situation of Polish mining companies is largely exacerbated by their high operating costs. The cost of obtaining coal and its price are two key elements that determine the level of efficiency of Polish mines. This situation could be improved by streamlining the planning processes, striving for production planning that is as predictable as possible and, at the same time, economically efficient. In this respect, it is helpful to plan the production from operating longwalls with full awareness of the complexity of geological and mining conditions and the resulting economic consequences. The constraints on increasing the efficiency of the mining process are due to the technical potential of the mining process, organisational factors and, above all, geological and mining conditions. The main objective of the monograph is to identify relations between geological and mining parameters and the level of longwall mining costs and their daily output. In view of the above, it was assumed that it was possible to present the relationship between the costs of longwall mining and the daily coal output from a longwall as a function of onerous geological and mining factors. The monograph presents two models of onerous geological and mining conditions, including natural hazards, deposit (seam) parameters, mining (technical) parameters and environmental factors. The models were used to calculate two onerousness indicators, WUe and WUt, which synthetically define the level of impact of onerous geological and mining conditions on the mining process in relation to:
– operating costs at longwall faces (indicator WUe),
– daily longwall mining output (indicator WUt).
In the next research step, the analysis of direct relationships of selected geological and mining factors with longwall costs and the mining output level was conducted. For this purpose, two statistical models were built for the following dependent variables: unit operating cost (Model 1) and daily longwall mining output (Model 2). The models served two additional sub-objectives: interpretation of the influence of independent variables on dependent variables, and point forecasting. Statistical models were built on the basis of the historical production results of seven selected Polish mines. On the basis of the variability of geological and mining conditions at 120 longwalls, the influence of individual parameters on longwall mining between 2010 and 2019 was determined.
The identified relationships made it possible to formulate a numerical forecast of the unit production cost and daily longwall mining output in relation to the level of expected onerousness. The projection period was assumed to be 2020–2030. On this basis, an opinion was formulated on the forecast of the expected unit production costs and the output of the 259 longwalls planned to be mined at these mines. A procedure scheme was developed using the following methods: 1) the Analytic Hierarchy Process (AHP), a mathematical multi-criteria decision-making method; 2) comparative multivariate analysis; 3) regression analysis; 4) Monte Carlo simulation. The utilitarian purpose of the monograph is to provide the research community with the concept of building models that can be used to solve real decision-making problems during longwall planning in hard coal mines. The layout of the monograph, consisting of an introduction, eight main sections and a conclusion, follows the objectives set out above. Section One presents the methodology used to assess the impact of onerous geological and mining conditions on the mining process. Multi-Criteria Decision Analysis (MCDA) is reviewed and basic definitions used in the following part of the paper are introduced. The section includes a description of the AHP, which was used in the presented analysis. The individual factors affecting the mining process, resulting from natural hazards, from the geological structure of the deposit (seam), from limitations caused by technical requirements, and from the impact of mining on the environment, are described exhaustively in Section Two. Sections Three and Four present the construction of two hierarchical models of the onerousness of geological and mining conditions: the first in the context of extraction costs and the second in relation to daily longwall mining. The procedure for valuing the importance of their components by a group of experts (pairwise comparison of criteria and sub-criteria on the basis of Saaty's 9-point comparison scale) is presented. The AHP method is very sensitive to even small changes in the values of the comparison matrix, so a sensitivity analysis was carried out to determine the stability of the valuation of both onerousness models; it is described in detail in Section Five. Section Six is devoted to the issue of constructing the aggregate indices WUe and WUt, which synthetically measure the impact of onerous geological and mining conditions on the mining process in individual longwalls and allow for a linear ordering of longwalls according to increasing levels of onerousness. Section Seven opens the research part of the work, which analyses the results of the developed models and indicators in individual mines. A detailed analysis is presented of the assessment of the impact of onerous mining conditions on mining costs in selected seams of the analysed mines; in the case of the impact on daily longwall mining output, the variability of this process in individual fields (lots) of the mines is characterised. Section Eight presents the regression equations for the dependence of the costs and level of extraction on the aggregated onerousness indicators WUe and WUt. The regression models f(KJC_N) and f(W) developed in this way are used to forecast the unit mining costs and daily output of the designed longwalls in the context of diversified geological and mining conditions. The use of regression models is of great practical importance: it makes it possible to approximate unit costs and daily output for newly designed longwall workings, and this knowledge may significantly improve the quality of planning processes and the effectiveness of the mining process.
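
Of the four methods listed, the AHP step is the easiest to make concrete. The sketch below derives criterion weights from a Saaty-scale pairwise-comparison matrix via the principal eigenvector and checks the consistency ratio; the 3x3 matrix is a made-up example, not one of the monograph's.

```python
# Sketch of the AHP step: priority weights from a reciprocal Saaty-scale
# pairwise-comparison matrix via the principal eigenvector, plus the
# consistency ratio check.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])                # made-up comparison matrix

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                    # principal eigenvalue index
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # priority vector (weights)

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # Saaty's random index
print("weights:", w, " consistency ratio:", ci / ri)
```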

Book chapters on the topic "Regression Monte-Carlo scheme":

1

Habyarimana, Ephrem, and Sofia Michailidou. "Genomic Prediction and Selection in Support of Sorghum Value Chains." In Big Data in Bioeconomy, 207–18. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71069-9_16.

Abstract:
Genomic prediction and selection models (GS) were deployed as part of the DataBio project infrastructure and solutions. The work addressed end-user requirements, i.e., the need for cost-effectiveness of the implemented technologies, simplified breeding schemes, and shortening the time to cultivar development by selecting for genetic merit. Our solutions applied genomic modelling in order to sustainably improve productivity and profits. GS models were implemented in the sorghum crop for several breeding scenarios. We fitted the best linear unbiased predictions data using Bayesian ridge regression, genomic best linear unbiased predictions, the Bayesian least absolute shrinkage and selection operator, and BayesB algorithms. The performance of the models was evaluated using Monte Carlo cross-validation with 70% and 30%, respectively, as training and validation sets. Our results show that genomic models perform comparably with traditional methods under single environments. Under multiple environments, predicting non-field-evaluated lines benefits from borrowing information from lines that were evaluated in other environments. Accounting for environmental noise and other factors, this model also gave comparable accuracy with traditional methods, but higher compared to the single environment model. The GS accuracy was comparable for the genomic selection index, aboveground dry biomass yield, and plant height, while it was lower for the dry mass fraction of the fresh weight. The genomic selection model performances obtained in our pilots are high enough to sustain sorghum breeding for several traits, including antioxidant production, and allow important genetic gains per unit of time and cost.
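
The Monte Carlo cross-validation protocol mentioned above (repeated random 70/30 splits) is easy to sketch. The example below wraps it around plain ridge regression on synthetic marker-like data; accuracy is measured, as is common in genomic selection, by the correlation between predicted and observed values. The data dimensions and penalty are invented.

```python
# Sketch of Monte Carlo cross-validation (repeated random 70/30 splits)
# around ridge regression on synthetic marker-like data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import ShuffleSplit

rng = np.random.default_rng(7)
X = rng.choice([0.0, 1.0, 2.0], size=(300, 500))   # toy marker matrix
beta = rng.normal(0, 0.1, size=500)
y = X @ beta + rng.normal(0, 1.0, size=300)        # toy phenotype

accs = []
for tr, va in ShuffleSplit(n_splits=50, test_size=0.3, random_state=0).split(X):
    model = Ridge(alpha=10.0).fit(X[tr], y[tr])
    accs.append(np.corrcoef(model.predict(X[va]), y[va])[0, 1])

print("mean predictive accuracy (r):", np.mean(accs))
```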
2

Yoshida, Ruriko, Hisayuki Hara, and Patrick M. Saluke. "Sequential Importance Sampling for Logistic Regression Model." In Computational Models for Biomedical Reasoning and Problem Solving, 231–55. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7467-5.ch009.

Abstract:
Logistic regression is one of the most popular models for classification in data science, and in general, it is easy to use. However, in order to conduct a goodness-of-fit test, we cannot apply asymptotic methods if we have sparse datasets. In that case, we have to conduct an exact conditional inference via a sampler, such as Markov Chain Monte Carlo (MCMC) or Sequential Importance Sampling (SIS). In this chapter, the authors investigate the rejection rate of the SIS procedure on multiple logistic regression models with categorical covariates. Using tools from algebra, they show that in general SIS can have a very high rejection rate even though we apply Linear Integer Programming (IP) to compute the support of the marginal distribution for each variable. More specifically, the semigroup generated by the columns of the design matrix for a multiple logistic regression has infinitely many "holes." They end with an application of a hybrid scheme of MCMC and SIS to NUN study data on Alzheimer's disease.

Conference papers on the topic "Regression Monte-Carlo scheme":

1

Pidaparthi, Bharath, and Samy Missoum. "A Multi-Fidelity Approach for Reliability Assessment Based on the Probability of Model Inconsistency." In ASME 2022 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/detc2022-90115.

Abstract:
Most multi-fidelity schemes rely on regression surrogates, such as Gaussian Processes, to combine low- and high-fidelity data. Contrary to these approaches, we propose a classification-based multi-fidelity scheme for reliability assessment. This multi-fidelity technique leverages low- and high-fidelity model evaluations to locally construct the failure boundaries using support vector machine (SVM) classifiers. These SVMs can subsequently be used to estimate the probability of failure using Monte Carlo simulations. At the core of this multi-fidelity scheme is an adaptive sampling routine driven by the probability of misclassification. This sampling routine explores sparsely sampled regions of inconsistency between the low- and high-fidelity models to iteratively refine the SVM approximation of the failure boundaries. A lookahead check, which looks one step into the future without any model evaluations, is employed to selectively filter the adaptive samples. A novel model selection framework, which adaptively defines a neighborhood of no confidence around the low-fidelity model, is used in this study to determine whether the adaptive samples should be evaluated with the high- or low-fidelity model. The proposed multi-fidelity scheme is tested on a few analytical examples with dimensions ranging from 2 to 10, and finally applied to assess the reliability of a miniature shell-and-tube heat exchanger.
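
The final estimation step described above, Monte Carlo simulation on a trained SVM classifier, can be sketched as follows; the limit-state function, input distribution, and training data are invented, and the adaptive sampling and multi-fidelity selection logic are not reproduced.

```python
# Sketch: once an SVM classifier separates safe from failed designs, the
# probability of failure is a Monte Carlo average of its predictions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(8)
limit_state = lambda x: x[:, 0] ** 2 + x[:, 1] ** 2 - 4.0   # failure if > 0

# Training data standing in for (filtered) low-/high-fidelity evaluations.
X_train = rng.normal(0, 1.5, size=(300, 2))
y_train = (limit_state(X_train) > 0).astype(int)
svm = SVC(kernel="rbf").fit(X_train, y_train)

# Monte Carlo simulation on the cheap surrogate.
X_mc = rng.normal(0, 1.5, size=(200_000, 2))
pf_svm = svm.predict(X_mc).mean()
pf_ref = (limit_state(X_mc) > 0).mean()        # reference (normally unknown)
print(f"P_f via SVM surrogate: {pf_svm:.4f}  reference: {pf_ref:.4f}")
```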
2

Chao, Manuel Arias, Darrel S. Lilley, Peter Mathé, and Volker Schloßhauer. "Calibration and Uncertainty Quantification of Gas Turbine Performance Models." In ASME Turbo Expo 2015: Turbine Technical Conference and Exposition. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/gt2015-42392.

Abstract:
Calibration and uncertainty quantification for gas turbine (GT) performance models is a key activity for GT manufacturers. The adjustment between the numerical model and measured GT data is obtained with a calibration technique. Since both the calibration parameters and the measurement data are uncertain, the calibration process is intrinsically stochastic. Traditional approaches for calibration of a numerical GT model are deterministic. Therefore, quantification of the remaining uncertainty of the calibrated GT model is not clearly derived. However, there is a business need to provide the probability of the GT performance predictions at tested or untested conditions. Furthermore, a GT performance prediction might be required for a new GT model when no test data for this model are available yet. In this case, quantification of the uncertainty of the baseline GT, on which the new development is based, and propagation of the design uncertainty for the new GT are required for risk assessment and decision-making purposes. Using a GT model as a benchmark, the calibration problem is discussed and several possible model calibration methodologies are presented. Uncertainty quantification based on both a conventional least squares method and a Bayesian approach will be presented and discussed. For the general nonlinear model a fully Bayesian approach is conducted, and the posterior of the calibration problem is computed based on a Markov Chain Monte Carlo simulation using a Metropolis-Hastings sampling scheme. When considering the calibration parameters dependent on operating conditions, a novel formulation of the GT calibration problem is presented in terms of a Gaussian process regression problem.
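
The fully Bayesian branch of the methodology can be illustrated with a scalar toy problem: a Metropolis-Hastings sampler drawing the calibration parameter of a made-up performance model from its posterior given noisy measurements. The simulator, noise level, and prior are all invented for the example.

```python
# Sketch of Bayesian model calibration with a Metropolis-Hastings sampler:
# infer a scalar calibration parameter theta of a simple simulator from
# noisy synthetic measurements.
import numpy as np

rng = np.random.default_rng(9)
simulator = lambda theta, x: theta * np.sqrt(x)     # stand-in performance model
x_obs = np.linspace(1.0, 4.0, 20)
y_obs = simulator(2.0, x_obs) + rng.normal(0, 0.1, size=20)
noise_sd, prior_sd = 0.1, 5.0

def log_post(theta):
    resid = y_obs - simulator(theta, x_obs)
    return -0.5 * np.sum((resid / noise_sd) ** 2) - 0.5 * (theta / prior_sd) ** 2

theta, chain = 1.0, []
for _ in range(20_000):
    prop = theta + 0.05 * rng.normal()              # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
chain = np.array(chain[5_000:])                     # discard burn-in
print(f"posterior mean {chain.mean():.3f} +/- {chain.std():.3f}")
```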
