Dissertations / Theses on the topic 'Kriging and cokriging models'

Consult the top 50 dissertations / theses for your research on the topic 'Kriging and cokriging models.'

1

Adu, Agyemang Adela Beauty. "Vulnerability Assessment of Groundwater to NO3 Contamination Using GIS, DRASTIC Model and Geostatistical Analysis." Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etd/3264.

Abstract:
The study employed Geographic Information System (GIS) technology to investigate the vulnerability of groundwater to NO3 contamination in Buncombe County, North Carolina, using two approaches. In the first study, the spatial distribution of NO3 contamination was analyzed in a GIS environment using kriging interpolation, and cokriging interpolation was used to establish how NO3 relates to land cover types and the depth to the water table of wells in the county. The second study used the DRASTIC model to assess the vulnerability of groundwater in Buncombe County to NO3 contamination. To obtain an accurate vulnerability index, the DRASTIC parameters were modified to fit the hydrogeological setting of the county. A final vulnerability map was created using regression-based DRASTIC, a statistical method that measures how NO3 relates to each of the DRASTIC variables. Although NO3 concentrations in the county did not exceed the USEPA standard limit (10 mg/L), some areas had NO3 as high as 8.5 mg/L.
2

YATES, SCOTT RAYMOND. "GEOSTATISTICAL METHODS FOR ESTIMATING SOIL PROPERTIES (KRIGING, COKRIGING, DISJUNCTIVE)." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/187990.

Abstract:
Geostatistical methods were investigated in order to find efficient and accurate means for estimating a regionalized random variable in space based on limited sampling. The random variables investigated were (1) the bare soil temperature (BST) and crop canopy temperature (CCT) which were collected from a field located at the University of Arizona's Maricopa Agricultural Center, (2) the bare soil temperature and gravimetric moisture content (GMC) collected from a field located at the Campus Agricultural Center and (3) the electrical conductivity (EC) data collected by Al-Sanabani (1982). The BST was found to exhibit strong spatial auto-correlation (typically greater than 0.65 at 0⁺ lagged distance). The CCT generally showed a weaker spatial correlation (values varied from 0.15 to 0.84) which may be due to the length of time required to obtain an "instantaneous" sample as well as wet soil conditions. The GMC was found to be strongly spatially dependent and at least 71 samples were necessary in order to obtain reasonably well behaved covariance functions. Two linear estimators, the ordinary kriging and cokriging estimators, were investigated and compared in terms of the average kriging variance and the sum of squares error between the actual and estimated values. The estimate was obtained using the jackknifing technique. The results indicate that a significant improvement in the average kriging variance and the sum of squares could be expected by using cokriging for GMC and including 119 BST values in the analysis. A nonlinear estimator in one variable, the disjunctive kriging estimator, was also investigated and was found to offer improvements over the ordinary kriging estimator in terms of the average kriging variance and the sum of squares error. It was found that additional information at the estimation site is a more important consideration than whether the estimator is linear or nonlinear. Disjunctive kriging produces an estimator of the conditional probability that the value at an unsampled location is greater than an arbitrary cutoff level. This latter feature of disjunctive kriging is explored and has implications in aiding management decisions.
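
As a brief illustration of the ordinary kriging estimator discussed above, the following minimal Python sketch builds covariances from a spherical variogram, solves the ordinary kriging system with its Lagrange multiplier, and returns the estimate and kriging variance at one unsampled location. The coordinates, values, and variogram parameters are invented for illustration and are not taken from the thesis data.

```python
import numpy as np

def spherical_cov(h, sill=1.0, a=50.0, nugget=0.0):
    """Covariance derived from a spherical variogram model: C(h) = sill - gamma(h)."""
    h = np.asarray(h, dtype=float)
    gamma = np.where(
        h < a,
        nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3),
        sill,
    )
    gamma = np.where(h == 0.0, 0.0, gamma)  # gamma(0) = 0 by convention
    return sill - gamma

def ordinary_kriging(coords, values, target, **vario):
    """Ordinary kriging estimate and kriging variance at a single target location."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    d0 = np.linalg.norm(coords - target, axis=-1)
    # Kriging system: [[C, 1], [1', 0]] [lambda, mu]' = [c0, 1]'
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical_cov(d, **vario)
    A[n, n] = 0.0
    b = np.append(spherical_cov(d0, **vario), 1.0)
    sol = np.linalg.solve(A, b)
    lam, mu = sol[:n], sol[n]
    estimate = lam @ values
    variance = vario.get("sill", 1.0) - lam @ b[:n] - mu
    return estimate, variance

# Hypothetical soil-temperature readings at (x, y) locations in metres
coords = np.array([[0.0, 0.0], [30.0, 10.0], [15.0, 40.0], [50.0, 50.0]])
values = np.array([21.3, 22.1, 20.8, 23.0])
est, var = ordinary_kriging(coords, values, np.array([25.0, 25.0]),
                            sill=1.0, a=60.0, nugget=0.1)
print(f"OK estimate: {est:.2f}, kriging variance: {var:.2f}")
```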
3

Long, Andrew Edmund. "Cokriging, kernels, and the SVD: Toward better geostatistical analysis." Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186892.

Abstract:
Three forms of multivariate analysis, one very classical and the other two relatively new and little-known, are showcased and enhanced: the first is the Singular Value Decomposition (SVD), which is at the heart of many statistical, and now geostatistical, techniques; the second is the method of variogram analysis, which is one way of investigating spatial correlation in one or several variables; and the third is the process of interpolation known as cokriging, a method for optimizing the estimation of multivariate data based on the information provided through variogram analysis. The SVD is described in detail, and it is shown that the SVD can be generalized from its familiar matrix (two-dimensional) case to three, and possibly n, dimensions. This generalization we call the "Tensor SVD" (or TSVD), and we demonstrate useful applications in the field of geostatistics (and indicate ways in which it will be useful in other areas). Applications of the SVD to the tools of geostatistics are described: in particular, applications dependent on the TSVD, including variogram modelling in coregionalization. Variogram analysis in general is explored, and we propose broader use of an old tool (which we call the "corhogram", based on the variogram) which proves useful in helping one choose variables for multivariate interpolation. The reasoning behind kriging and cokriging is discussed, and a better algorithm for solving the cokriging equations is developed, which results in simultaneous kriging estimates for comparison with those obtained from cokriging. Links from kriging systems to kernel systems are made; discovering kernels equivalent to kriging systems will be useful in the case where data are plentiful. Finally, some results of the application of geostatistical techniques to a data set concerning nitrate pollution in the West Salt River Valley of Arizona are described.
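
For context on the SVD machinery the abstract builds on (the thesis's Tensor SVD itself is not reproduced here), the short Python sketch below applies the ordinary matrix SVD to a hypothetical multivariate sample matrix and forms a low-rank reconstruction, the basic operation that such geostatistical applications extend.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data matrix: 100 sample locations x 4 correlated variables
n, p = 100, 4
latent = rng.normal(size=(n, 2))
X = latent @ rng.normal(size=(2, p)) + 0.05 * rng.normal(size=(n, p))

# Singular Value Decomposition: X = U diag(s) Vt
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# A rank-2 reconstruction captures most of the structure of the 4 variables
k = 2
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
rel_err = np.linalg.norm(X - X_k) / np.linalg.norm(X)
explained = (s[:k] ** 2).sum() / (s ** 2).sum()

print(f"singular values: {np.round(s, 3)}")
print(f"rank-{k} relative error: {rel_err:.3f}, variance explained: {explained:.3f}")
```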
4

Johnson, Crystal. "Using Kriging, Cokriging, and GIS to Visualize Fe and Mn in Groundwater." Digital Commons @ East Tennessee State University, 2015. https://dc.etsu.edu/etd/2498.

Abstract:
For aesthetic, economic, and health-related reasons, the allowable concentrations of iron (Fe) and manganese (Mn) in drinking water are 0.3 mg/L and 0.05 mg/L, respectively. Water samples taken from private drinking wells in rural communities within Buncombe County, North Carolina contain these metals at concentrations higher than the suggested limits. This study focused on bedrock geology, elevation, saprolite thickness, and well depth to determine the factors affecting Fe and Mn. Using ArcGIS 10.2, spatial trends in Fe and Mn concentration ranges were visualized, and estimates of the metal concentrations were interpolated to unmonitored areas. Results from this analysis were used to create a map that delineates the spatial distribution of Fe and Mn. The study also established a statistically significant correlation between Fe and Mn concentrations, which can be attributed to bedrock geology. Additionally, higher Fe in groundwater was concentrated in shallower wells and valley areas.
5

Hemmati, Sahar. "Steady-State Co-Kriging Models." Thesis, West Virginia University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10614907.

Abstract:

In deterministic computer experiments, a computer code can often be run at different levels of complexity/fidelity, yielding a hierarchy of levels of code. The higher the fidelity, and hence the computational cost, the more accurate the output data. Methods based on the co-kriging methodology (Cressie, 2015) for predicting the output of a high-fidelity computer code by combining data generated at varying levels of fidelity have become popular over the last two decades. For instance, Kennedy and O'Hagan (2000) first proposed building a metamodel for multi-level computer codes using an auto-regressive model structure. Forrester et al. (2007) provide details on estimation of the model parameters and further investigate the use of co-kriging for multi-fidelity optimization based on the efficient global optimization algorithm of Jones et al. (1998). Qian and Wu (2008) propose a Bayesian hierarchical modeling approach for combining low-accuracy and high-accuracy experiments. More recently, Gratiet and Cannamela (2015) propose sequential design strategies using fast cross-validation techniques for multi-fidelity computer codes.

This research extends the co-kriging metamodeling methodology to steady-state simulation experiments. First, the mathematical structure of co-kriging is extended to take into account heterogeneous simulation output variances. Next, efficient steady-state simulation experimental designs are investigated for co-kriging to achieve high prediction accuracy in the estimation of steady-state parameters. Specifically, designs consisting of replicated longer simulation runs at a few design points and replicated shorter simulation runs at a larger set of design points are considered. A design with no replicated long simulation runs is also studied, along with different methods for estimating the output variance in the absence of replicated outputs.

The stochastic co-kriging (SCK) method is applied to an M/M/1 queueing system as well as an M/M/5 queueing system. In both examples, the prediction performance of the SCK model is promising, and it is shown that the SCK method provides better response surfaces than the stochastic kriging (SK) method.
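
To make the multi-fidelity idea concrete, here is a minimal Python sketch in the spirit of the Kennedy-O'Hagan auto-regressive structure cited above, not the author's stochastic co-kriging code: a Gaussian process is fit to many cheap low-fidelity runs, a scale factor rho and a discrepancy GP are fit to a few expensive high-fidelity runs, and the two are combined for prediction. The test functions, design points, and scikit-learn models are assumptions chosen for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

# Hypothetical low- and high-fidelity versions of the same response
f_hi = lambda x: (6 * x - 2) ** 2 * np.sin(12 * x - 4)
f_lo = lambda x: 0.5 * f_hi(x) + 10 * (x - 0.5) - 5

x_lo = np.linspace(0, 1, 11)[:, None]          # many cheap low-fidelity runs
x_hi = np.array([[0.0], [0.4], [0.6], [1.0]])  # few expensive high-fidelity runs

gp_lo = GaussianProcessRegressor(C(1.0) * RBF(0.2), normalize_y=True)
gp_lo.fit(x_lo, f_lo(x_lo).ravel())

# Auto-regressive link: y_hi(x) ~ rho * y_lo(x) + delta(x)
lo_at_hi = gp_lo.predict(x_hi)
y_hi = f_hi(x_hi).ravel()
rho = np.polyfit(lo_at_hi, y_hi, 1)[0]          # crude least-squares estimate of rho
gp_delta = GaussianProcessRegressor(C(1.0) * RBF(0.2), normalize_y=True)
gp_delta.fit(x_hi, y_hi - rho * lo_at_hi)       # GP on the discrepancy

x_test = np.linspace(0, 1, 5)[:, None]
pred = rho * gp_lo.predict(x_test) + gp_delta.predict(x_test)
print(np.column_stack([x_test.ravel(), pred, f_hi(x_test).ravel()]).round(2))
```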

6

Watanabe, Jorge. "Métodos geoestatísticos de co-estimativas: estudo do efeito da correlação entre variáveis na precisão dos resultados." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/44/44137/tde-14082008-165227/.

Abstract:
This master's dissertation presents the results of a survey of co-estimation methods commonly used in geostatistics: ordinary cokriging, collocated cokriging, and kriging with an external drift. Ordinary kriging was also considered, simply to illustrate how it performs when the primary variable is poorly sampled. Co-estimation methods depend on a secondary variable sampled over the estimation domain; moreover, this secondary variable should present a linear correlation with the main (primary) variable. Usually the primary variable is poorly sampled whereas the secondary variable is known over the whole estimation domain. For instance, in oil exploration the primary variable is porosity, measured on rock samples gathered from drill holes, and the secondary variable is seismic amplitude derived from processing seismic reflection data. Primary and secondary variables must present some degree of correlation, but it is not clear how the methods perform depending on that degree of correlation; that is the question addressed here. Thus, the co-estimation methods were tested on several data sets presenting different degrees of correlation, generated by computer using data transformation algorithms. Five correlation values were considered in this study: 0.993, 0.870, 0.752, 0.588, and 0.461. Collocated cokriging was the best method among all those tested. This method has an internal filter applied when computing the weight of the secondary variable, which in turn depends on the correlation coefficient: the greater the correlation coefficient, the greater the weight of the secondary variable. This means the method works even when the correlation coefficient between the primary and secondary variables is low, which is the most striking result of this research.
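
The behaviour highlighted above, namely the secondary-variable weight growing with the correlation coefficient, can be illustrated with a small collocated simple cokriging sketch in Python under a Markov-type assumption in which the cross-covariance is the primary covariance scaled by the correlation coefficient. The standardized toy values below are not taken from the dissertation's data sets.

```python
import numpy as np

def exp_cov(h, a=30.0):
    """Exponential covariance for standardized (unit-variance) variables."""
    return np.exp(-3.0 * np.asarray(h, dtype=float) / a)

def collocated_simple_cokriging(coords, z1, z2_at_target, target, rho, a=30.0):
    """Estimate the primary variable from n primary data plus the collocated
    secondary datum, assuming C12(h) = rho * C11(h) (Markov-type model)."""
    n = len(z1)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    d0 = np.linalg.norm(coords - target, axis=-1)
    # Left-hand side: primary-primary block plus one collocated secondary row/column
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = exp_cov(d, a)
    A[:n, n] = A[n, :n] = rho * exp_cov(d0, a)
    A[n, n] = 1.0
    # Right-hand side: covariances with the target location
    b = np.append(exp_cov(d0, a), rho)
    w = np.linalg.solve(A, b)
    estimate = w[:n] @ z1 + w[n] * z2_at_target
    return estimate, w[n]

coords = np.array([[0.0, 0.0], [20.0, 5.0], [10.0, 25.0]])
z1 = np.array([0.4, -0.2, 0.9])        # standardized primary data (e.g. porosity)
target = np.array([12.0, 12.0])
for rho in (0.461, 0.752, 0.993):
    est, w_sec = collocated_simple_cokriging(coords, z1, 0.6, target, rho)
    print(f"rho = {rho:.3f}: estimate = {est:.3f}, secondary weight = {w_sec:.3f}")
```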
7

Araújo, Cristina da Paixão. "Uso de informação secundária imprecisa e inacurada no planejamento de curto prazo." Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/127891.

Abstract:
Decisions from mineral exploration through mining are based on grade block models obtained from samples. To decrease the uncertainty in the estimates, short-term mine planning requires additional sampling to ensure accurate and precise predictions; as more samples become available, the estimates tend to become more reliable. In the exploration stage, sampling is usually performed by diamond drill holes (DDH), which are expensive but produce accurate and precise samples, so at this stage there are few data of high quality. In the production stage, sampling is obtained by other techniques because of the high cost of DDH; in general these samples have low quality (imprecise and biased) and are not controlled by QA/QC protocols. This study evaluates the impact of using imprecise data in short-term mine planning. Two different data sets were analyzed. The first case used the exhaustive Walker Lake data set as the source of the true and sampled grades. Initially, samples were obtained from the exhaustive data set on regularly spaced grids of 20 × 20 m and 5 × 5 m. A relative error (imprecision) of ±25% and a 10% bias were added to the data spaced at 5 × 5 m (short-term geological data) in different scenarios. The second study is at a gold mine with two different types of data, obtained from diamond drill holes (hard data) and reverse circulation (soft data). To combine these different types of data, two methodologies were investigated, cokriging and ordinary kriging, and both types of data were used to estimate a block model with each methodology. Grade-tonnage curves and swath plots were used to compare the results against the true block grades at the same block support, and block misclassification was evaluated. For the Walker Lake data set, the results show that standardized ordinary cokriging is the better methodology for imprecise and biased data, producing estimates closer to the true block grade distribution and reducing block misclassification. For the data set from the underground gold mine, the samples had moderate correlation and short spatial continuity at small distances; in this situation, the estimates using ordinary kriging with hard and soft data (standardized and rescaled) produced better results, with less bias and better classification of blocks as ore and waste.
8

Wang, Xiang. "Two kriging models, and the expanded readsold package." Ohio University / OhioLINK, 1986. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1183382153.

9

Muré, Joseph. "Objective Bayesian analysis of Kriging models with anisotropic correlation kernel." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC069/document.

Abstract:
A recurring problem in surrogate modelling is the scarcity of available data, which hinders efforts to estimate model parameters. The Bayesian paradigm offers an elegant way to circumvent the problem by describing knowledge of the parameters by a posterior probability distribution instead of a pointwise estimate. However, it involves defining a prior distribution on the parameters, and in the absence of expert opinion, finding an adequate prior can be a trying exercise. The Objective Bayesian school proposes default priors for such situations, like the Berger-Bernardo reference prior. Such a prior was derived by Berger, De Oliveira and Sansó [2001] for the Kriging surrogate model with an isotropic covariance kernel. Directly extending it to anisotropic kernels poses theoretical as well as practical problems, because the reference prior framework requires ordering the parameters, and any ordering would in this case be arbitrary. Instead, an Objective Bayesian solution is proposed for Kriging models with anisotropic covariance kernels, based on conditional reference posterior distributions. This solution is made possible by a theory of compromise between incompatible conditional distributions. The approach is then shown to be compatible with Trans-Gaussian Kriging. It is applied to an industrial case with nonstationary data in order to derive the Probability Of Detection (POD) of defects by non-destructive tests in steam generator tubes of nuclear power plants.
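
As background on the model ingredient studied here, the Python sketch below fits a Gaussian process with an anisotropic kernel (one correlation length per input dimension) using scikit-learn. Note that scikit-learn estimates the length-scales by maximum likelihood, so this is a non-Bayesian baseline rather than the thesis's objective Bayesian reference-posterior approach; the data are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(40, 2))
# Hypothetical response varying much faster in the first input than in the second
y = np.sin(8 * X[:, 0]) + 0.3 * X[:, 1] + 0.05 * rng.normal(size=40)

# Anisotropic kernel: one correlation length per input dimension
kernel = C(1.0) * RBF(length_scale=[1.0, 1.0], length_scale_bounds=(1e-2, 1e2))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

print(gp.kernel_)  # fitted per-dimension length-scales reflect the anisotropy
mu, sd = gp.predict(np.array([[0.5, 0.5]]), return_std=True)
print(f"prediction {mu[0]:.3f} +/- {sd[0]:.3f}")
```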
10

Asritha, Kotha Sri Lakshmi Kamakshi. "Comparing Random forest and Kriging Methods for Surrogate Modeling." Thesis, Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20230.

Abstract:
In design engineering, conducting real experiments to find an optimal design that fulfills all design requirements and constraints is costly. An alternative to real experiments is computer-aided design modeling with computer-simulated experiments, which are conducted to understand functional behavior and to predict possible failure modes in design concepts. However, these simulations may take minutes, hours, or days to finish. In order to reduce the time and the number of simulations required for design space exploration, surrogate modeling is used. The motive of surrogate modeling is to replace the original system with an approximation of the simulations that can be computed quickly. The process of surrogate model generation includes sample selection, model generation, and model evaluation. Using surrogate models in design engineering can help reduce design cycle times and cost by enabling rapid analysis of alternative designs. Selecting a suitable surrogate modeling method for a given function with specific requirements is possible by comparing different surrogate modeling methods; these methods can be compared using different application problems and evaluation metrics. In this thesis, the random forest model and the kriging model are compared based on prediction accuracy, using mathematical test functions. Quantitative experiments were conducted to investigate the performance of the methods. The experimental analysis found that the kriging models have higher accuracy than random forests, while the random forest models have shorter execution times than kriging for the studied mathematical test problems.
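
Below is a minimal Python sketch of the kind of comparison described, using scikit-learn on an invented two-dimensional test function; the function, sample sizes, and model settings are assumptions for illustration rather than the thesis's experimental setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C
from sklearn.metrics import mean_squared_error

def test_function(X):
    """A smooth 2-D mathematical test function standing in for a simulation."""
    return np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1]) + 0.5 * X[:, 0] * X[:, 1]

rng = np.random.default_rng(7)
X_train = rng.uniform(-1, 1, size=(60, 2))   # design-of-experiments sample
X_test = rng.uniform(-1, 1, size=(500, 2))
y_train, y_test = test_function(X_train), test_function(X_test)

models = {
    "random forest": RandomForestRegressor(n_estimators=300, random_state=0),
    "kriging (GP)": GaussianProcessRegressor(C(1.0) * RBF([1.0, 1.0]), normalize_y=True),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
    print(f"{name:>14}: RMSE = {rmse:.4f}")
```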
11

Silva, Abel Brasil Ramos da. "Estimation of curves indifference accessibility via urban models and ordered kriging." Universidade Federal do Ceará, 2013. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=10042.

Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
The process of urbanization, growth of cities, and urban structuring in recent decades in large Brazilian cities has made accessibility a relevant factor in quality of life. In this sense, rigorously analyzing the level of accessibility and welfare of individuals from the moment they leave their homes to the point where they carry out activities or satisfy consumption needs is a matter of great scientific importance that has not yet been explored in a rigorous way. In this dissertation we analyze and model urban accessibility from a theoretical perspective based on utility maximization and the estimation of econometric models. The study is divided into two lines of research. The first analyzes accessibility using generalized ordered models applied to a new geo-referenced micro data set collected in the city of Fortaleza, Brazil. The results show that variables such as income, car ownership, and distance, among others, are important for explaining the accessibility of individuals. The second line of inquiry proposes and develops, in a pioneering way, a spatial utility surface by means of kriging techniques. The results show that the distance between home and destination has a very heterogeneous relationship with accessibility, revealing a spatial pattern strongly influenced by the city's economic inequality. This result calls into question simplistic traditional assumptions of a linear or polynomial relationship between distance and accessibility.
12

Wang, Zeyu. "Reliability Analysis and Updating with Meta-models: An Adaptive Kriging-Based Approach." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1574789534726544.

13

Bean, Brennan L. "Interval-Valued Kriging Models with Applications in Design Ground Snow Load Prediction." DigitalCommons@USU, 2019. https://digitalcommons.usu.edu/etd/7579.

Abstract:
One critical consideration in the design of buildings constructed in the western United States is the weight of settled snow on the roof of the structure. Engineers are tasked with selecting a design snow load that ensures that the building is safe and reliable, without making the construction overly expensive. Western states use historical snow records at weather stations scattered throughout the region to estimate appropriate design snow loads. Various mapping techniques are then used to predict design snow loads between the weather stations. Each state uses different mapping techniques to create their snow load requirements, yet these different techniques have never been compared. In addition, none of the current mapping techniques can account for the uncertainty in the design snow load estimates. We address both issues by formally comparing the existing mapping techniques, as well as creating a new mapping technique that allows the estimated design snow loads to be represented as an interval of values, rather than a single value. In the process, we have improved upon existing methods for creating design snow load requirements and have produced a new tool capable of handling uncertain climate data.
14

Johansson, Björn. "Statistical Methods for Mineral Models of Drill Cores." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279848.

Abstract:
In the modern mining industry, new resource-efficient and climate-resilient methods have been gaining traction, and commissioned efforts to improve the efficiency of European mining are further advancing such goals. Orexplore AB's X-ray technology for analyzing drill cores is currently involved in two such projects. Orexplore AB wishes to incorporate geostatistics (spatial statistics) into its analysis process in order to extract more information from the mineral data. The geostatistical method implemented here is ordinary kriging, an interpolation method that, given measured data, predicts intermediate values governed by prior covariance models. Ordinary kriging facilitates prediction of mineral concentrations on a continuous grid in 1-D up to 3-D; intermediate values are predicted on a Gaussian process regression line, governed by prior covariances. The covariance is modeled by fitting a model to a calculated experimental variogram. Mineral concentrations are available along the lateral surface of the drill core, and ordinary kriging is implemented to sequentially predict mineral concentrations on shorter sections of the core, one mineral at a time. Interpolation of mineral concentrations is performed on the data considered in 1-D and 3-D. Validation is performed by calculating the corresponding density at each section where concentrations are predicted and comparing each such value to measured densities. The performance of the model is evaluated by subjective visual assessment of the fit of the interpolation line and its smoothness, together with the variance; in addition, the fit is tested through cross-validation using different metrics that evaluate the variance and prediction errors of different models. The results show that this method accurately reproduces the measured concentrations and performs well according to the above-mentioned metrics, but does not outperform the measured concentrations when evaluated against the measured densities. However, the method was successful in providing information about the minerals in the drill core by producing mineral concentrations on a continuous grid, and it also produced mineral concentrations in 3-D that reproduced the measured densities well. It can be concluded that ordinary kriging, implemented according to the methodology described in this report, efficiently produces mineral concentrations that can be used to obtain information about the distribution of concentrations in the interior of the drill core.
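
The first step mentioned above, computing an experimental variogram before fitting a covariance model, can be sketched in Python as follows for a 1-D sequence of concentrations along a core; the depths and concentrations are invented.

```python
import numpy as np

def experimental_variogram(x, z, lag, n_lags):
    """Classical (Matheron) semivariogram estimator for 1-D positions x and values z."""
    x, z = np.asarray(x, float), np.asarray(z, float)
    h = np.abs(x[:, None] - x[None, :])           # pairwise separation distances
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2     # half squared differences
    centers, gammas = [], []
    for k in range(n_lags):
        mask = (h > k * lag) & (h <= (k + 1) * lag)
        if mask.any():
            centers.append((k + 0.5) * lag)
            gammas.append(sq[mask].mean())
    return np.array(centers), np.array(gammas)

# Hypothetical mineral concentrations (%) sampled every 2 cm along a core section
rng = np.random.default_rng(3)
depth = np.arange(0, 100, 2.0)
conc = 5 + np.cumsum(rng.normal(scale=0.15, size=depth.size))  # spatially correlated toy signal
lags, gamma = experimental_variogram(depth, conc, lag=4.0, n_lags=12)
for h_c, g in zip(lags, gamma):
    print(f"lag {h_c:5.1f} cm: semivariance {g:.3f}")
```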
15

Lloyd, Christopher David. "Non-stationary models for optimal sampling and mapping of terrain in Great Britain." Thesis, University of Southampton, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.323957.

16

Asfaw, Zeytu Gashaw. "Inference and Prediction in Non-stationary Stochastic Models: Survival Analysis and Kriging Interpolation." Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for matematiske fag, 2014. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-25982.

17

Yin, Hong. "Kriging model approach to modeling study on relationship between molecular quantitative structures and chemical properties." HKBU Institutional Repository, 2005. http://repository.hkbu.edu.hk/etd_ra/598.

18

Arnoult, Guillaume. "Modélisation de la trajectoire d'un projectile gyrostabilisé muni d'un dispositif de contrôle." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPAST068.

Abstract:
The intensification of urban combat is encouraging manufacturers of ground armament to develop new weapon systems equipped with trajectory-correction devices. Deploying such a device during the flight of an artillery projectile would reduce the dispersion error and thereby limit the risk of collateral damage. The challenge lies in developing a device adapted to the flight conditions of a spin-stabilized projectile. An isolated spoiler, mounted on a freely rotating ring, is identified as the most suitable control device. This work consists of developing an optimization algorithm for the geometric parameters of the spoiler and demonstrating that the spoiler has sufficient authority to modify both the range and the lateral deviation of the projectile. On the one hand, a neural network models the variations of the spoiler's aerodynamic coefficients from RANS simulation results. On the other hand, kriging modeling of the objective and constraint functions takes advantage of the estimate of the modeling error, which allows enrichment criteria to be defined that ensure a trade-off between exploration and exploitation of the domain defined by the geometric parameters. Applying the optimization algorithm to the sizing of the spoiler identified an optimal geometric configuration that satisfies the study's trajectory-correction objectives. ZDES simulations of this particular configuration were carried out to provide a higher level of fidelity than the RANS evaluations of the aerodynamic coefficients, and they also give a physical characterization of the modifications of the base flow induced by the presence of the spoiler. A wind tunnel campaign validates the methodological approach developed in this work and opens perspectives for future work on including experimental data in a numerical database within multi-fidelity surrogate models.
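
The abstract describes kriging-based enrichment criteria that trade off exploration against exploitation. A standard example of such a criterion is expected improvement, sketched below in Python with a scikit-learn Gaussian process on an invented one-dimensional objective; this is an illustrative stand-in, not the thesis's aerodynamic model or its exact criterion.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

def expected_improvement(gp, X, y_best):
    """Expected improvement of candidate points X over the current best (minimization)."""
    mu, sd = gp.predict(X, return_std=True)
    sd = np.maximum(sd, 1e-12)
    z = (y_best - mu) / sd
    return (y_best - mu) * norm.cdf(z) + sd * norm.pdf(z)

# Invented objective standing in for an expensive trajectory/CFD evaluation
f = lambda x: np.sin(3 * x) + 0.3 * x ** 2

X_obs = np.array([[0.2], [1.0], [2.4], [3.5]])
y_obs = f(X_obs).ravel()
gp = GaussianProcessRegressor(C(1.0) * RBF(0.5), normalize_y=True).fit(X_obs, y_obs)

X_cand = np.linspace(0, 4, 401)[:, None]
ei = expected_improvement(gp, X_cand, y_obs.min())
x_next = X_cand[np.argmax(ei)]  # candidate that best balances exploration and exploitation
print(f"next design point to evaluate: x = {x_next[0]:.3f}, EI = {ei.max():.4f}")
```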
19

Hamad, Rahel. "GIS i kommunal verksamhetsriskanalys vid planering av grundvattenmagasin." Thesis, Linköping University, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6394.

Abstract:

No liquid can replace water, and without water no life can exist. The physical properties and chemical conditions of the soil govern the spread of the contaminants that occur in soil and water.

Katrineholm Municipality wants to be able to use the groundwater reservoirs in Forssjö in the future. A number of observation wells have been placed in Forssjö, which is located about 8 km southeast of Katrineholm. At present GIS is not used in the municipality, which made me interested in what possibilities GIS could offer, starting from the municipality's well database of water quality measurements.

This thesis consists of two parts. The first part examines the risk of contamination from the section of road 52 between Katrineholm and Nyköping that crosses the Katrineholm esker southeast of Katrineholm. The risk map showed that the soil layers covering the aquifer, the Katrineholm esker, do not provide natural protection of the groundwater against infiltrating contaminants. The method used to assess the risk of contaminated groundwater is the hydrogeological vulnerability model developed by Lena Maxe and Per-Olof Johansson in Bedömning av grundvattnets sårbarhet (1998).

Katrineholm Municipality has complete data on soil types and an excellent well database. A good tool is needed to investigate the risk of contamination of the aquifer in the Katrineholm esker from road 52, and this work shows how and in what ways GIS can be used for this. During the work, contact was made with the Swedish Rescue Services Agency in Karlstad, through which much valuable information was obtained, for example how flow-through velocities are calculated for contaminants and which programs the agency uses to assess risks from spills and releases of chemicals.

In the second part of the work, I examined which method, kriging or cokriging, is best suited for interpolating the situation between the measurement points in geostatistical analyses. To find the best-fitting model I used the equation error = r = v̂ − v, where the goal is a model whose error is as close to zero as possible.

Call-outs for releases of petrol, diesel and other petroleum products accounted for 75% of the cases according to the Rescue Services Agency's response statistics for 2000-2003. These substances belong to the group of chemicals known as NAPLs (Non-Aqueous Phase Liquids). In this work I concentrated on how a release of this group of chemicals on road 52 could contaminate the aquifer in the Katrineholm esker.

20

Edwards, Adam Michael. "Precision Aggregated Local Models." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/102125.

Abstract:
Large scale Gaussian process (GP) regression is infeasible for larger data sets due to the cubic scaling of flops and quadratic storage involved in working with covariance matrices. Remedies in recent literature focus on divide-and-conquer, e.g., partitioning into sub-problems and inducing functional (and thus computational) independence. Such approximations can be speedy, accurate, and sometimes even more flexible than an ordinary GP. However, a big downside is loss of continuity at partition boundaries. Modern methods like local approximate GPs (LAGPs) imply effectively infinite partitioning and are thus pathologically good and bad in this regard. Model averaging, an alternative to divide-and-conquer, can maintain absolute continuity but often over-smooths, diminishing accuracy. Here I propose putting LAGP-like methods into a local experts-like framework, blending partition-based speed with model-averaging continuity, as a flagship example of what I call precision aggregated local models (PALM). Using N_C LAGPs, each selecting n from N data pairs, I illustrate a scheme that is at most cubic in n, quadratic in N_C, and linear in N, drastically reducing computational and storage demands. Extensive empirical illustration shows how PALM is at least as accurate as LAGP, can be much faster, and furnishes continuous predictive surfaces. Finally, I propose a sequential updating scheme which greedily refines a PALM predictor up to a computational budget, and several variations on the basic PALM that may provide predictive improvements.
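
As a minimal Python sketch of the local-GP idea that PALM builds on, the code below fits a small Gaussian process to the nearest neighbours of each prediction location only. It illustrates the divide-and-conquer ingredient, not the PALM aggregation scheme itself; the data, neighbourhood size, and kernel are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(42)
X = rng.uniform(-2, 2, size=(5000, 2))                  # large training set
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.01 * rng.normal(size=len(X))

def local_gp_predict(X_train, y_train, X_new, n_local=50):
    """Fit a small GP to the n_local nearest neighbours of each prediction point."""
    nn = NearestNeighbors(n_neighbors=n_local).fit(X_train)
    preds = np.empty(len(X_new))
    for i, x in enumerate(X_new):
        _, idx = nn.kneighbors(x[None, :])
        gp = GaussianProcessRegressor(C(1.0) * RBF(1.0), normalize_y=True)
        gp.fit(X_train[idx[0]], y_train[idx[0]])        # local sub-problem only
        preds[i] = gp.predict(x[None, :])[0]
    return preds

X_new = np.array([[0.0, 0.0], [1.5, -1.0]])
print(local_gp_predict(X, y, X_new).round(3))
```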
Doctor of Philosophy
Occasionally, when describing the relationship between two variables, it may be helpful to use a so-called "non-parametric" regression that is agnostic to the function that connects them. Gaussian Processes (GPs) are a popular method of non-parametric regression used for their relative flexibility and interpretability, but they have the unfortunate drawback of being computationally infeasible for large data sets. Past work on solving the scaling issues for GPs has focused on "divide and conquer" style schemes that spread the data out across multiple smaller GP models. While these models make GP methods much more accessible to large data sets, they do so at the expense of either local predictive accuracy or global surface continuity. Precision Aggregated Local Models (PALM) is a novel divide-and-conquer method for GP models that is scalable for large data while maintaining local accuracy and a smooth global model. I demonstrate that PALM can be built quickly and performs well predictively compared to other state-of-the-art methods. This document also provides a sequential algorithm for selecting the location of each local model, and variations on the basic PALM methodology.
21

Quirante, Natalia. "Rigorous Design of Chemical Processes: Surrogate Models and Sustainable Integration." Doctoral thesis, Universidad de Alicante, 2017. http://hdl.handle.net/10045/74373.

Abstract:
The development of efficient chemical processes, from both an economic and an environmental point of view, is one of the main objectives of Chemical Engineering. To this end, advanced tools for the design, simulation, optimization, and synthesis of chemical processes have been employed in recent years, making it possible to obtain more efficient processes with the lowest possible environmental impact. One of the most important aspects to consider in designing more efficient processes is reducing energy consumption. The energy consumption of the industrial sector worldwide represents approximately 22.2% of total energy consumption, and within this sector the chemical industry accounts for around 27%. Therefore, the global energy consumption of the chemical industry constitutes approximately 6% of all the energy consumed in the world. Moreover, considering that most of the energy consumed is generated mainly from fossil fuels, any improvement of chemical processes that reduces energy consumption will reduce environmental impact. The work collected in this doctoral thesis was carried out within the COnCEPT research group, belonging to the University Institute of Chemical Process Engineering of the University of Alicante, between 2014 and 2017. The main objective of this doctoral thesis is the development of tools and models for the simulation and optimization of chemical processes in order to improve their energy efficiency, thereby reducing their environmental impact. More specifically, this thesis comprises two main studies, which are its specific objectives: the study and evaluation of surrogate models for improving simulator-based optimization of chemical processes, and the development of new models for simultaneous chemical process optimization and energy integration for heat exchanger networks.
22

Bueno, Márcio Eduardo Boeira. "Mapeamento da variabilidade e análise espacial de atributos de qualidade físico-químicos dos frutos em pós-colheita e atributos de vigor da planta nas variedades Maxi Gala e Fuji Moore sobre pomar comercial em Vacaria /RS." Universidade do Estado de Santa Catarina, 2013. http://tede.udesc.br/handle/handle/1174.

Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The cost of apple production has been increasing over the years, while international and domestic selling prices for the fruit are falling. Given this scenario, growers need better technification, and precision agriculture (PA) is an indispensable tool for adding information to producers' decision-making. This study aimed to use PA concepts to produce maps of fruit quality, of plant vigor management, and of regionalized yield in apple orchards. The experiment was conducted in two areas of 0.90 and 1.44 ha of commercial production of the Maxi Gala and Fuji Moore varieties, respectively, at the São Paulino farm of the RASIP company in Vacaria, RS. The post-harvest physico-chemical fruit quality attributes evaluated were: mean fruit diameter (CMF), pulp firmness (FP), number of seeds per fruit (NSF), and total soluble solids (TSS). The plant vigor attributes evaluated were: canopy volume (VC), stem diameter (DC), and fertility index (IF); production (P) was also evaluated. Seventy-five samples were collected for each measured parameter on a 12x10 m grid for the Maxi Gala variety and a 16x12 m grid for the Fuji Moore variety. Descriptive statistics and spatial analysis via semivariograms were performed, and the fitted semivariogram models were used for interpolation by kriging. Simple correlations between parameters were then computed, and for pairs with strong correlation (≥ 0.50) cross-semivariograms were built and interpolation was performed by cokriging. All measured parameters of the post-harvest physico-chemical fruit attributes, plant vigor attributes, and production of the Maxi Gala and Fuji Moore varieties in the 2011 and 2012 harvests showed spatial variability. The 12x10 m sampling grid for the Maxi Gala variety proved adequate, since only the number of fruits per plant (NFP) in the 2011 harvest showed a range shorter than the grid spacing. The 16x12 m grid for the Fuji Moore variety was less adequate for the number of seeds per fruit (NSF), pulp firmness (FP), and canopy volume (VC), which showed ranges shorter than the grid in both harvests. The thematic maps of the evaluated parameters allowed theoretical management units to be drawn up, as no class-division recommendations were found for the varieties studied. Production (P) versus stem diameter (DC) showed a strong correlation for the Maxi Gala variety in the 2011 harvest, and canopy volume (VC) versus stem diameter (DC) showed strong correlations in both the 2011 and 2012 harvests. Thus, cokriging allowed the collection of 15 production samples to be saved in 2011 and of 20 canopy volume samples in 2012. For the Fuji Moore variety, canopy volume (VC) versus stem diameter (DC) showed strong correlations in 2011 and 2012; in 2011 the parameters showed spatial dependence with the same number of samples under cokriging, whereas in 2012 this dependence was not observed.
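
The cross-semivariogram step mentioned above, computed for pairs of attributes with strong correlation (≥ 0.50) before cokriging, can be sketched in Python as follows; the grid spacing echoes the 12 x 10 m mesh, but all values are invented.

```python
import numpy as np

def cross_variogram(coords, z1, z2, lag, n_lags):
    """Experimental cross-semivariogram between two co-located variables."""
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cross = 0.5 * (z1[:, None] - z1[None, :]) * (z2[:, None] - z2[None, :])
    centers, gammas = [], []
    for k in range(n_lags):
        mask = (h > k * lag) & (h <= (k + 1) * lag)
        if mask.any():
            centers.append((k + 0.5) * lag)
            gammas.append(cross[mask].mean())
    return np.array(centers), np.array(gammas)

# Invented co-located measurements, e.g. production and stem diameter on a 12 x 10 m grid
rng = np.random.default_rng(5)
gx, gy = np.meshgrid(np.arange(0, 120, 12.0), np.arange(0, 100, 10.0))
coords = np.column_stack([gx.ravel(), gy.ravel()])
trend = np.sin(coords[:, 0] / 30) + np.cos(coords[:, 1] / 25)   # shared spatial structure
prod = 30 + 5 * trend + rng.normal(scale=0.5, size=len(coords))
stem = 80 + 8 * trend + rng.normal(scale=1.0, size=len(coords))
lags, gamma = cross_variogram(coords, prod, stem, lag=15.0, n_lags=6)
print(np.column_stack([lags, gamma]).round(2))
```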
23

Aidoo, Eric. "Geostatistical modelling of recreational fishing data: A fine-scale spatial analysis." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2016. https://ro.ecu.edu.au/theses/1813.

Full text
Abstract:
The sustainability of recreational fisheries resources relies on effective management of the fishery, which includes monitoring of any changes in the fishery. In order to facilitate the ongoing management of the recreational fishery, an understanding of the spatial dynamics of catch per unit effort (catch rate), fishing effort and species diversity is important for fishery managers to make area-specific decisions and to develop strategies for ecosystem-based fisheries management. These indices are critical components of information used to report on recreational fishing activities and to evaluate changes in the fishery resources. Geostatistical techniques such as kriging can provide useful tools for characterising the spatial distributions of these indices. However, most recreational fishing data are highly skewed, zero-inflated and, when expressed as ratios, affected by the small number problem, which can influence estimates obtained from traditional kriging. In addition, the use of recreational fishing data obtained through surveys may influence mapping and area-specific decisions as such data are associated with uncertainty. In Western Australia, recreational fishing has a participation rate of approximately 30%. Data for this thesis were collected from boat-based recreational fishers through phone-diary surveys at a spatial resolution that supports spatial analysis and mapping through geostatistical techniques. In this thesis, geostatistical modelling techniques were used to analyse these recreational fishing data in the West Coast Bioregion of Australia, with the development and evaluation of a data transformation approach that takes into account data characteristics and uncertainty. As a first step in the analysis, a suitable kriging estimator for recreational fishing data was determined. This was based on the application of ordinary, ordinary indicator and Poisson kriging estimators for seven aquatic species with different behaviours and distribution patterns. Some of these estimators can handle different distribution properties including high skewness, zero-inflation and small number problems. In general, indicator kriging performed consistently across species with different life-history characteristics and distribution patterns and provided accurate estimates of catch rates for most of those species. To evaluate the incorporation of measurement uncertainty, the study presents a soft indicator kriging approach that uses a logistic function transformation, combined with probability field simulation to determine the effect of measurement uncertainty on mapping and fishing area delineation. The results suggest that the incorporation of measurement uncertainty improves the ability to draw valid conclusions about the estimation results, which may influence any decision regarding the delineation of areas with high catch rates for spatial management. The recreational fishing data used also provided the basis for studying the spatial patterns in species diversity in the entire fishery. The analysis revealed that species diversity, dominance and evenness display similar spatial patterns on a global scale. The study highlighted the inherent spatial variability in catch rate, fishing effort and species diversity, illustrating areas with high values, or hotspots, of these indices. This statistical modelling approach is important as it allows prediction of these indices in specific locations while taking into account data characteristics and uncertainty.
The estimated maps are important for supporting fishery resources management.
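As a generic aside, the ordinary-kriging system that underlies estimators such as indicator kriging can be sketched as follows. The exponential variogram parameters, catch-rate values and cutoff below are hypothetical illustrations, not the estimators or data of the thesis.

```python
import numpy as np

def exp_variogram(h, nugget=0.1, sill=0.3, range_=20.0):
    # Exponential semivariogram model (illustrative parameters); gamma(0) = 0.
    gamma = nugget + (sill - nugget) * (1.0 - np.exp(-h / range_))
    return np.where(h == 0, 0.0, gamma)

def ok_weights(coords, target, variogram):
    """Ordinary-kriging weights for one target location (semivariogram form)."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, :n] = A[:n, n] = 1.0          # unbiasedness constraint
    A[n, n] = 0.0
    b = np.append(variogram(np.linalg.norm(coords - target, axis=1)), 1.0)
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n]              # weights and Lagrange multiplier

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(30, 2))
catch_rate = rng.gamma(shape=1.2, scale=2.0, size=30)     # skewed, illustrative
indicator = (catch_rate > 2.0).astype(float)              # indicator transform at a cutoff

w, mu = ok_weights(coords, np.array([50.0, 50.0]), exp_variogram)
prob_above_cutoff = float(np.clip(w @ indicator, 0.0, 1.0))
print(prob_above_cutoff)
```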
APA, Harvard, Vancouver, ISO, and other styles
24

Liu, Xuyuan. "Statistical validation and calibration of computer models." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39478.

Full text
Abstract:
This thesis deals with modeling, validation and calibration problems in experiments with computer models. Computer models are mathematical representations of real systems developed for understanding and investigating the systems. Before a computer model is used, it often needs to be validated by comparing the computer outputs with physical observations and calibrated by adjusting internal model parameters in order to improve the agreement between the computer outputs and physical observations. As computer models become more powerful and popular, the complexity of input and output data raises new computational challenges and stimulates the development of novel statistical modeling methods. One challenge is to deal with computer models with random inputs (random effects). This kind of computer model is very common in engineering applications. For example, in a thermal experiment at the Sandia National Lab (Dowding et al. 2008), the volumetric heat capacity and thermal conductivity are random input variables. If input variables are randomly sampled from particular distributions with unknown parameters, the existing methods in the literature are not directly applicable. The reason is that integration over the random variable distribution is needed for the joint likelihood and the integration cannot always be expressed in a closed form. In this research, we propose a new approach which combines the nonlinear mixed effects model and the Gaussian process model (Kriging model). Different model formulations are also studied to gain a better understanding of validation and calibration activities by using the thermal problem. Another challenge comes from computer models with functional outputs. While many methods have been developed for modeling computer experiments with a single response, the literature on modeling computer experiments with functional responses is sketchy. Dimension reduction techniques can be used to overcome the complexity problem of functional responses; however, they generally involve two steps. Models are first fit at each individual setting of the input to reduce the dimensionality of the functional data. Then the estimated parameters of the models are treated as new responses, which are further modeled for prediction. Alternatively, pointwise models are first constructed at each time point and then functional curves are fit to the parameter estimates obtained from the fitted models. In this research, we first propose a functional regression model to relate functional responses to both design and time variables in one single step. Secondly, we propose a functional kriging model which uses variable selection methods by imposing a penalty function. We show that the proposed model performs better than dimension reduction based approaches and the kriging model without regularization. In addition, non-asymptotic theoretical bounds on the estimation error are presented.
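For orientation, a minimal Gaussian-process (kriging) regression sketch with a squared-exponential kernel is given below. The data and hyperparameters are hypothetical; this illustrates the generic predictor only, not the mixed-effects or functional models developed in the thesis.

```python
import numpy as np

def sq_exp_kernel(X1, X2, length=1.0, var=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

def gp_predict(X, y, Xnew, noise=1e-6, length=1.0, var=1.0):
    """Posterior mean and variance of a zero-mean GP (simple kriging)."""
    K = sq_exp_kernel(X, X, length, var) + noise * np.eye(len(X))
    Ks = sq_exp_kernel(Xnew, X, length, var)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var_new = var - (v ** 2).sum(0)
    return mean, var_new

X = np.linspace(0, 1, 8)[:, None]       # hypothetical computer-experiment inputs
y = np.sin(2 * np.pi * X[:, 0])         # hypothetical outputs
Xnew = np.linspace(0, 1, 5)[:, None]
mean, variance = gp_predict(X, y, Xnew, length=0.2)
print(mean, variance)
```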
APA, Harvard, Vancouver, ISO, and other styles
25

Ambachtsheer, Pamela. "Combined Use of Models and Measurements for Spatial Mapping of Concentrations and Deposition of Pollutants." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/1261.

Full text
Abstract:
When modelling pollutants in the atmosphere, it is nearly impossible to get perfect results as the chemical and mechanical processes that govern pollutant concentrations are complex. Results are dependent on the quality of the meteorological input as well as the emissions inventory used to run the model. Also, models cannot currently take every process into consideration. Therefore, the model may get results that are close to, or show the general trend of, the observed values, but are not perfect. However, due to the lack of observation stations, the resolution of the observational data is poor. Furthermore, the chemistry over large bodies of water is different from land chemistry, and in North America there are no stations located over the Great Lakes or the ocean. Consequently, the observed values cannot accurately cover these regions. Therefore, we have combined model output and observational data when studying ozone concentrations in northeastern North America. We did this by correcting model output at observational sites with local data. We then interpolated those corrections across the model grid, using a Kriging procedure, to produce results that have the resolution of model results with the local accuracy of the observed values. Results showed that the corrected model output is much improved over either model results or observed values alone. This improvement was observed both for sites that were used in the correction process and for sites that were omitted from it.
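The correct-then-interpolate idea can be sketched schematically as below. All arrays are hypothetical, and inverse-distance weighting stands in for the Kriging step used in the thesis, which is not reproduced here.

```python
import numpy as np

def idw(obs_xy, values, grid_xy, power=2.0, eps=1e-9):
    # Inverse-distance weighting, used here as a simple stand-in for kriging.
    d = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=-1) + eps
    w = 1.0 / d ** power
    return (w * values).sum(1) / w.sum(1)

rng = np.random.default_rng(2)
grid_xy = np.stack(np.meshgrid(np.arange(10.0), np.arange(10.0)), -1).reshape(-1, 2)
model_ozone = 40 + 0.5 * grid_xy[:, 0]                         # hypothetical model field (ppb)
station_idx = rng.choice(len(grid_xy), 12, replace=False)
obs_ozone = model_ozone[station_idx] + rng.normal(3, 2, 12)    # observed at stations

# 1. correction at stations, 2. spatial interpolation of the correction,
# 3. corrected field = model field + interpolated correction.
correction = obs_ozone - model_ozone[station_idx]
corrected = model_ozone + idw(grid_xy[station_idx], correction, grid_xy)
print(corrected[:5])
```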
APA, Harvard, Vancouver, ISO, and other styles
26

Höfler, Veit, Christine Wessollek, and Pierre Karrasch. "Modelling prehistoric terrain Models using LiDAR-data: A geomorphological approach." SPIE, 2015. https://tud.qucosa.de/id/qucosa%3A35056.

Full text
Abstract:
Terrain surfaces conserve human activities in terms of textures and structures. With reference to archaeological questions, the geological archive is investigated by means of models regarding anthropogenic traces. In doing so, the high-resolution digital terrain model is of inestimable value for decoding the archive. The evaluation of these terrain models and the reconstruction of historical surfaces is still a challenging issue. Despite data collection by means of LiDAR systems (light detection and ranging) and subsequent pre-processing and filtering, recent anthropogenic artefacts are still present in the digital terrain model. Analyses have shown that elements such as contour lines and channels can be extracted well from a high-resolution digital terrain model. Channels in settlement areas show a clear anthropogenic character, and the same can be observed for contour lines: some contour lines represent a possibly natural ground surface and avoid anthropogenic artefacts, while, comparable to channels, noticeable patterns of contour lines become visible in areas with anthropogenic artefacts. The presented workflow uses functionalities of ArcGIS and the programming language R.¹ The method starts with the extraction of contour lines from the digital terrain model. Through macroscopic analyses based on geomorphological expert knowledge, contour lines are selected that represent the natural geomorphological character of the surface. In a first step, points are determined along each contour line at regular intervals. These points, together with the corresponding height information taken from the original digital terrain model, are saved as a point cloud. Using the program library gstat, a variographic analysis is carried out, followed by a kriging procedure based on it. The result is a digital terrain model, filtered using geomorphological expert knowledge, that shows no human degradation in terms of artefacts, preserves the landscape-genetic character and can be called a prehistoric terrain model.
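The variographic step can be sketched in a few lines of numpy (the thesis itself uses R and gstat). The sampled points, heights and binning below are hypothetical.

```python
import numpy as np

def empirical_semivariogram(coords, z, n_bins=12):
    """Classical (Matheron) semivariogram estimate on distance bins."""
    i, j = np.triu_indices(len(coords), k=1)
    h = np.linalg.norm(coords[i] - coords[j], axis=1)
    sq = 0.5 * (z[i] - z[j]) ** 2
    edges = np.linspace(0, h.max(), n_bins + 1)
    centers, gamma = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (h >= lo) & (h < hi)
        if m.any():
            centers.append(h[m].mean())
            gamma.append(sq[m].mean())
    return np.array(centers), np.array(gamma)

# Hypothetical points sampled along contour lines with terrain heights.
rng = np.random.default_rng(3)
pts = rng.uniform(0, 500, size=(200, 2))
heights = 300 + 0.05 * pts[:, 0] + rng.normal(0, 0.5, 200)
print(empirical_semivariogram(pts, heights))
```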
APA, Harvard, Vancouver, ISO, and other styles
27

Ackerman-Alexeeff, Stacey Elizabeth. "Measurement error in environmental exposures: Statistical implications for spatial air pollution models and gene environment interaction tests." Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:11077.

Full text
Abstract:
Measurement error is an important issue in studies of environmental epidemiology. We considered the effects of measurement error in environmental covariates in several important settings affecting current public health research. Throughout this dissertation, we investigate the impacts of measurement error and consider statistical methodology to fix that error.
APA, Harvard, Vancouver, ISO, and other styles
28

Tresidder, Esmond. "Accelerated optimisation methods for low-carbon building design." Thesis, De Montfort University, 2014. http://hdl.handle.net/2086/10512.

Full text
Abstract:
This thesis presents an analysis of the performance of optimisation using Kriging surrogate models on low-carbon building design problems. Their performance is compared with established genetic algorithms operating without a surrogate on a range of different types of building-design problems. The advantages and disadvantages of a Kriging approach, and their particular relevance to low-carbon building design optimisation, are tested and discussed. Scenarios in which Kriging methods are most likely to be of use, and scenarios where, conversely, they may be disadvantageous compared to other methods for reducing the computational cost of optimisation, such as parallel computing, are highlighted. Kriging is shown to be able, in some cases, to find designs of comparable performance in fewer main-model evaluations than a stand-alone genetic algorithm method. However, this improvement is not robust, and in several cases Kriging required many more main-model evaluations to find comparable designs, especially in the case of design problems with discrete variables, which are common in low-carbon building design. Furthermore, limitations regarding the extent to which Kriging optimisations can be accelerated using parallel computing resources mean that, even in the scenarios in which Kriging showed the greatest advantage, a stand-alone genetic algorithm implemented in parallel would be likely to find comparable designs more quickly. In light of this it is recommended that, for most low-carbon building design problems, a stand-alone genetic algorithm is the most suitable optimisation method. Two novel methods are developed to improve the performance of optimisation algorithms on low-carbon building design problems. The first takes advantage of variables whose impact can be quickly calculated without re-running an expensive dynamic simulation, in order to dramatically increase the number of designs that can be explored within a given computing budget. The second takes advantage of objectives that can be calculated without a dynamic simulation in order to filter out designs that do not meet constraints in those objectives and focus the use of computationally expensive dynamic simulations on feasible designs. Both of these methods show significant improvement over standard methods in terms of the quality of designs found within a given dynamic-simulation budget.
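Kriging-based optimisers typically select the next design to simulate with an acquisition criterion such as expected improvement. The sketch below assumes the predictive mean and standard deviation come from an already fitted Kriging surrogate; all numbers are hypothetical and this is not the thesis' implementation.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI for minimisation, given surrogate mean/std at candidate designs."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Hypothetical surrogate predictions of an emissions objective for 5 candidate designs.
mu = np.array([52.0, 48.5, 50.1, 47.9, 55.3])   # predicted objective
sigma = np.array([1.0, 4.0, 0.5, 0.8, 6.0])     # predictive uncertainty
f_best = 49.0                                    # best simulated design so far
ei = expected_improvement(mu, sigma, f_best)
print(ei, "-> evaluate candidate", int(np.argmax(ei)), "with the full dynamic simulation")
```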
APA, Harvard, Vancouver, ISO, and other styles
29

Sarmah, Dipsikha. "Evaluation of Spatial Interpolation Techniques Built in the Geostatistical Analyst Using Indoor Radon Data for Ohio,USA." University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1350048688.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Ba, Shan. "Multi-layer designs and composite gaussian process models with engineering applications." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44751.

Full text
Abstract:
This thesis consists of three chapters, covering topics in both the design and modeling aspects of computer experiments as well as their engineering applications. The first chapter systematically develops a new class of space-filling designs for computer experiments by splitting two-level factorial designs into multiple layers. The new design is easy to generate, and our numerical study shows that it can have better space-filling properties than the optimal Latin hypercube design. The second chapter proposes a novel modeling approach for approximating computationally expensive functions that are not second-order stationary. The new model is a composite of two Gaussian processes, where the first one captures the smooth global trend and the second one models local details. The new predictor also incorporates a flexible variance model, which makes it more capable of approximating surfaces with varying volatility. The third chapter is devoted to a two-stage sequential strategy which integrates analytical models with finite element simulations for a micromachining process.
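The composite-covariance idea (a smooth global kernel plus a short-lengthscale local one) can be sketched as below. The lengthscales and data are hypothetical, and the flexible variance model of the thesis is not reproduced.

```python
import numpy as np

def rbf(X1, X2, length, var):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

def composite_kernel(X1, X2):
    # Global trend (long lengthscale) + local detail (short lengthscale).
    return rbf(X1, X2, length=2.0, var=1.0) + rbf(X1, X2, length=0.2, var=0.1)

X = np.random.default_rng(4).uniform(0, 5, size=(20, 1))
K = composite_kernel(X, X) + 1e-8 * np.eye(20)   # covariance of the composite GP
print(np.linalg.cholesky(K).shape)               # valid (positive-definite) covariance
```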
APA, Harvard, Vancouver, ISO, and other styles
31

Kleisner, Kristin Marie. "A Spatio-Temporal Analysis of Dolphinfish; Coryphaena hippurus, Abundance in the Western Atlantic: Implications for Stock Assessment of a Data-Limited Pelagic Resource." Scholarly Repository, 2008. http://scholarlyrepository.miami.edu/oa_dissertations/137.

Full text
Abstract:
Dolphinfish (Coryphaena hippurus) is a pelagic species that is ecologically and commercially important in the western Atlantic region. This species has been linked to dominant oceanographic features such as sea surface temperature (SST) frontal regions. This work first explored the linkages between the catch rates of dolphinfish and the oceanography (satellite-derived SST, distance to front calculations, bottom depth and hook depth) using Principal Components Analysis (PCA). It was demonstrated that higher catch rates are found in relation to warmer SST and nearer to frontal regions. This environmental information was then included in standardizations of catch-per-unit-effort (CPUE) indices. It was found that including the satellite-derived SST and distance to front increases the confidence in the index. The second part of this work focused on addressing spatial variability in the catch rate data for a subsection of the sampling area: the Gulf of Mexico region. This study used geostatistical techniques to model and predict spatial abundances of two pelagic species with different habitat utilization patterns: dolphinfish (Coryphaena hippurus) and swordfish (Xiphias gladius). We partitioned catch rates into two components, the probability of encounter, and the abundance, given a positive encounter. We obtained separate variograms and kriged predictions for each component and combined them to give a single density estimate with corresponding variance. By using this two stage approach we were able to detect patterns of spatial autocorrelation that had distinct differences between the two species, likely due to differences in vertical habitat utilization. The patchy distribution of many living resources necessitates a two-stage variogram modeling and prediction process where the probability of encounter and the positive observations are modeled and predicted separately. Such a "geostatistical delta-lognormal" approach to modeling spatial autocorrelation has distinct advantages in allowing the probability of encounter and the abundance, given an encounter to possess separate patterns of autocorrelation and in modeling of severely non-normally distributed data that is plagued by zeros.
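The recombination of the two kriged components in a delta-type approach can be sketched as follows, assuming the two surfaces are combined under an independence assumption. The values are hypothetical and the thesis' variogram modelling is not shown.

```python
import numpy as np

# Hypothetical kriged surfaces on the same grid:
p_encounter = np.array([0.2, 0.6, 0.9])     # probability of a positive catch
mean_positive = np.array([5.0, 3.5, 2.0])   # mean catch rate given an encounter
var_positive = np.array([1.0, 0.8, 0.4])    # variance of catch given an encounter

# Mixture of a point mass at zero and the positive component:
density = p_encounter * mean_positive
variance = p_encounter * var_positive + p_encounter * (1 - p_encounter) * mean_positive**2
print(density, variance)
```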
APA, Harvard, Vancouver, ISO, and other styles
32

Pereira, Taiguã Corrêa. "O desconhecido do pouco conhecido : padrão espacial de riqueza e lacunas de conhecimento em plantas (Fabales: Fabaceae) na caatinga." Universidade Federal de Sergipe, 2016. https://ri.ufs.br/handle/riufs/4475.

Full text
Abstract:
Biodiversity is distributed heterogeneously across the Earth. Although the discussion about which factors determine the spatial patterns of species diversity remains controversial, simply knowing the components of biodiversity is an even bigger challenge in certain regions. Knowing how much still remains to be studied or discovered is therefore fundamental to science, and the lack of knowledge about the geographical distribution of species is recognised as one of the main problems faced in biodiversity research, especially in megadiverse countries like Brazil. Historically, the Caatinga biome has been regarded as one of the most poorly known and least valued with respect to its biodiversity, owing to the erroneous idea that the biome has low diversity and endemism and a high degree of degradation. Considering the dominance of the family Fabaceae in the Caatinga, in both richness and abundance, we investigated the spatial pattern of Fabaceae species richness in the biome, seeking to determine which factors are responsible for the spatial variation in its species richness. Moreover, we built a spatial statistical model for the diversity of Fabaceae in the Caatinga, using the spatial structure of the known assemblages and their environmental determinants, in order to estimate the shortfall of knowledge about the distribution (Wallacean shortfall) of the family in the Caatinga. We obtained 220,781 records, of which fewer than 25% were valid. From these records, we found 1,310 species in 198 genera. The predicted richness varied from 92 to 283 species across space and was best described by the sampling effort, soil properties and topography. From the discrepancy between predicted and observed values of species richness, we estimated the Wallacean shortfall, which reached 192 species in a single locality. The total number of species found in this work represents an expressive increase in the known species richness of the family in the Caatinga. The selection of non-climatic factors as the main predictors of richness indicates the major influence of topography and soil at the regional scale, as well as the importance of the substrate for the establishment of plant communities in the semiarid region. The estimated Wallacean shortfall reveals a chronic and spatially heterogeneous deficiency in the knowledge of the regional flora. The persistence of such expressive gaps in knowledge, together with the reduced coverage of protected areas in the biome, indicates a current risk of significant losses of biological diversity, with serious implications for the conservation of the biome.
APA, Harvard, Vancouver, ISO, and other styles
33

Nachar, Stéphane. "Optimisation de structures viscoplastiques par couplage entre métamodèle multi-fidélité et modèles réduits." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLN051/document.

Full text
Abstract:
Engineering simulation provides the best design products by allowing many design options to be quickly explored and tested, but the fast-time-to-results requirement remains a critical factor in meeting aggressive time-to-market targets. In this context, using a high-fidelity direct resolution solver is not suitable for (virtual) chart generation for engineering design and optimization. Metamodels are commonly considered to explore design options without computing every possibility, but if the behavior is nonlinear, a large amount of data is still required. A possibility is to use further data sources to generate a multi-fidelity surrogate model by using model reduction. Model reduction techniques constitute one of the tools to bypass the limited calculation budget by seeking a solution to a problem on a reduced order basis (ROB). The purpose of the present work is an online method for generating a multi-fidelity metamodel nourished by calculating the quantity of interest from the basis generated on-the-fly within the LATIN-PGD framework for elasto-viscoplastic problems. Low-fidelity fields are obtained by stopping the solver before convergence, and high-fidelity information is obtained with the converged solution. In addition, the solver's ability to reuse information from previously calculated PGD bases is exploited. This manuscript presents the contributions to multi-fidelity metamodels and the LATIN-PGD method with the implementation of a multi-parametric strategy. This coupling strategy was tested on three test cases, for calculation time savings of more than 37x.
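One common way to combine two fidelity levels is an additive/multiplicative bridge y_hi ≈ rho·y_lo + delta fitted on points where both levels are available. The toy sketch below is a generic illustration of that idea with synthetic responses, not the LATIN-PGD coupling itself.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 50)
y_lo = np.sin(6 * x)                          # cheap, approximate response (many points)
idx_hi = np.sort(rng.choice(50, 8, replace=False))
y_hi = 1.2 * np.sin(6 * x[idx_hi]) + 0.3      # expensive, accurate response (few points)

# Fit y_hi ~ rho * y_lo + delta by least squares on the common points.
A = np.column_stack([y_lo[idx_hi], np.ones(len(idx_hi))])
rho, delta = np.linalg.lstsq(A, y_hi, rcond=None)[0]

y_multifid = rho * y_lo + delta               # corrected prediction everywhere
print(rho, delta, np.abs(y_multifid - (1.2 * np.sin(6 * x) + 0.3)).max())
```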
APA, Harvard, Vancouver, ISO, and other styles
34

Lee, Hyung-Jin. "Regional forecasting of hydrologic parameters." Ohio : Ohio University, 1996. http://www.ohiolink.edu/etd/view.cgi?ohiou1178223662.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Boopathy, Komahan. "Uncertainty Quantification and Optimization Under Uncertainty Using Surrogate Models." University of Dayton / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1398302731.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Weighman, Kristi Kay. "Mapping dynamic exposure: constructing GIS models of spatiotemporal heterogeneity in artificial stream systems." Bowling Green State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1555337508685485.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Kang, Lei. "Reduced-Dimension Hierarchical Statistical Models for Spatial and Spatio-Temporal Data." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1259168805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Huang, Fang. "Modeling patterns of small scale spatial variation in soil." Link to electronic thesis, 2006. http://www.wpi.edu/Pubs/ETD/Available/etd-011106-155345/.

Full text
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: spatial variations; nested random effects models; semivariogram models; kriging methods; multiple logistic regression models; missing; multiple imputation. Includes bibliographical references (p. 35-36).
APA, Harvard, Vancouver, ISO, and other styles
39

Lawal, Najib. "Modelling and multivariate data analysis of agricultural systems." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/modelling-and-multivariate-data-analysis-of-agricultural-systems(f6b86e69-5cff-4ffb-a696-418662ecd694).html.

Full text
Abstract:
The broader research area investigated during this programme was conceived from a goal to contribute towards solving the challenge of food security in the 21st century through the reduction of crop loss and minimisation of fungicide use. This is to be achieved through the introduction of an empirical approach to agricultural disease monitoring. In line with this, the SYIELD project, initiated by a consortium involving the University of Manchester and Syngenta, among others, proposed a novel biosensor design that can electrochemically detect viable airborne pathogens by exploiting the biology of plant-pathogen interaction. This approach offers improvement on the inefficient and largely experimental methods currently used. Within this context, this PhD focused on the adoption of multidisciplinary methods to address three key objectives that are central to the success of the SYIELD project: local spore ingress near canopies, the evaluation of a suitable model that can describe spore transport, and multivariate analysis of the potential monitoring network built from these biosensors. The local transport of spores was first investigated by carrying out a field trial experiment at Rothamsted Research UK in order to investigate spore ingress in OSR canopies, generate reliable data for testing the prototype biosensor, and evaluate a trajectory model. During the experiment, spores were air-sampled and quantified using established manual detection methods. Results showed that the manual methods, such as colourimetric detection, are more sensitive than the proposed biosensor, suggesting the proxy measurement mechanism used by the biosensor may not be reliable in live deployments where spores are likely to be contaminated by impurities and other inhibitors of oxalic acid production. Spores quantified using the more reliable quantitative Polymerase Chain Reaction proved informative and provided novel data of high experimental value. The dispersal of these data was found to fit a power decay law, a finding that is consistent with experiments in other crops. In the second area investigated, a 3D backward Lagrangian Stochastic model was parameterised and evaluated with the field trial data. The bLS model, parameterised with Monin-Obukhov Similarity Theory (MOST) variables, showed good agreement with experimental data and compared favourably in terms of performance statistics with a recent application of an LS model in a maize canopy. Results obtained from the model were found to be more accurate above the canopy than below it. This was attributed to a higher error during initialisation of release velocities below the canopy. Overall, the bLS model performed well and demonstrated suitability for adoption in estimating above-canopy spore concentration profiles, which can further be used for designing efficient deployment strategies. The final area of focus was the monitoring of a potential biosensor network. A novel framework based on Multivariate Statistical Process Control concepts was proposed and applied to data from a pollution-monitoring network. The main limitation of traditional MSPC in spatial data applications was identified as a lack of spatial awareness by the PCA model when considering correlation breakdowns caused by an incoming erroneous observation. This resulted in misclassification of healthy measurements as erroneous. The proposed Kriging-augmented MSPC approach was able to incorporate this capability and significantly reduce the number of false alarms.
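A power-decay fit of the kind mentioned above can be reproduced schematically by linear regression in log-log space. The spore counts and distances below are hypothetical, not the field-trial data.

```python
import numpy as np

distance = np.array([1.0, 2.0, 4.0, 8.0, 16.0])        # m from the source (hypothetical)
spores = np.array([950.0, 420.0, 160.0, 70.0, 30.0])   # quantified counts (hypothetical)

# Fit spores = a * distance**(-b)  <=>  log(spores) = log(a) - b * log(distance)
slope, intercept = np.polyfit(np.log(distance), np.log(spores), 1)
a, b = np.exp(intercept), -slope
print(f"fitted decay: {a:.1f} * d^(-{b:.2f})")
```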
APA, Harvard, Vancouver, ISO, and other styles
40

Bilicz, Sandor. "Application of Design-of-Experiment Methods and Surrogate Models in Electromagnetic Nondestructive Evaluation." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00601753.

Full text
Abstract:
Electromagnetic nondestructive evaluation (ENDE) is applied in various domains to explore hidden defects affecting structures. In general terms, the principle can be stated as follows: an unknown object perturbs a given host medium illuminated by a known electromagnetic signal, and the response is measured by one or more receivers at known positions. This response contains information on the electromagnetic and geometrical parameters of the sought objects, and the whole difficulty of the problem addressed here consists in extracting this information from the obtained signal. Better known as an "inverse problem", this work relies on an appropriate solution of Maxwell's equations. The "inverse problem" is often associated with the complementary "forward problem", which consists in determining the perturbed electromagnetic field given all the geometrical and electromagnetic parameters of the configuration, defect included. In practice, this is performed via mathematical modelling and numerical methods allowing the numerical solution of such problems. The corresponding simulators are able to provide highly accurate results, but at a significant computational cost. Since the solution of an inverse problem often requires a large number of successive forward-problem solutions, the inversion becomes very demanding in terms of computing time and resources. To overcome these challenges, "surrogate models" that mimic the exact model can be an attractive alternative. One way to build such surrogate models is to perform a certain number of exact simulations and then approximate the model based on the obtained data. The choice of the simulations ("prototypes") is usually controlled by a strategy drawn from the toolbox of design-of-numerical-experiments methods. In this thesis, the use of surrogate modelling and design-of-experiments techniques for ENDE applications is examined. Three independent approaches are presented in detail: an inversion method based on the optimisation of an objective function, and two more general approaches for building surrogate models using adaptive sampling. The approaches proposed in this thesis are applied to eddy-current ENDE examples.
APA, Harvard, Vancouver, ISO, and other styles
41

Kroetz, Henrique Machado. "Meta-modelagem em confiabilidade estrutural." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/18/18134/tde-08042015-162956/.

Full text
Abstract:
The application of numerical simulations to structural reliability problems is often associated with high computational costs, given the small probability of failure inherent to the structures. Although many cases can be addressed using variance reduction techniques, solving problems involving large number of degrees of freedom, nonlinear and dynamic responses, and problems of optimization in the presence of uncertainties are sometimes still infeasible to solve by this approach. Such problems, however, can be solved by analytical representations that approximate the response that would be obtained with the use of more complex computational models, called meta-models. This work deals with the collection, assimilation, computer programming and comparison of modern meta-modeling techniques in the context of structural reliability, using representations constructed from artificial neural networks, polynomial chaos expansions and Kriging. These techniques are implemented in the computer program StRAnD - Structural Reliability Analysis and Design, developed at the Department of Structural Engineering, USP; thus resulting in a permanent benefit to structural reliability analysis at the University of São Paulo.
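Once a meta-model of the limit-state function has been fitted, it is typically used inside Monte Carlo to estimate the failure probability. The sketch below uses a hypothetical surrogate and input distributions; it illustrates the generic use, not the StRAnD implementation.

```python
import numpy as np
from scipy.stats import norm

def g_hat(x):
    # Hypothetical meta-model of the limit-state function: failure when g_hat < 0.
    return 3.0 - x[:, 0] - 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(6)
n = 200_000
u = rng.normal(size=(n, 2))             # standard-normal input variables
pf = float((g_hat(u) < 0.0).mean())     # Monte Carlo failure probability
beta = -norm.ppf(pf)                    # corresponding reliability index
cov = np.sqrt((1 - pf) / (pf * n))      # coefficient of variation of the estimate
print(pf, beta, cov)
```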
APA, Harvard, Vancouver, ISO, and other styles
42

Falk, Matthew Gregory. "Incorporating uncertainty in environmental models informed by imagery." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/33235/1/Matthew_Falk_Thesis.pdf.

Full text
Abstract:
In this thesis, the issue of incorporating uncertainty for environmental modelling informed by imagery is explored by considering uncertainty in deterministic modelling, measurement uncertainty and uncertainty in image composition. Incorporating uncertainty in deterministic modelling is extended for use with imagery using the Bayesian melding approach. In the application presented, slope steepness is shown to be the main contributor to total uncertainty in the Revised Universal Soil Loss Equation. A spatial sampling procedure is also proposed to assist in implementing Bayesian melding given the increased data size with models informed by imagery. Measurement error models are another approach to incorporating uncertainty when data is informed by imagery. These models for measurement uncertainty, considered in a Bayesian conditional independence framework, are applied to ecological data generated from imagery. The models are shown to be appropriate and useful in certain situations. Measurement uncertainty is also considered in the context of change detection when two images are not co-registered. An approach for detecting change in two successive images is proposed that is not affected by registration. The procedure uses the Kolmogorov-Smirnov test on homogeneous segments of an image to detect change, with the homogeneous segments determined using a Bayesian mixture model of pixel values. Using the mixture model to segment an image also allows for uncertainty in the composition of an image. This thesis concludes by comparing several different Bayesian image segmentation approaches that allow for uncertainty regarding the allocation of pixels to different ground components. Each segmentation approach is applied to a data set of chlorophyll values and shown to have different benefits and drawbacks depending on the aims of the analysis.
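The segment-wise Kolmogorov-Smirnov comparison can be sketched as below. Random pixel values stand in for the two images, and the Bayesian mixture segmentation is not reproduced.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
segments = {                      # pixel values per homogeneous segment, at both dates
    "seg_1": (rng.normal(0.30, 0.05, 400), rng.normal(0.30, 0.05, 380)),
    "seg_2": (rng.normal(0.30, 0.05, 500), rng.normal(0.45, 0.05, 510)),  # changed
}
for name, (img1_vals, img2_vals) in segments.items():
    stat, p = ks_2samp(img1_vals, img2_vals)
    print(name, "change detected" if p < 0.01 else "no change", f"(p={p:.3g})")
```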
APA, Harvard, Vancouver, ISO, and other styles
43

Thenon, Arthur. "Utilisation de méta-modèles multi-fidélité pour l'optimisation de la production des réservoirs." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066100/document.

Full text
Abstract:
Performing flow simulations on numerical models representative of oil deposits is usually a time-consuming task in reservoir engineering. The substitution of a meta-model, a mathematical approximation, for the flow simulator is thus a common practice to reduce the number of calls to the flow simulator. It makes it possible to consider applications such as sensitivity analysis, history matching, production estimation and optimization. This thesis studies meta-models able to integrate simulations performed at different levels of accuracy, for instance on reservoir models with various grid resolutions. The goal is to speed up the building of a predictive meta-model by balancing a few expensive but accurate simulations with numerous cheap but approximate ones. Multi-fidelity meta-models, based on co-kriging, are thus compared to kriging meta-models for approximating different flow simulation outputs. To deal with vectorial outputs without building a meta-model for each component of the vector, the outputs can be split on a reduced basis using principal component analysis. Only a few meta-models are then needed to approximate the main coefficients in the new basis. An extension of this approach to the multi-fidelity context is proposed. In addition, it can provide an efficient meta-modelling of the objective function when used to approximate each production response involved in the objective function definition. The proposed methods are tested on two synthetic cases derived from the PUNQ-S3 and Brugge benchmark cases. Finally, sequential design algorithms are introduced to speed up the meta-modelling process and exploit the multi-fidelity approach.
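The reduced-basis step for vectorial outputs can be sketched as below with synthetic production curves; kriging (or co-kriging) meta-models would then be fitted to each retained coefficient separately, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(8)
t = np.linspace(0, 1, 120)
# Hypothetical ensemble of simulated production curves (one row per reservoir model).
curves = np.array([
    (1 + 0.3 * rng.normal()) * np.cumsum(np.exp(-3 * t)) + 0.2 * rng.normal() * t
    for _ in range(40)
])

mean_curve = curves.mean(0)
U, s, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
k = 3                                    # keep the leading components
coeffs = U[:, :k] * s[:k]                # one small coefficient vector per simulation
# A meta-model would now be fitted to each of the k columns of `coeffs` as a function
# of the reservoir parameters; curves are rebuilt from predicted coefficients with:
reconstructed = mean_curve + coeffs @ Vt[:k]
print(np.abs(reconstructed - curves).max())
```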
APA, Harvard, Vancouver, ISO, and other styles
44

Canaud, Matthieu. "Estimation de paramètres et planification d’expériences adaptée aux problèmes de cinétique - Application à la dépollution des fumées en sortie des moteurs." Thesis, Saint-Etienne, EMSE, 2011. http://www.theses.fr/2011EMSE0619/document.

Full text
Abstract:
Physico-chemical models designed to represent experimental reality may prove to be inadequate. This is the case of the nitrogen oxide trap, used as the application support of our thesis, which is a catalytic system treating the pollutant emissions of the diesel engine. The outputs are curves of pollutant concentrations, which are functional data, depending on scalar initial concentrations. The initial objective of this thesis is to propose experimental designs that are meaningful to the user. However, since experimental designs rely on models, most of the work has led us to propose a statistical representation that takes expert knowledge into account and allows this design to be built. Three lines of research were explored. We first considered a non-functional modelling with the use of kriging theory. Then, we took into account the functional dimension of the responses, with the application and extension of varying-coefficient models. Finally, starting again from the original model, we made the kinetic parameters depend on the (scalar) inputs using a nonparametric representation. To compare the methods, it was necessary to conduct an experimental campaign, and we propose an exploratory design approach based on maximum entropy.
APA, Harvard, Vancouver, ISO, and other styles
45

Durrande, Nicolas. "Étude de classes de noyaux adaptées à la simplification et à l’interprétation des modèles d’approximation. Une approche fonctionnelle et probabiliste." Thesis, Saint-Etienne, EMSE, 2011. http://www.theses.fr/2011EMSE0631/document.

Full text
Abstract:
The framework of this thesis is the approximation of functions for which the value is known at a limited number of points. More precisely, we consider here the so-called kriging models from two points of view: the approximation in reproducing kernel Hilbert spaces and Gaussian process regression. When the function to approximate depends on many variables, the required number of points can become very large and the interpretation of the obtained models remains difficult because the model is still a high-dimensional function. In light of those remarks, the main part of our work addresses the issue of simplified models by studying a key concept of kriging models, the kernel. More precisely, the following aspects are addressed: additive kernels for additive models and kernel decomposition for sparse modeling. Finally, we propose a class of kernels that is well suited for functional ANOVA representation and global sensitivity analysis.
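An additive kernel can be sketched as a sum of one-dimensional kernels, one per input variable, so that the associated model is a sum of univariate functions. The 1-D kernels and data below are hypothetical illustrations.

```python
import numpy as np

def rbf_1d(x1, x2, length=0.3):
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / length**2)

def additive_kernel(X1, X2):
    # k(x, y) = sum_i k_i(x_i, y_i): the resulting model is a sum of univariate terms.
    return sum(rbf_1d(X1[:, i], X2[:, i]) for i in range(X1.shape[1]))

X = np.random.default_rng(11).uniform(size=(10, 4))     # 10 points in dimension 4
K = additive_kernel(X, X)
print(K.shape, bool(np.all(np.linalg.eigvalsh(K) > -1e-10)))   # symmetric PSD Gram matrix
```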
APA, Harvard, Vancouver, ISO, and other styles
46

Guerra, Jonathan. "Optimisation multi-objectif sous incertitudes de phénomènes de thermique transitoire." Thesis, Toulouse, ISAE, 2016. http://www.theses.fr/2016ESAE0024/document.

Full text
Abstract:
This work aims at solving multi-objective optimization problems in the presence of uncertainties and costly numerical simulations. A validation is carried out on a transient thermal test case. First of all, we develop a multi-objective optimization algorithm based on kriging and requiring few calls to the objective functions. This approach is adapted to the distribution of the computations and favors the restitution of a regular approximation of the complete Pareto front. The optimization problem under uncertainties is then studied by considering the worst-case and probabilistic robustness measures. The superquantile integrates every event for which the output value lies between the quantile and the worst case. However, it requires a large number of calls to the uncertain objective function to be accurately evaluated. Few methods make it possible to approach the superquantile of the output distribution of costly functions. To this end, we have developed an estimator based on importance sampling and kriging. It enables superquantiles to be approached with little error and a limited number of samples. Moreover, a coupling with the multi-objective algorithm allows some of those evaluations to be reused. In the last part, we build spatio-temporal surrogate models capable of predicting nonlinear, dynamic and long-term phenomena using few learning trajectories. The construction is based on recurrent neural networks, and a construction methodology facilitating the learning is proposed.
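For reference, a plain Monte Carlo estimate of the superquantile (the expected value of the output beyond the quantile) looks as follows; the thesis replaces this brute-force estimate with an importance-sampling/kriging estimator. The output distribution below is hypothetical.

```python
import numpy as np

def superquantile(samples, alpha):
    """Mean of the output over the (1 - alpha) worst cases (a.k.a. CVaR)."""
    q = np.quantile(samples, alpha)
    tail = samples[samples >= q]
    return q if tail.size == 0 else tail.mean()

rng = np.random.default_rng(9)
max_temperature = rng.lognormal(mean=4.0, sigma=0.25, size=100_000)  # hypothetical output
print(np.quantile(max_temperature, 0.95), superquantile(max_temperature, 0.95))
```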
APA, Harvard, Vancouver, ISO, and other styles
47

Mosquera, Meza Rolando. "Interpolation sur les variétés grassmanniennes et applications à la réduction de modèles en mécanique." Thesis, La Rochelle, 2018. http://www.theses.fr/2018LAROS008/document.

Full text
Abstract:
This dissertation deals with interpolation on Grassmann manifolds and its applications to reduced order methods in mechanics and more generally for systems of evolution partial differential systems. After a description of the POD method, we introduce the theoretical tools of grassmannian geometry which will be used in the rest of the thesis. This chapter gives this dissertation a mathematical rigor in the performed algorithms, their validity domain, the error estimate with respect to the grassmannian distance on one hand and also a self-contained character to the manuscript. The interpolation on Grassmann manifolds method introduced by David Amsallem and Charbel Farhat is afterward presented. This method is the starting point of the interpolation methods that we will develop in this thesis. The method of Amsallem-Farhat consists in chosing a reference interpolation point, mapping forward all interpolation points on the tangent space of this reference point via the geodesic logarithm, performing a classical interpolation on this tangent space and mapping backward the interpolated point to the Grassmann manifold by the geodesic exponential function. We carry out the influence of the reference point on the quality of the results through numerical simulations. In our first work, we present a grassmannian version of the well-known Inverse Distance Weighting (IDW) algorithm. In this method, the interpolation on a point can be considered as the barycenter of the interpolation points where the used weights are inversely proportional to the distance between the considered point and the given interpolation points. In our method, denoted by IDW-G, the geodesic distance on the Grassmann manifold replaces the euclidean distance in the standard framework of euclidean spaces. The advantage of our algorithm that we show the convergence undersome general assumptions, does not require a reference point unlike the method of Amsallem-Farhat. Moreover, to carry out this, we finally proposed a direct method, thanks to the notion of generalized barycenter instead of an earlier iterative method. However, our IDW-G algorithm depends on the choice of the used weighting coefficients. The second work deals with an optimal choice of the weighting coefficients, which take into account of the spatial autocorrelation of all interpolation points. Thus, each weighting coefficient depends of all interpolation points an not only on the distance between the considered point and the interpolation point. It is a grassmannian version of the Kriging method, widely used in Geographic Information System (GIS). Our grassmannian Kriging method require also the choice of a reference point. In our last work, we develop a grassmannian version of Neville's method which allow the computation of the Lagrange interpolation polynomial in a recursive way via the linear interpolation of two points. The generalization of this algorithm to grassmannian manifolds is based on the extension of interpolation of two points (geodesic/straightline) that we can do explicitly. This algorithm does not require the choice of a reference point, it is easy to implement and very quick. Furthermore, the obtained numerical results are notable and better than all the algorithms described in this dissertation
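As a concrete illustration of the tangent-space procedure described in this abstract, the following minimal Python sketch (not code from the thesis) walks through the Amsallem-Farhat steps, assuming the standard SVD-based Grassmann logarithm and exponential maps of Edelman, Arias and Smith and an illustrative one-parameter Lagrange interpolation in the tangent space; all function names and the choice of interpolant are ours.

```python
# Minimal sketch (not the thesis code) of the Amsallem-Farhat interpolation:
# map sample subspaces to the tangent space at a reference point with the
# Grassmann logarithm, interpolate there, and map back with the exponential.
# The SVD-based log/exp maps follow Edelman, Arias and Smith; the one-parameter
# Lagrange interpolation of tangent vectors is an illustrative choice.
import numpy as np

def grassmann_log(Y0, Y1):
    """Tangent vector at Y0 pointing toward Y1 (columns of Y0, Y1 orthonormal)."""
    M = Y0.T @ Y1                          # assumes M invertible (subspaces not orthogonal)
    L = (Y1 - Y0 @ M) @ np.linalg.inv(M)   # part of Y1 orthogonal to span(Y0), rescaled
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    return U @ np.diag(np.arctan(s)) @ Vt

def grassmann_exp(Y0, Delta):
    """Point reached after unit time along the geodesic leaving Y0 with velocity Delta."""
    U, s, Vt = np.linalg.svd(Delta, full_matrices=False)
    Y = Y0 @ Vt.T @ np.diag(np.cos(s)) @ Vt + U @ np.diag(np.sin(s)) @ Vt
    Q, _ = np.linalg.qr(Y)                 # re-orthonormalize for numerical safety
    return Q

def amsallem_farhat(params, bases, ref_index, new_param):
    """Interpolate orthonormal reduced bases (n x p) at a new scalar parameter."""
    Y_ref = bases[ref_index]
    tangents = [grassmann_log(Y_ref, Y) for Y in bases]
    Delta = np.zeros_like(Y_ref)
    for i, (p_i, T_i) in enumerate(zip(params, tangents)):
        w = np.prod([(new_param - p_j) / (p_i - p_j)
                     for j, p_j in enumerate(params) if j != i])
        Delta += w * T_i                   # Lagrange interpolation in the tangent space
    return grassmann_exp(Y_ref, Delta)
```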
APA, Harvard, Vancouver, ISO, and other styles
48

Barbarroux, Loïc. "Contributions à la modélisation multi-échelles de la réponse immunitaire T-CD8 : construction, analyse, simulation et calibration de modèles." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEC026/document.

Full text
Abstract:
Upon infection by an intracellular pathogen, the organism triggers a specific immune response whose main actors are the CD8 T cells. These cells are responsible for eradicating this type of infection and for building the individual's immune repertoire. The processes making up the immune response act over several interconnected physical scales (the intracellular scale, the single-cell scale and the cell-population scale); the immune response is therefore a complex process, for which it is difficult to observe or measure the links between the different phenomena involved. We propose three multiscale mathematical models of the CD8 immune response, built with different formalisms but sharing the same idea: making the behavior of the CD8 T cells depend on their intracellular content. For each model we present, where possible, its construction from the selected biological hypotheses, its mathematical study and its ability to reproduce the immune response in numerical simulations. The proposed models reproduce the CD8 T-cell immune response qualitatively and quantitatively and thus constitute useful preliminary tools for understanding this biological phenomenon.
APA, Harvard, Vancouver, ISO, and other styles
49

Zumpe, Martin Kai. "Stabilité macroéconomique, apprentissage et politique monétaire : une approche comparative : modélisation DSGE versus modélisation multi-agents." Thesis, Bordeaux 4, 2012. http://www.theses.fr/2012BOR40022/document.

Full text
Abstract:
This thesis analyses the role of learning in two different modelling frameworks. In the new canonical model with adaptive learning, the most striking features of the learning dynamics concern the ability of monetary policy rules to ensure convergence to the rational expectations equilibrium; the transmission mechanism of monetary policy is the substitution effect associated with the consumption channel. In an agent-based model that relaxes some restrictive assumptions of the new canonical model while remaining structurally close to it, the aggregate variables evolve at some distance from that equilibrium and markedly different dynamics are observed: monetary policy affects the aggregate variables only marginally, through the income effect of the consumption channel. With an evolutionary social-learning process, the economy converges towards a low level of economic activity, whereas introducing a process in which agents learn individually using their mental models attenuates the depressive character of the learning dynamics. These differences between the two modelling frameworks show how difficult it is to generalise the results of the new canonical model.
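In this literature, adaptive learning is usually implemented as recursive least-squares updating of agents' perceived law of motion (Evans and Honkapohja); the short Python sketch below shows that updating rule in isolation, with hypothetical variable names and data, and is not taken from the thesis.

```python
# Recursive least-squares (RLS) adaptive learning: agents regress the observed
# outcome y_t on regressors x_t and update their belief vector phi each period.
# Illustrative only; names, data and the constant gain are hypothetical.
import numpy as np

def rls_update(phi, R, x_t, y_t, gain=0.05):
    """One learning step: update the moment matrix R and the beliefs phi."""
    R_new = R + gain * (np.outer(x_t, x_t) - R)
    forecast_error = y_t - x_t @ phi
    phi_new = phi + gain * np.linalg.solve(R_new, x_t * forecast_error)
    return phi_new, R_new

# Toy usage with a two-regressor perceived law of motion.
rng = np.random.default_rng(0)
phi, R = np.zeros(2), np.eye(2)
for _ in range(200):
    x_t = np.array([1.0, rng.normal()])             # intercept and one state variable
    y_t = 0.5 + 0.8 * x_t[1] + rng.normal(0, 0.1)   # data-generating process
    phi, R = rls_update(phi, R, x_t, y_t)
print(phi)  # beliefs end up close to (0.5, 0.8)
```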
APA, Harvard, Vancouver, ISO, and other styles
50

Jian, Yi Ru, and 簡宜如. "Application of kriging and cokriging on predicting spatial variability of soil properties." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/50688874112080000646.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Agricultural Chemistry
Academic year 83 (ROC calendar)
Spatial variability of soil properties was investigated by kriging and cokriging from 132 point observations in a 10-square-kilometer area of Changhua County. Soil properties including P, Ca, Mg, Fe, sand %, silt %, clay % and the sum of exchangeable bases (SEB) were selected for the study. Structural analysis showed that these properties were moderately spatially dependent. The spatial distributions predicted from the minimum sampling densities were all significantly correlated (p < 0.1%) with those predicted from the maximum densities, and the mean absolute errors and mean squared errors between observed and estimated values were reduced by cokriging with auxiliary variables oversampled relative to the main variable. SEB, sand %, silt % and clay % were highly intercorrelated, so they could serve as either main or auxiliary variables: topsoil sand %, silt % and clay % were used as auxiliary variables in predicting topsoil SEB, and topsoil SEB and clay % as auxiliary variables in predicting subsoil SEB. The results suggested that both ordinary kriging and cokriging can be used to predict the spatial distribution of soil properties sampled at a large scale. By providing improved estimates with a reduced mean estimation error and increasing sampling efficiency, cokriging can be an effective technique for predicting the spatial variability of soil properties.
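As a pointer for readers new to the estimators compared in this abstract, here is a minimal ordinary kriging sketch in Python (illustrative, not the thesis code), assuming a spherical variogram with hypothetical nugget, sill and range; cokriging extends the same linear system with cross-variograms that link the main variable (e.g. SEB) to the oversampled auxiliary variables.

```python
# Ordinary kriging sketch: predict a soil property at x0 from point observations,
# assuming a spherical variogram with hypothetical nugget, sill and range values.
import numpy as np

def spherical_variogram(h, nugget=0.1, sill=1.0, a=500.0):
    """Spherical variogram model gamma(h) with range a (same units as h)."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h >= a, sill, np.where(h == 0.0, 0.0, g))

def ordinary_kriging(coords, values, x0):
    """Solve the ordinary kriging system; return (prediction, kriging variance)."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    gamma = spherical_variogram(d)
    # Augment with the unbiasedness (Lagrange multiplier) row and column.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical_variogram(np.linalg.norm(coords - x0, axis=1))
    sol = np.linalg.solve(A, b)
    weights, mu = sol[:n], sol[n]
    prediction = weights @ values
    variance = weights @ b[:n] + mu     # ordinary kriging variance
    return prediction, variance

# Example with made-up sample locations (metres) and property values.
coords = np.array([[0.0, 0.0], [300.0, 50.0], [100.0, 400.0], [450.0, 300.0]])
values = np.array([2.1, 2.8, 1.9, 2.5])
print(ordinary_kriging(coords, values, np.array([200.0, 200.0])))
```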
APA, Harvard, Vancouver, ISO, and other styles
