Dissertations on the topic "Bayesian estimate"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 dissertations for research on the topic "Bayesian estimate".
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract whenever the relevant parameters are available in the metadata.
Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.
OLIVEIRA, ANA CRISTINA BERNARDO DE. „BAYESIAN MODEL TO ESTIMATE ADVERTISING RECALL IN MARKETING“. PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1997. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=7528@1.
The importance of systems that continuously monitor consumer response to advertising is widely recognized in the market research community. Systematic collection of this kind of information matters because it allows past campaigns to be reviewed, trends detected in pre-tests to be corrected, and decision-making in advertising departments to be better guided. Analyses of consumer markets define and attempt to measure many variables in studies of advertising effectiveness; awareness of a particular advertisement within a consumer population is one such quantity and is the subject of this work. We define and implement a model based on dynamic Generalised Linear Models that is used to measure this quantity.
James, Peter Welbury. „Design and analysis of studies to estimate cerebral blood flow“. Thesis, University of Newcastle Upon Tyne, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.251020.
Rodewald, Oliver Russell. „Use of Bayesian inference to estimate diversion likelihood in a PUREX facility“. Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/76951.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 66-67).
Nuclear Fuel reprocessing is done today with the PUREX process, which has been demonstrated to work at industrial scales at several facilities around the world. Use of the PUREX process results in the creation of a stream of pure plutonium, which allows the process to be potentially used by a proliferator. Safeguards have been put in place by the IAEA and other agencies to guard against the possibility of diversion and misuse, but the cost of these safeguards and the intrusion into a facility they represent could cause a fuel reprocessing facility operator to consider foregoing standard safeguards in favor of diversion detection that is less intrusive. Use of subjective expertise in a Bayesian network offers a unique opportunity to monitor a fuel reprocessing facility while collecting limited information compared to traditional safeguards. This work focuses on the preliminary creation of a proof of concept Bayesian network and its application to a model nuclear fuel reprocessing facility.
by Oliver Russell Rodewald.
S.M. and S.B.
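The abstract above describes inference in a Bayesian network built from subjective expertise. Below is a minimal sketch of that kind of calculation, assuming a single binary "diversion" node with two conditionally independent binary indicators; the node names and all probability values are illustrative placeholders, not figures from the thesis.

```python
# Minimal discrete Bayesian-network sketch: infer the probability of material
# diversion from two conditionally independent indicators via exact enumeration.
# All probabilities below are illustrative placeholders, not values from the thesis.

P_D = {True: 0.01, False: 0.99}           # prior on diversion
P_S_given_D = {True: 0.85, False: 0.10}   # P(solvent-usage anomaly | diversion state)
P_I_given_D = {True: 0.70, False: 0.05}   # P(inventory discrepancy | diversion state)

def posterior_diversion(s_obs: bool, i_obs: bool) -> float:
    """P(D = True | S = s_obs, I = i_obs) by enumeration over the diversion node."""
    def joint(d: bool) -> float:
        p_s = P_S_given_D[d] if s_obs else 1.0 - P_S_given_D[d]
        p_i = P_I_given_D[d] if i_obs else 1.0 - P_I_given_D[d]
        return P_D[d] * p_s * p_i
    return joint(True) / (joint(True) + joint(False))

if __name__ == "__main__":
    print(f"P(diversion | anomaly seen, no discrepancy) = {posterior_diversion(True, False):.4f}")
```

The same enumeration generalizes to larger networks by summing the joint probability over all unobserved nodes.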
SOUZA, MARCUS VINICIUS PEREIRA DE. „A BAYESIAN APPROACH TO ESTIMATE THE EFFICIENT OPERATIONAL COSTS OF ELECTRICAL ENERGY UTILITIES“. PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2008. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=12361@1.
This thesis presents the main results of cost-efficiency measures for 60 Brazilian electricity distribution utilities. Following the yardstick competition scheme, a Kohonen Neural Network (KNN) was used to identify groups of similar utilities. The KNN results are not deterministic, since the synaptic weights of the network are initialized at random, so a Monte Carlo simulation was run to find the most frequent clusters. The efficiency measures were obtained from DEA models (input oriented, with and without weight restrictions) and from Bayesian and frequentist stochastic frontier models (using Cobb-Douglas and Translog cost functions). In all models, DEA and SFA alike, the only input variable is operational cost (OPEX), and the efficiency scores represent the potential reduction of these costs for each utility assessed. The outputs are the cost drivers of OPEX: the number of consumer units (a proxy for the amount of service), the amount of energy distributed (a proxy for total product) and the length of the distribution network (a proxy for the dispersion of consumers over the concession area). Finally, it is worth noting that these techniques can mitigate information asymmetry and improve the regulator's ability to compare the performance of the utilities in incentive regulation environments.
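The input-oriented DEA scores mentioned in the abstract above can be computed as one small linear program per utility. The sketch below assumes constant returns to scale, a single input (OPEX) and three outputs, with made-up data for four utilities; it is not the exact formulation (with weight restrictions) used in the thesis.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR (constant returns) DEA with one input (OPEX) and three
# outputs (customers, energy delivered, network length). Data are illustrative.
opex = np.array([120.0, 95.0, 210.0, 150.0])            # input, one value per utility
outputs = np.array([[50.0, 300.0, 12.0],                # rows: utilities
                    [40.0, 260.0,  9.0],                # cols: customers (1e3),
                    [80.0, 520.0, 30.0],                #       GWh, 1e3 km of network
                    [55.0, 330.0, 15.0]])

def dea_efficiency(k: int) -> float:
    """Efficiency of unit k: min theta s.t. a convex-cone combination of all
    units produces at least unit k's outputs using at most theta * its input."""
    n = len(opex)
    c = np.r_[1.0, np.zeros(n)]                          # minimise theta
    A_ub = [np.r_[-opex[k], opex]]                       # sum_j lam_j x_j <= theta * x_k
    b_ub = [0.0]
    for m in range(outputs.shape[1]):                    # sum_j lam_j y_jm >= y_km
        A_ub.append(np.r_[0.0, -outputs[:, m]])
        b_ub.append(-outputs[k, m])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

for k in range(len(opex)):
    print(f"utility {k}: efficiency score = {dea_efficiency(k):.3f}")
```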
Xiao, Yuqing. „Estimate the True Pass Probability for Near-Real-Time Monitor Challenge Data Using Bayesian Analysis“. Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/math_theses/20.
HUAMANI, LUIS ALBERTO NAVARRO. „A BAYESIAN PROCEDURE TO ESTIMATE THE INDIVIDUAL CONTRIBUTION OF INDIVIDUAL END USES IN RESIDENTIAL ELECTRICAL ENERGY CONSUMPTION“. PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1997. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8691@1.
This dissertation investigates the use of Seemingly Unrelated multivariate regression models, from a Bayesian point of view, to estimate the load curves of the main household appliances. A Conditional Demand Analysis (CDA) structure is used, which is of special interest in the residential and commercial sectors for demand-side management of residential consumers' habits. The work involves three main parts: a presentation of the classical statistical methodologies used to estimate load curves; a study of Seemingly Unrelated multivariate regression models under a Bayesian approach; and the development of the model in a case study. The presentation of the classical methodologies includes a preliminary review of the CDA structure for the univariate case, using multiple regression, and for the multivariate case, using Seemingly Unrelated multivariate regression, where the performance of the structure depends on the correlation structure among the errors of hourly consumption over a given day, as well as a review of the methodologies used to estimate the load curves. The study of Seemingly Unrelated multivariate regression models from the Bayesian standpoint considers a factor that is important for the performance of the estimation methodology, namely prior information. In the model development, the load curves of the main household appliances were estimated under a Bayesian approach, showing the ability of the methodology to capture both types of information: engineering estimates and CDA estimates. The results, evaluated by the method above, proved superior to the classical models in explaining the data.
Bergström, David. „Bayesian optimization for selecting training and validation data for supervised machine learning : using Gaussian processes both to learn the relationship between sets of training data and model performance, and to estimate model performance over the entire problem domain“. Thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157327.
Li, Qing. „Recurrent-Event Models for Change-Points Detection“. Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/78207.
Ph. D.
Benko, Matej. „Hledaní modelů pohybu a jejich parametrů pro identifikaci trajektorie cílů“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-445467.
Nickless, Alecia. „Regional CO₂ flux estimates for South Africa through inverse modelling“. Doctoral thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29703.
Marković, Dimitrije, und Stefan J. Kiebel. „Comparative Analysis of Behavioral Models for Adaptive Learning in Changing Environments“. Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-214867.
Guo, Changbin. „Bayesian Reference Inference on the Ratio of Poisson Rates“. Digital Commons @ East Tennessee State University, 2006. https://dc.etsu.edu/etd/2194.
Marković, Dimitrije, und Stefan J. Kiebel. „Comparative Analysis of Behavioral Models for Adaptive Learning in Changing Environments“. Frontiers Research Foundation, 2016. https://tud.qucosa.de/id/qucosa%3A30009.
Brown, George Gordon Jr. „Comparing Bayesian, Maximum Likelihood and Classical Estimates for the Jolly-Seber Model“. NCSU, 2001. http://www.lib.ncsu.edu/theses/available/etd-20010525-142731.
BROWN Jr., GEORGE GORDON. Comparing Bayesian, Maximum Likelihood and Classical Estimates for the Jolly-Seber Model. (Under the direction of John F. Monahan and Kenneth H. Pollock.) In 1965 Jolly and Seber proposed a model to analyze data for open population capture-recapture studies. Despite frequent use of the Jolly-Seber model, likelihood-based inference is complicated by the presence of a number of unobservable variables that cannot be easily integrated from the likelihood. In order to avoid integration, various statistical methods have been employed to obtain meaningful parameter estimates. Conditional maximum likelihood, suggested by both Jolly and Seber, has become the standard method. Two new parameter estimation methods, applied to the Jolly-Seber Model D, are presented in this thesis. The first new method attempts to obtain maximum likelihood estimates after integrating all of the unobservable variables from the Jolly-Seber Model D likelihood. Most of the unobservable variables can be analytically integrated from the likelihood; however, the variables dealing with the abundance of uncaptured individuals must be numerically integrated. A FORTRAN program was constructed to perform the numerical integration and search for MLEs using a combination of fixed quadrature and Newton's method. Since numerical integration tends to be very time consuming, MLEs could only be obtained from capture-recapture studies with a small number of sampling periods. In order to test the validity of the MLEs, a simulation experiment was conducted that obtained MLEs from simulated data for a wide variety of parameter values. Variance estimates for these MLEs were obtained using the Chapman-Robbins lower bound. These variance estimates were used to construct 90% confidence intervals with approximately correct coverage. However, in cases with few recaptures the MLEs performed poorly. In general, the MLEs tended to perform well on a wide variety of the simulated data sets and appear to be a valid tool for estimating population characteristics for open populations. The second new method employs the Gibbs sampler on an unintegrated and an integrated version of the Jolly-Seber Model D likelihood. For both versions, full conditional distributions are easily obtained for all parameters of interest; however, sampling from these distributions is non-trivial. Two FORTRAN programs were developed to run the Gibbs sampler for the unintegrated and the integrated likelihoods, respectively. Means, medians, modes and variances were computed from the resulting empirical posterior distributions and used for inference. Spectral density was used to construct a variance estimate for the posterior mean. Equal-tailed posterior density regions were calculated directly from the posterior distributions, and a simulation experiment was conducted to test the validity of these density regions; they also have approximately the proper coverage provided that the capture probability is not too small. Convergence to a stationary distribution is explored for both versions of the likelihood. Convergence was often difficult to detect, so a test of convergence was constructed by comparing two independent chains from both versions of the Gibbs sampler. Finally, an experiment was conducted to compare these two new methods and the traditional conditional maximum likelihood estimates using data simulated from a capture-recapture experiment with 4 sampling periods.
This experiment showed that there is little difference between the conditional maximum likelihood estimates and the 'true' maximum likelihood estimates when the population size is large. A second simulation experiment was conducted to determine which of the 3 estimation methods provided the 'best' estimators. This experiment was largely inconclusive, as no single method routinely outperformed the others.
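As a much simpler illustration of Bayesian abundance estimation than the open-population Jolly-Seber Model D discussed above, the sketch below computes a grid posterior for the population size in a closed two-sample capture-recapture study; the sample counts and the uniform prior are assumptions made for the example.

```python
import numpy as np
from scipy.stats import hypergeom

# Bayesian abundance estimation for a *closed* two-sample capture-recapture
# study. n1 animals are marked in the first sample; the second sample of size
# n2 contains m marked animals. Numbers are illustrative.
n1, n2, m = 60, 80, 25

# Discrete uniform prior on the population size N over a wide grid.
N_grid = np.arange(n1 + n2 - m, 2001)
prior = np.ones_like(N_grid, dtype=float)

# Likelihood: the number of marked animals in the second sample is hypergeometric.
like = hypergeom.pmf(m, N_grid, n1, n2)

post = prior * like
post /= post.sum()

mean_N = np.sum(N_grid * post)
cdf = np.cumsum(post)
lo, hi = N_grid[np.searchsorted(cdf, 0.05)], N_grid[np.searchsorted(cdf, 0.95)]
print(f"posterior mean N = {mean_N:.1f}, 90% credible interval = [{lo}, {hi}]")
```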
Liu, Liang. „Reconstructing posterior distributions of a species phylogeny using estimated gene tree distributions“. Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1155754980.
Luo, Shihua. „Bayesian Estimation of Small Proportions Using Binomial Group Test“. FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/744.
Shamseldin, Elizabeth C. Smith Richard L. „Asymptotic multivariate kriging using estimated parameters with bayesian prediction methods for non-linear predictands“. Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2008. http://dc.lib.unc.edu/u?/etd,1515.
Der volle Inhalt der QuelleTitle from electronic title page (viewed Sep. 16, 2008). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Statistics and Operations Research Statistics." Discipline: Statistics and Operations Research; Department/School: Statistics and Operations Research.
Khan, Mohammad Sajjad Coulibaly Paulin. „Climate change impact study on water resources with uncertainty estimates using Bayesian neural network“. *McMaster only, 2006.
Arruda, Gustavo. „DSGE model with banking sector for emerging economies: estimated using Bayesian methodology for Brazil“. reponame:Repositório Institucional do FGV, 2013. http://hdl.handle.net/10438/10574.
Der volle Inhalt der QuelleApproved for entry into archive by Suzinei Teles Garcia Garcia (suzinei.garcia@fgv.br) on 2013-03-01T19:36:36Z (GMT) No. of bitstreams: 1 Dissertação Gustavo Arruda -final.pdf: 429365 bytes, checksum: 0b43505e1650c187e0c671b7ed538fce (MD5)
Made available in DSpace on 2013-03-01T19:37:31Z (GMT). No. of bitstreams: 1 Dissertação Gustavo Arruda -final.pdf: 429365 bytes, checksum: 0b43505e1650c187e0c671b7ed538fce (MD5) Previous issue date: 2013-01-30
Emerging economies face important credit constraints compared with advanced economies, yet Dynamic Stochastic General Equilibrium (DSGE) models for emerging economies still need to advance in this discussion. We propose a DSGE model intended to represent an emerging economy with a banking sector, based on Gerali et al. (2010). Our contribution is to consider a share of expected annual earnings as collateral for impatient households' loans. We estimate the proposed model for Brazil using Bayesian techniques and find that economies with this type of collateral restriction tend to feel the impact of monetary policy shocks more quickly, owing to the exposure of the banking sector to changes in the expected wage.
Breuss, Fritz, und Katrin Rabitsch. „An estimated two-country DSGE model of Austria and the Euro Area“. Europainstitut, WU Vienna University of Economics and Business, 2008. http://epub.wu.ac.at/558/1/document.pdf.
Series: EI Working Papers / Europainstitut
Som, Agniva. „Paradoxes and Priors in Bayesian Regression“. The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1406197897.
Vesper, Andrew Jay. „Three Essays of Applied Bayesian Modeling: Financial Return Contagion, Benchmarking Small Area Estimates, and Time-Varying Dependence“. Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:10912.
Statistics
Souza, Isaac Jales Costa. „Estimação bayesiana no modelo potência normal bimodal assimétrico“. PROGRAMA DE PÓS-GRADUAÇÃO EM MATEMÁTICA APLICADA E ESTATÍSTICA, 2016. https://repositorio.ufrn.br/jspui/handle/123456789/21722.
Der volle Inhalt der QuelleApproved for entry into archive by Arlan Eloi Leite Silva (eloihistoriador@yahoo.com.br) on 2017-01-23T13:11:35Z (GMT) No. of bitstreams: 1 IsaacJalesCostaSouza_DISSERT.pdf: 808186 bytes, checksum: 0218f6e40a4dfea5b56a9d90f17e0bfb (MD5)
Made available in DSpace on 2017-01-23T13:11:35Z (GMT). No. of bitstreams: 1 IsaacJalesCostaSouza_DISSERT.pdf: 808186 bytes, checksum: 0218f6e40a4dfea5b56a9d90f17e0bfb (MD5) Previous issue date: 2016-01-28
This work presents a Bayesian approach to the bimodal power-normal (BPN) and bimodal asymmetric power-normal (BAPN) models. First, we present the BPN model and specify non-informative and informative priors for the parameter that controls bimodality. The posterior distribution is obtained by MCMC, whose suitability is checked through a convergence diagnostic. We then use different informative priors for the bimodality parameter and carry out a sensitivity analysis to evaluate the effect of varying the hyperparameters on the posterior distribution. A simulation study is also performed to evaluate the performance of the Bayesian estimator under informative priors; the posterior-mode estimate generally showed better results in terms of mean squared error (MSE) and percentage bias than the maximum likelihood estimator. An application to real bimodal data is presented, and a linear regression model with BPN errors is introduced. For the BAPN model we likewise specify informative and non-informative priors for the bimodality and asymmetry parameters, run convergence diagnostics for the MCMC method used to obtain the posterior distribution, perform a sensitivity analysis, apply the model to real data and introduce a linear regression model with BAPN errors.
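The dissertation above relies on MCMC and a posterior-mode summary. The sketch below illustrates the same ingredients on the standard (unimodal) power-normal model rather than the bimodal or asymmetric extensions studied in the thesis: a random-walk Metropolis sampler on the log of the shape parameter, with a kernel-density estimate of the posterior mode. The prior and tuning constants are arbitrary choices for the example.

```python
import numpy as np
from scipy.stats import norm, gamma, gaussian_kde

rng = np.random.default_rng(1)

# Simulated data from a standard power-normal f(x|a) = a*phi(x)*Phi(x)**(a-1),
# generated by inverse transform: X = Phi^{-1}(U**(1/a)).
a_true = 3.0
x = norm.ppf(rng.uniform(size=200) ** (1.0 / a_true))

def log_post(a: float) -> float:
    if a <= 0:
        return -np.inf
    loglik = len(x) * np.log(a) + norm.logpdf(x).sum() + (a - 1) * norm.logcdf(x).sum()
    return loglik + gamma.logpdf(a, 2.0, scale=2.0)       # Gamma(2, scale=2) prior

# Random-walk Metropolis on log(a); the added 'prop'/'loga' terms are the Jacobian.
draws, loga = [], np.log(1.0)
for _ in range(20000):
    prop = loga + 0.15 * rng.standard_normal()
    log_ratio = (log_post(np.exp(prop)) + prop) - (log_post(np.exp(loga)) + loga)
    if np.log(rng.uniform()) < log_ratio:
        loga = prop
    draws.append(np.exp(loga))

a_samples = np.array(draws[5000:])                         # discard burn-in
grid = np.linspace(a_samples.min(), a_samples.max(), 400)
a_mode = grid[np.argmax(gaussian_kde(a_samples)(grid))]    # posterior-mode summary
print(f"posterior mean = {a_samples.mean():.2f}, posterior mode = {a_mode:.2f}")
```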
Ballesta, Artero Irene Maria. „Influence of the Estimator Selection in Scalloped Hammerhead Shark Stock Assessment“. Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/24819.
Master of Science
Zhong, Jinquan. „Seismic fragility estimates for corroded reinforced concrete bridge structures with two-column bents“. [College Station, Tex. : Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-3143.
Wu, Yi-Fang. „Accuracy and variability of item parameter estimates from marginal maximum a posteriori estimation and Bayesian inference via Gibbs samplers“. Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/5879.
Leotti, Vanessa Bielefeldt. „Modelos bayesianos para estimar risco relativo em desfechos binários e politômicos“. reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/80066.
The odds ratio (OR) and the relative risk (RR) are measures of association used in epidemiology. There are discussions about the disadvantages of the OR as a measure of association in prospective studies and arguments that the RR should be used instead, especially if the outcome is common (>10%). In the case of binary outcomes and independent data, alternatives to the OR estimated by logistic regression have been proposed. One is the log-binomial model and the other is Poisson regression with robust variance. Such models allow identifying factors associated with the outcome and estimating the probability of the event for each observational unit. Regarding the estimation of probabilities, robust Poisson regression has the disadvantage of possibly estimating probabilities greater than 1. This does not occur with the log-binomial model, which, however, can face convergence problems. Some authors recommend the log-binomial model as the first choice for analysis, leaving robust Poisson regression for situations in which the first model does not converge. In 2010, the use of Bayesian methodology was proposed as a way to solve the convergence problems, and simulations comparing it with the previous approaches were carried out. However, those simulations had limitations: categorical predictors were not considered; only one sample size was evaluated; only the median and the equal-tail credible interval were addressed in the Bayesian approach, when there are other options; and, most importantly, the comparative measures were calculated only for the model coefficients and not for the RR. In this thesis, these limitations have been overcome, and another Bayesian estimator of the RR, the posterior mode, presented less bias and smaller mean squared error in general. The models mentioned above are suitable for the analysis of independent observations; however, there are cases where this assumption is not valid, as in cluster-randomized trials or multilevel modeling. Only five papers were found with proposals for estimating the RR in these cases. When the interest is in estimating the RR for polytomous outcomes, only two studies presented suggestions. In this work, the Bayesian methodology proposed for binary outcomes and independent data was extended to deal with these two situations.
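For the simplest independent-data case discussed above, a Bayesian relative-risk estimate can be obtained directly from two binomial samples. The sketch below assumes Beta(1, 1) priors and invented counts, and reports the posterior median, a kernel-density posterior mode and an equal-tail credible interval; it does not reproduce the regression models compared in the thesis.

```python
import numpy as np
from scipy.stats import beta, gaussian_kde

rng = np.random.default_rng(0)

# Illustrative 2x2 data: events / totals in the exposed and unexposed groups.
events_exp, n_exp = 45, 120
events_unexp, n_unexp = 30, 150

# Independent Beta(1, 1) priors give Beta posteriors for the two risks;
# the relative-risk posterior is obtained by Monte Carlo.
p_exp = beta.rvs(1 + events_exp, 1 + n_exp - events_exp, size=100_000, random_state=rng)
p_unexp = beta.rvs(1 + events_unexp, 1 + n_unexp - events_unexp, size=100_000, random_state=rng)
rr = p_exp / p_unexp

grid = np.linspace(rr.min(), rr.max(), 500)
rr_mode = grid[np.argmax(gaussian_kde(rr)(grid))]          # posterior-mode summary
lo, hi = np.percentile(rr, [2.5, 97.5])
print(f"RR posterior median = {np.median(rr):.2f}, mode = {rr_mode:.2f}, "
      f"95% equal-tail interval = [{lo:.2f}, {hi:.2f}]")
```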
Devulder, Antoine. „Involuntary unemployment and financial frictions in estimated DSGE models“. Thesis, Paris 1, 2016. http://www.theses.fr/2016PA01E016/document.
Thanks to their internal consistency, DSGE models, built on microeconomic behavior, have become prevalent for business cycle and policy analysis in institutions. The recent crisis and governments' concern about persistent unemployment argue for mechanisms capturing imperfect adjustments in credit and labor markets. However, popular models such as that of Smets and Wouters (2003-2007), although unsophisticated in their representation of these markets, are able to replicate the data as well as the usual econometric tools. It is thus necessary to question the benefits of including these frictions in theoretical models for operational use. In this thesis, I address this issue and show that microfounded mechanisms specific to labor and credit markets can significantly alter the conclusions based on the use of an estimated DSGE model, from both a positive and a normative perspective. For this purpose, I build a two-country model of France and the rest of the euro area with exogenous rest-of-the-world variables, and estimate it with and without these two frictions using Bayesian techniques. By contrast with existing models, I propose two improvements of the representation of labor markets. First, following Pissarides (2009), only wages in new jobs are negotiated by firms and workers, engendering stickiness in the average real wage. Second, I develop a set of assumptions that makes labor market participation endogenous and unemployment involuntary, in the sense that unemployed workers are worse off than employed ones; including this setup in the estimated model is left for future research. Using the four estimated versions of the model, I undertake a number of analyses to highlight the role of financial and labor market frictions: an historical shock decomposition of fluctuations during the crisis, the evaluation of several monetary policy rules, a counterfactual simulation of the crisis under the assumption of a flexible exchange rate regime between France and the rest of the euro area and, lastly, the simulation of social VAT scenarios.
Otieno, Wilkistar. „A Framework for Determining the Reliability of Nanoscale Metallic Oxide Semiconductor (MOS) Devices“. Scholar Commons, 2010. http://scholarcommons.usf.edu/etd/3499.
Costa, Eliardo Guimarães da. „Tamanho amostral para estimar a concentração de organismos em água de lastro: uma abordagem bayesiana“. Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-05072018-164225/.
Sample size methodologies for estimating the organism concentration in ballast water and for verifying international standards are developed under a Bayesian approach. We consider the criteria of average coverage, of average length and of total cost minimization under the Poisson model with a gamma prior distribution and under the negative binomial model with a Pearson type VI prior distribution. Furthermore, we consider a Dirichlet process as a prior distribution in the Poisson model in order to gain more flexibility and robustness. For practical applications, we implemented computational routines using the R language.
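A sketch of the average length criterion for the Poisson-gamma case described above: for each candidate number of aliquots, the expected width of the 95% credible interval is approximated by Monte Carlo under the prior predictive distribution, and the smallest sample size meeting a target width is returned. The hyperparameters, aliquot volume and target width are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(42)

# Poisson sampling with a Gamma(shape=a, rate=b) prior on the organism
# concentration per unit volume; each of the n aliquots has volume v.
a, b, v = 2.0, 0.2, 1.0          # illustrative hyperparameters and aliquot volume
target_len, cred = 5.0, 0.95     # required average credible-interval length

def average_length(n: int, sims: int = 2000) -> float:
    """Average 95% credible-interval length under the prior predictive for n aliquots."""
    lam = gamma.rvs(a, scale=1.0 / b, size=sims, random_state=rng)
    counts = rng.poisson(lam[:, None] * v, size=(sims, n)).sum(axis=1)
    post_shape, post_rate = a + counts, b + n * v           # conjugate gamma posterior
    q_lo = gamma.ppf((1 - cred) / 2, post_shape, scale=1.0 / post_rate)
    q_hi = gamma.ppf(1 - (1 - cred) / 2, post_shape, scale=1.0 / post_rate)
    return float(np.mean(q_hi - q_lo))

n = 1
while average_length(n) > target_len:
    n += 1
print(f"smallest n with average interval length <= {target_len}: n = {n}")
```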
He, Qing. „Investigating the performance of process-observation-error-estimator and robust estimators in surplus production model: a simulation study“. Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/76859.
Master of Science
Heywood, Ben. „Investigations into the use of quantified Bayesian maximum entropy methods to generate improved distribution maps and biomass estimates from fisheries acoustic survey data /“. St Andrews, 2008. http://hdl.handle.net/10023/512.
Patel, Ekta, Gurtina Besla und Kaisey Mandel. „Orbits of massive satellite galaxies - II. Bayesian estimates of the Milky Way and Andromeda masses using high-precision astrometry and cosmological simulations“. OXFORD UNIV PRESS, 2017. http://hdl.handle.net/10150/624428.
Almeida, Josemir Ramos de. „Estimação clássica e Bayesiana em modelos de sobrevida com fração de cura“. Universidade Federal do Rio Grande do Norte, 2013. http://repositorio.ufrn.br:8080/jspui/handle/123456789/17012.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
In survival analysis, long-duration models allow the estimation of the cure fraction, which represents the portion of the population immune to the event of interest. Here we address classical and Bayesian estimation based on the standard mixture model and on the promotion time model, using different distributions (exponential, Weibull and Pareto) for the failure times. The data set used to illustrate the implementations is described in Kersey et al. (1987) and consists of a group of leukemia patients who underwent a certain type of transplant. The specific implementations used were numerical optimization by BFGS as implemented in R (base::optim), the Laplace approximation (own implementation) and Gibbs sampling as implemented in OpenBUGS. We describe the main features of the models used, the estimation methods and the computational aspects, and we also discuss how different priors can affect the Bayesian estimates.
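Mirroring the BFGS-plus-Laplace route mentioned above, the sketch below fits a standard mixture cure model with exponential failure times to simulated right-censored data: the posterior mode is found with scipy's BFGS optimizer, and the inverse-Hessian approximation it returns is used as a Laplace approximation to the posterior covariance. The priors, the censoring scheme and all numbers are assumptions for the example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(7)

# Simulated data from a mixture cure model with exponential failure times:
# S(t) = pi + (1 - pi) * exp(-lam * t), administrative censoring at a fixed horizon.
pi_true, lam_true, horizon, n = 0.3, 0.5, 8.0, 300
cured = rng.uniform(size=n) < pi_true
t_event = np.where(cured, np.inf, rng.exponential(1.0 / lam_true, size=n))
time = np.minimum(t_event, horizon)
event = (t_event <= horizon).astype(float)

def neg_log_post(theta):
    """Negative log-posterior; theta = (logit(pi), log(lam)),
    with weakly informative N(0, 10) priors on the transformed parameters."""
    pi, lam = expit(theta[0]), np.exp(theta[1])
    log_dens = np.log1p(-pi) + np.log(lam) - lam * time          # uncured failure density
    log_surv = np.log(pi + (1.0 - pi) * np.exp(-lam * time))     # population survival
    loglik = np.sum(event * log_dens + (1.0 - event) * log_surv)
    logprior = -0.5 * (theta[0] ** 2 + theta[1] ** 2) / 10.0
    return -(loglik + logprior)

res = minimize(neg_log_post, x0=np.zeros(2), method="BFGS")
map_pi, map_lam = expit(res.x[0]), np.exp(res.x[1])
# Laplace approximation: res.hess_inv approximates the posterior covariance
# of the transformed parameters at the mode.
se = np.sqrt(np.diag(res.hess_inv))
print(f"MAP cure fraction = {map_pi:.3f}, MAP rate = {map_lam:.3f}")
print(f"approx. posterior SDs on (logit pi, log lam) = {se.round(3)}")
```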
Ilangakoon, Nayani Thanuja. „Relationship between leaf area index (LAI) estimated by terrestrial LiDAR and remotely sensed vegetation indices as a proxy to forest carbon sequestration“. Bowling Green State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1402857524.
Fässler, Sascha M. M. „Target strength variability in Atlantic herring (Clupea harengus) and its effect on acoustic abundance estimates“. Thesis, University of St Andrews, 2010. http://hdl.handle.net/10023/1703.
Der volle Inhalt der QuelleSilva, Karolina Barone Ribeiro da. „Estimativas de máxima verosimilhança e bayesianas do número de erros de um software“. Universidade Federal de São Carlos, 2006. https://repositorio.ufscar.br/handle/ufscar/4498.
In this work we present the capture-recapture methodology, under both the classical and the Bayesian approach, to estimate the number of errors in a piece of software through its inspection by distinct reviewers. We present the general statistical model assuming independence among errors and among reviewers, and consider the particular cases of equally detectable (homogeneous) errors with reviewers that are not equally efficient (heterogeneous), and of errors that are not equally detectable (heterogeneous) with equally efficient (homogeneous) reviewers. Next, under the assumption of independence and heterogeneity among errors and independence and homogeneity among reviewers, we suppose that the heterogeneity of the errors is expressed by classifying them as easy or difficult to detect, with the detection probabilities of an easy error and of a difficult error assumed known. Finally, under the hypothesis of independence and homogeneity among errors, we present a new model allowing heterogeneity and dependence among reviewers. We also present examples with simulated and real data.
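For the simplest homogeneous-and-independent case described above, the two-list estimate of the total number of defects reduces to the classical (Chapman-corrected) Lincoln-Petersen formula; the sketch below uses invented review counts and is not the Bayesian machinery developed in the dissertation.

```python
# Two-reviewer capture-recapture estimate of the number of defects in a piece
# of software, assuming defects and reviewers are homogeneous and independent.
# The counts below are illustrative, not from the dissertation.
n1 = 28   # defects found by reviewer A
n2 = 23   # defects found by reviewer B
m = 12    # defects found by both reviewers

# Chapman's bias-corrected Lincoln-Petersen estimator and its variance.
N_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
var_N = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)) / ((m + 1) ** 2 * (m + 2))

undetected = N_hat - (n1 + n2 - m)
print(f"estimated total defects = {N_hat:.1f} (sd = {var_N ** 0.5:.1f}), "
      f"estimated defects still undetected = {undetected:.1f}")
```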
Bruzzone, Andrea. „P-SGLD : Stochastic Gradient Langevin Dynamics with control variates“. Thesis, Linköpings universitet, Statistik och maskininlärning, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-140121.
Justino, Josivan Ribeiro. „Estimativas de mortalidade para a região nordeste do Brasil em 2010: uma associação do método demográfico equação geral de balanceamento, com o estimador bayesiano empírico“. Universidade Federal do Rio Grande do Norte, 2013. http://repositorio.ufrn.br:8080/jspui/handle/123456789/13859.
One of the greatest challenges of demography today is to obtain consistent estimates of mortality, especially for small areas. The lack of this information hampers public health actions and compromises the quality of death classification, which worries demographers and epidemiologists seeking reliable mortality statistics for the country. In this context, the objective of this work is to obtain death adjustment factors for the correction of adult mortality, by state, meso-region and age group, in the Northeast region of Brazil in 2010. The proposal rests on two lines of observation, one demographic and one statistical, and considers two levels of coverage within the states of the Northeast: the meso-regions as larger areas and the municipalities as small areas. The methodological principle is to use the General Growth Balance demographic method to correct the observed deaths in the larger areas (meso-regions) of the states, since these are less prone to violations of the method's assumptions. Next, the statistical empirical Bayes estimator is applied, taking as the total number of deaths in each meso-region the value corrected by the demographic method, and taking as the small-area reference the deaths observed in the municipalities. This combination produces a smoothing effect on the degree of death coverage, owing to the association with the empirical Bayes estimator, and makes it possible to evaluate the degree of death coverage by age group at the municipal, meso-regional and state levels, with the advantage of estimating adjustment factors at the desired level of aggregation. The results grouped by state point to a significant improvement in the degree of death coverage obtained by combining the methods, with values above 80%: Alagoas (0.88), Bahia (0.90), Ceará (0.90), Maranhão (0.84), Paraíba (0.88), Pernambuco (0.93), Piauí (0.85), Rio Grande do Norte (0.89) and Sergipe (0.92). Advances in the control of registration information in the health system, together with improvements in the socioeconomic conditions and urbanization of the municipalities over the last decade, have provided better-quality death registration information in small areas.
Missiagia, Juliano Gallina. „Estimação Bayesiana do tamanho de uma população de diabéticos através de listas de pacientes“. Universidade Federal de São Carlos, 2005. https://repositorio.ufscar.br/handle/ufscar/4550.
Financiadora de Estudos e Projetos
In this work, a Bayesian methodology is presented to estimate the size of a population of diabetics through lists containing information on the patients. The methodology applied is analogous to capture-recapture in animal populations. We consider both the case in which the records of patient information are assumed correct and the case in which correct and incorrect records are taken into account. Under the assumption of correct records, the methodology is developed for two or more lists and Bayes estimates of the population size are derived. In a second model, the occurrence of correct and incorrect records of patient data is considered, and a two-stage estimation method for the model parameters is presented using two lists. For both models we present results with simulated and real examples.
Andrade, Chávez Francisco Mauricio. „Modelo de regresión Dirichlet bayesiano: aplicación para estimar la prevalencia del nivel de anemia infantil en centros poblados del Perú“. Master's thesis, Pontificia Universidad Católica del Perú, 2020. http://hdl.handle.net/20.500.12404/18683.
Top, Alioune. „Estimation paramétriques et tests d'hypothèses pour des modèles avec plusieurs ruptures d'un processus de poisson“. Thesis, Le Mans, 2016. http://www.theses.fr/2016LEMA1014/document.
This work is devoted to parametric estimation, hypothesis testing and goodness-of-fit test problems for non-homogeneous Poisson processes. First we consider two models having two jumps located by an unknown parameter. In the first model the sum of the jumps is positive; the second is a model of switching intensity, piecewise constant, in which the sum of the jumps is zero. For each model we studied the asymptotic properties of the Bayesian estimator (BE) and the maximum likelihood estimator (MLE): consistency, convergence in distribution and convergence of moments are shown. In particular, we show that the BE is asymptotically efficient. For the second model we also consider the problem of testing a simple hypothesis against a one-sided alternative, and describe the asymptotic properties (choice of the threshold and power) of the Wald test (WT) and the generalized likelihood ratio test (GLRT). For the proofs we use the method of Ibragimov and Khasminskii, based on the weak convergence of the normalized likelihood ratio in the Skorokhod space under a tightness criterion for the corresponding families of measures. Numerical simulations of the limiting variances of the estimators allow us to conclude that the BE outperforms the MLE. In the situation where the sum of the jumps is zero, we developed a numerical approach to obtain the MLE. We then consider the problem of constructing a goodness-of-fit test for a model with a scale parameter and show that the Cramér-von Mises type test is asymptotically parameter-free; it is also consistent.
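A reduced version of the problem above, with a single jump in the intensity and conjugate gamma priors on the two rates, admits a closed-form marginal posterior for the change-point location; the sketch below evaluates it on a grid for simulated event times. The single-jump setting, the priors and all numbers are simplifying assumptions, not the two-jump models analysed in the thesis.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(3)

# Poisson process on [0, T] with intensity lam1 before theta and lam2 after.
# Gamma(a, b) priors on both rates are integrated out analytically.
T, theta_true, lam1, lam2 = 10.0, 6.0, 2.0, 6.0
t1 = np.cumsum(rng.exponential(1 / lam1, 200))
t2 = theta_true + np.cumsum(rng.exponential(1 / lam2, 200))
times = np.concatenate([t1[t1 < theta_true], t2[t2 < T]])

a, b = 1.0, 1.0                                  # gamma prior hyperparameters
theta_grid = np.linspace(0.05, T - 0.05, 400)    # uniform prior on the change point

def log_marginal(theta: float) -> float:
    """Log marginal likelihood of theta with both rates integrated out."""
    n1 = np.sum(times <= theta)
    n2 = len(times) - n1
    return (gammaln(a + n1) - (a + n1) * np.log(b + theta)
            + gammaln(a + n2) - (a + n2) * np.log(b + T - theta))

logp = np.array([log_marginal(th) for th in theta_grid])
post = np.exp(logp - logp.max())
post /= post.sum()
print(f"posterior mean change point = {np.sum(theta_grid * post):.2f} "
      f"(true value used in the simulation: {theta_true})")
```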
Kubo, Hisahiko. „Study on rupture processes of large interplate earthquakes estimated by fully Bayesian source inversions using multi period-band strong-motion data -The 2011 Tohoku-oki and the 2011 Ibaraki-oki earthquakes-“. 京都大学 (Kyoto University), 2015. http://hdl.handle.net/2433/199110.
Molinari, Benedetto. „Sticky information and non-pricing policies in DSGE models“. Doctoral thesis, Universitat Pompeu Fabra, 2008. http://hdl.handle.net/10803/7379.
This thesis is organized in two parts. In the first, I seek to understand the relationship between frictions in information flows among firms and inflation persistence. To this end, I present a novel estimator for the Sticky Information Phillips Curve (Mankiw and Reis, 2002) and use it to estimate this model with U.S. postwar data. The main result is that the Sticky Information Phillips Curve can match inflation persistence only at the cost of mispredicting inflation variance. I conclude that the Sticky Information Phillips Curve is a valid model to explain inflation persistence but not an overall valid theory of inflation.
The second part presents new evidence on aggregate advertising expenditures in the U.S. and analyzes the effect of advertising on the aggregate economy by means of a dynamic stochastic general equilibrium model. Chapter 2 focuses on the short-run relations between aggregate advertising and the most common macroeconomic variables (aggregate consumption, GDP, total hours worked), with particular attention to the causal relationship between advertising and consumption, and shows that an increase in aggregate advertising significantly increases aggregate consumption. Chapter 3 focuses on the long-run effects of advertising on labor supply, showing that in economies where aggregate advertising is higher, agents supply more hours of work and are generally worse off in terms of welfare.
Pereira, da Silva Hélio Doyle. „Aplicación de modelos bayesianos para estimar la prevalencia de enfermedad y la sensibilidad y especificidad de tests de diagnóstico clínico sin gold standard“. Doctoral thesis, Universitat de Barcelona, 2016. http://hdl.handle.net/10803/523505.
Two key aims of diagnostic research are to accurately and precisely estimate disease prevalence and test sensitivity and specificity. Latent class models have been proposed that consider the correlation between subject measures determined by different tests in order to diagnose diseases for which gold standard tests are not available. In some clinical studies, several measures of the same subject are made with the same test under the same conditions (replicated measurements), and thus the replicated measurements for each subject are not independent. In the present study, we propose an extension of the Bayesian latent class Gaussian random effects model to fit data with binary outcomes for tests with replicated subject measures. We describe an application using data on hookworm infection collected in the municipality of Presidente Figueiredo, Amazonas State, Brazil. In addition, the performance of the proposed model was compared with that of current models (the subject random effects model and the conditional (in)dependence model) through a simulation study. As expected, the proposed model presented better accuracy and precision in the estimation of prevalence, sensitivity and specificity. For adequate disease control, the World Health Organization has proposed the diagnosis and treatment of latent tuberculous infection (LTBI) in groups at risk of developing the disease, such as children. There is no gold standard (GS) test for the diagnosis of LTBI. Statistical models based on latent class estimation allow the prevalence of infection and the accuracy of the tests used to be evaluated in the absence of a GS. We conducted a cross-sectional study of children up to 6 years of age who had been vaccinated with BCG in Manaus, Amazonas, Brazil. The objective of this study was to estimate the prevalence of LTBI in young children in contact with a household case of tuberculosis (TB-HCC) and to determine the accuracy and precision of the Tuberculin Skin Test (TST) and QuantiFERON-TB Gold In-Tube (QFT) using a latent class model. Fifty percent of the children with TB-HCC had LTBI, with the prevalence depending on the intensity and length of exposure to the index case. The sensitivity and specificity of the TST were 73% [95% confidence interval (CI): 53-91] and 97% (95% CI: 89-100), respectively, versus 53% (95% CI: 41-66) and 81% (95% CI: 71-90) for the QFT. The positive predictive value of the TST in children with TB-HCC was 91% (95% CI: 61-99), and that of the QFT was 74% (95% CI: 47-95). This is one of the first studies to estimate the prevalence of M. tuberculosis infection in children and the parameters of its main diagnostic tests using a latent class model. The results suggest that children in contact with an index case have a high risk of infection. The accuracy and the predictive values did not show significant differences according to the test applied, and combined use of the two tests showed scarce improvement in the diagnosis of LTBI.
Budde, Kiran Kumar. „A Matlab Toolbox for fMRI Data Analysis: Detection, Estimation and Brain Connectivity“. Thesis, Linköpings universitet, Datorseende, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-81314.
Motrunich, Anastasiia. „Estimation des paramètres pour les séquences de Markov avec application dans des problèmes médico-économiques“. Thesis, Le Mans, 2015. http://www.theses.fr/2015LEMA1009/document.
In the first part of this dissertation we consider several problems of finite-dimensional parameter estimation for Markov sequences in the asymptotics of large samples. The asymptotic behavior of the Bayesian estimators and of the estimators of the method of moments is described. It is shown that under regularity conditions these estimators are consistent and asymptotically normal, and that the Bayesian estimator is asymptotically efficient. The one-step and two-step maximum likelihood estimator processes are studied. These estimators allow us to construct asymptotically efficient estimators based on some preliminary estimators, say the estimators of the method of moments or the Bayes estimator, combined with the one-step maximum likelihood estimator structure. We propose particular non-linear autoregressive processes as examples and illustrate the properties of these estimators with the help of numerical simulations. In the second part we give applications of Markov processes in health economics. We compare homogeneous and non-homogeneous Markov models for the cost-effectiveness analysis of routine use of transparent dressings containing a chlorhexidine gluconate gel pad versus standard transparent dressings. The antimicrobial dressing protects central vascular accesses, reducing the risk of catheter-related bloodstream infections. The impact of the modeling approach on the decision to adopt antimicrobial dressings for critically ill patients is discussed.
Dortel, Emmanuelle. „Croissance de l'albacore (Thunnus albacares) de l'Océan Indien : de la modélisation statistique à la modélisation bio-énergétique“. Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20035/document.
Since the early 1960s, the growth of yellowfin has received particular attention both in research and for fisheries management. In the Indian Ocean, the management of the yellowfin stock, under the jurisdiction of the Indian Ocean Tuna Commission (IOTC), suffers from much uncertainty associated with the growth curve currently considered. In particular, there remain gaps in our knowledge of the basic biological and ecological processes regulating growth, although such knowledge is vital for understanding the productivity of stocks and their resilience to fishing pressure and ongoing oceanographic changes. Through modelling, this study aims to improve current knowledge of the growth of the Indian Ocean yellowfin population and thus strengthen the scientific advice on the stock status. Whilst most studies of yellowfin growth rely on a single data source, we implemented a hierarchical Bayesian model that exploits various sources of information on growth, i.e. direct age estimates obtained through otolith readings, analyses of modal progressions and individual growth rates derived from mark-recapture experiments, and that explicitly takes into account expert knowledge and the errors associated with each dataset and with the growth modelling process. In particular, the growth model was coupled with an ageing error model based on repeated otolith readings, which significantly improves the age estimates as well as the resulting growth estimates and allows a better assessment of their reliability. The growth curves obtained constitute a major improvement over the growth pattern currently used in the yellowfin stock assessment. They demonstrate that yellowfin exhibits two-stanza growth, characterized by a sharp acceleration at the end of the juvenile stage. However, they do not provide information on the biological and ecological mechanisms that lie behind this acceleration. For a better understanding of the factors involved, we implemented a bioenergetic model relying on the principles of Dynamic Energy Budget (DEB) theory. Two major assumptions were investigated: (i) low food availability during the juvenile stage, related to high intra- and inter-specific competition, and (ii) changes in diet, characterized by the consumption of more energetic prey by older yellowfin. It appears that these two assumptions may partially explain the growth acceleration.
Modesto, i. Alapont Vicent. „Muerte en UCIP estimada con el índice “PRISM”: comparación de la exactitud diagnóstica de las predicciones realizadas con un modelo de regresión logística y una red neuronal artificial. Una propuesta bayesiana“. Doctoral thesis, Universidad de Alicante, 2011. http://hdl.handle.net/10045/23578.
Rodriguez, Delphy. „Caractérisation de la pollution urbaine en Île-de-France par une synergie de mesures de surface et de modélisation fine échelle“. Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS341.
Assessing the harmful effects of air pollution requires high-resolution estimates of pollutant concentrations. Ambient pollutant concentrations are routinely measured by the surface monitoring sites of local agencies (AIRPARIF in the Paris area, France). Such networks are not dense enough to represent the strong horizontal gradients of pollutant concentrations over urban areas, while high-resolution models that simulate 3D pollutant concentration fields have large spatial coverage but suffer from uncertainties. Exploited independently, these two sources of information cannot accurately assess an individual's exposure. We suggest two approaches to this problem: (1) direct pollution measurement using low-cost mobile sensors and reference instruments. High variability of pollution levels is shown between microenvironments and even within the same room. Mobile sensors would have to be deployed on a large scale because of their technical constraints, while reference instruments are very expensive, cumbersome, and can only be used occasionally. (2) combining the concentration fields of the Parallel Micro-SWIFT-SPRAY (PMSS) model over Paris, at a horizontal resolution of 3 meters, with measurements from the AIRPARIF local ground stations. We determined "representativeness areas" - perimeters where concentrations are very close to those at the station location - from the PMSS simulations alone. Next, we developed a Bayesian model to extend the station measurements within these areas.