To see the other types of publications on this topic, follow the link: Bayesian estimate.

Dissertations on the topic "Bayesian estimate"

Explore the top 50 dissertations for your research on the topic "Bayesian estimate."

Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read its online abstract, where these details are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

OLIVEIRA, ANA CRISTINA BERNARDO DE. „BAYESIAN MODEL TO ESTIMATE ADVERTISING RECALL IN MARKETING“. PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1997. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=7528@1.

Abstract:
The importance of systems that continuously monitor consumer response to advertising is widely recognised in the market-research community. Systematic collection of this kind of information matters because it allows past campaigns to be reviewed, trends detected in pre-tests to be corrected, and decision making in advertising to be better guided. Analyses of consumer markets define and attempt to measure many variables in studies of advertising effectiveness; awareness of a particular advertisement in a consumer population is one such quantity, and the subject of this work. We define and implement a model based on dynamic Generalised Linear Models to measure it.
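A dynamic binomial GLM of the kind this abstract references can be sketched as follows; the notation is generic (assumed here, not taken from the thesis), with y_t positive recalls out of n_t interviews in period t:

```latex
\begin{align*}
y_t \mid \pi_t &\sim \operatorname{Binomial}(n_t, \pi_t), \\
\operatorname{logit}(\pi_t) &= F_t^{\top}\theta_t, \\
\theta_t &= G_t\,\theta_{t-1} + \omega_t, \qquad \omega_t \sim \mathcal{N}(0, W_t),
\end{align*}
```

so awareness is tracked by sequentially updating the state vector θ_t (carrying, e.g., a level and an advertising-pressure effect) as each wave of interviews arrives.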
2

James, Peter Welbury. „Design and analysis of studies to estimate cerebral blood flow“. Thesis, University of Newcastle Upon Tyne, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.251020.

3

Rodewald, Oliver Russell. „Use of Bayesian inference to estimate diversion likelihood in a PUREX facility“. Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/76951.

Abstract:
Thesis (S.M. and S.B.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 66-67).
Nuclear fuel reprocessing is done today with the PUREX process, which has been demonstrated to work at industrial scale at several facilities around the world. The PUREX process produces a stream of pure plutonium, which means it could potentially be used by a proliferator. Safeguards have been put in place by the IAEA and other agencies to guard against the possibility of diversion and misuse, but the cost of these safeguards, and the intrusion into a facility they represent, could cause a fuel reprocessing facility operator to consider forgoing standard safeguards in favor of less intrusive diversion detection. The use of subjective expertise in a Bayesian network offers a unique opportunity to monitor a fuel reprocessing facility while collecting limited information compared to traditional safeguards. This work focuses on the preliminary creation of a proof-of-concept Bayesian network and its application to a model nuclear fuel reprocessing facility.
by Oliver Russell Rodewald.
S.M. and S.B.
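As a minimal illustration of the kind of inference such a network performs, the sketch below encodes a single "diversion" node with two observable safeguards indicators and computes the posterior probability of diversion by exact enumeration. The structure and all probability values are illustrative assumptions, not figures from the thesis.

```python
# Two-indicator diversion network, inference by exact enumeration.
# Structure and all probabilities are illustrative assumptions only.

PRIOR_D = 0.01                            # P(diversion attempted)
P_S = {True: 0.70, False: 0.10}           # P(solvent-use anomaly | D)
P_M = {True: 0.80, False: 0.05}           # P(material-balance anomaly | D)

def posterior_diversion(s_obs: bool, m_obs: bool) -> float:
    """Return P(D = True | S = s_obs, M = m_obs)."""
    def joint(d: bool) -> float:
        p_d = PRIOR_D if d else 1.0 - PRIOR_D
        p_s = P_S[d] if s_obs else 1.0 - P_S[d]
        p_m = P_M[d] if m_obs else 1.0 - P_M[d]
        return p_d * p_s * p_m

    num = joint(True)
    return num / (num + joint(False))

print(posterior_diversion(True, False))   # one anomalous indicator
print(posterior_diversion(True, True))    # both indicators anomalous
```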
4

SOUZA, MARCUS VINICIUS PEREIRA DE. „A BAYESIAN APPROACH TO ESTIMATE THE EFFICIENT OPERATIONAL COSTS OF ELECTRICAL ENERGY UTILITIES“. PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2008. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=12361@1.

Abstract:
This thesis presents the main results of cost-efficiency measures for 60 Brazilian electricity distribution utilities. Based on a yardstick-competition scheme, a Kohonen neural network (KNN) was applied to identify groups of similar utilities. Because the network's synaptic weights are randomly initialised, the KNN results are not deterministic, so a Monte Carlo simulation was run to find the most frequent clusters. Efficiency scores were then obtained from DEA models (input-oriented, with and without weight constraints) and from Bayesian and frequentist stochastic frontier models (using Cobb-Douglas and Translog cost functions). In all models, DEA and SFA alike, the only input variable is operational cost (OPEX), and the efficiency scores represent the potential reduction of these costs for each utility. The outputs are the cost drivers of OPEX: the number of customers (a proxy for the amount of service), the total energy distributed (a proxy for total product) and the length of the distribution network (a proxy for the dispersion of customers in the concession area). Finally, these techniques can mitigate information asymmetry and improve the regulator's ability to compare utility performance in incentive-regulation environments.
5

Xiao, Yuqing. „Estimate the True Pass Probability for Near-Real-Time Monitor Challenge Data Using Bayesian Analysis“. Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/math_theses/20.

Abstract:
The U.S. Army's Chemical Demilitarization Program is designed to store, treat and destroy the nation's aging chemical weapons. It operates Near-Real-Time Monitors and Depot Area Monitoring Systems to detect chemical agents before concentrations become dangerous to workers, public health and the environment. The CDC recommends that the sampling and analytical methods measure within 25% of the true concentration 95% of the time; if this criterion is not met, the alarm set point or reportable level should be adjusted. Two methods are provided by the Army's Programmatic Laboratory and Monitoring Quality Assurance Plan to evaluate the monitoring systems against the CDC recommendations. This thesis addresses the potential problems associated with these two methods and proposes a Bayesian method in an effort to improve the assessment. Comparison of simulation results indicates that the Bayesian method produces a relatively better estimate for verifying monitoring system performance, as long as the prior is correct.
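A minimal sketch of the Bayesian update involved, assuming challenge tests are treated as Bernoulli pass/fail trials with a conjugate Beta prior on the true pass probability; the prior parameters and counts below are illustrative, not the thesis's values.

```python
# Conjugate Beta-Bernoulli update for the monitor's true pass probability.
# Prior parameters and challenge counts are illustrative assumptions.
from scipy import stats

a0, b0 = 19.0, 1.0            # prior centred near the 95% CDC criterion
passes, failures = 46, 4      # hypothetical challenge-test outcomes

posterior = stats.beta(a0 + passes, b0 + failures)
print("posterior mean:", posterior.mean())
print("P(true pass probability >= 0.95):", 1 - posterior.cdf(0.95))
print("95% credible interval:", posterior.interval(0.95))
```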
6

HUAMANI, LUIS ALBERTO NAVARRO. „A BAYESIAN PROCEDUCE TO ESTIMATE THE INDIVIDUAL CONTRIBUTION OF INDIVIDUAL END USES IN RESIDENCIAL ELECTRICAL ENERGY CONSUMPTION“. PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1997. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8691@1.

Abstract:
This dissertation investigates the use of the seemingly unrelated multivariate regression model, from a Bayesian perspective, to estimate the load curves of the main household appliances. A conditional demand analysis (CDA) structure is used, of particular interest to the commercial and residential sectors for demand-side management of residential consumers' habits. The work has three main parts: a review of the classical statistical methodologies used to estimate load curves; a study of seemingly unrelated multivariate regression models under a Bayesian approach; and the development of the model in a case study. The review covers the CDA structure for the univariate case, using multiple regression, and for the multivariate case, using seemingly unrelated regression, whose performance depends on the correlation structure among the errors of hourly consumption over a given day. The Bayesian study highlights a factor that is important to the performance of the estimation methodology: prior information. In the case study, the load curves of the main appliances were estimated under the Bayesian approach, showing how the methodology captures both types of information: engineering estimates and CDA estimates. The results outperformed the classical models in explaining the data.
7

Bergström, David. „Bayesian optimization for selecting training and validation data for supervised machine learning : using Gaussian processes both to learn the relationship between sets of training data and model performance, and to estimate model performance over the entire problem domain“. Thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157327.

Abstract:
Validation and verification in machine learning is an open problem that becomes increasingly important as applications become more critical. Among the applications are autonomous vehicles and medical diagnostics. These systems all need to be validated before being put into use, or else the consequences might be fatal. This master's thesis focuses on improving both learning and validation of machine learning models in cases where data can be generated or collected based on a chosen position, for example by taking and labelling photos at that position or by running a simulation which generates data from the chosen positions. The approach is twofold. The first part concerns modelling the relationship between any fixed-size set of positions and some real-valued performance measure. The second part involves calculating such a performance measure by estimating the performance over a region of positions. The result is two algorithms, both variations of Bayesian optimization. The first models the relationship between a set of points and some performance measure while also optimizing the function, thus finding the set of points which yields the highest performance. The second uses Bayesian optimization to approximate the integral of performance over the region of interest. Both algorithms are validated in two different simulated environments. They are applicable not only to machine learning: they can optimize any function which takes a set of positions and returns a value, but are most suitable when the function is expensive to evaluate.
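The second algorithm's core idea, estimating a performance integral from a Gaussian-process surrogate, can be sketched roughly as below, with scikit-learn and a toy objective standing in for an expensive evaluation; everything named in the snippet is an assumption, not the thesis's implementation.

```python
# Estimate region-average model performance from a GP surrogate by Monte
# Carlo integration of the posterior mean. Toy objective and all names are
# assumptions; the thesis's algorithms are more elaborate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def performance(x):                       # stand-in for an expensive evaluation
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

X_train = rng.uniform(0, 1, size=(15, 2))         # positions evaluated so far
y_train = performance(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-6)
gp.fit(X_train, y_train)

X_mc = rng.uniform(0, 1, size=(5000, 2))          # Monte Carlo points in region
mean, std = gp.predict(X_mc, return_std=True)
print("estimated region-average performance:", mean.mean())
print("average pointwise posterior std:", std.mean())
```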
8

Li, Qing. „Recurrent-Event Models for Change-Points Detection“. Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/78207.

Abstract:
The driving risk of novice teenagers is highest during the initial period after licensure but decreases rapidly. This dissertation develops recurrent-event change-point models to detect the time when driving risk decreases significantly for novice teenage drivers. The dissertation consists of three major parts: the first applies recurrent-event change-point models with identical change-points for all subjects; the second proposes models that allow change-points to vary among drivers via a hierarchical Bayesian finite mixture model; the third develops a non-parametric Bayesian model with a Dirichlet process prior. In the first part, two recurrent-event change-point models to detect the time of change in driving risk are developed. The models are based on a non-homogeneous Poisson process with piecewise-constant intensity functions. It is shown that the change-points only occur at the event times and that the maximum likelihood estimators are consistent. The proposed models are applied to the Naturalistic Teenage Driving Study, which continuously recorded in situ driving behaviour of 42 novice teenage drivers for the first 18 months after licensure using sophisticated in-vehicle instrumentation. The results indicate that the crash and near-crash rate decreases significantly after 73 hours of independent driving after licensure. The models in part one assume identical change-points for all drivers. However, several studies showed that different patterns of risk change over time might exist among teenagers, implying that the change-points might not be identical among drivers. In the second part, change-points are allowed to vary among drivers through a hierarchical Bayesian finite mixture model, considering that clusters exist among the teenagers. The prior for the mixture proportions is a Dirichlet distribution, and a Markov chain Monte Carlo algorithm is developed to sample from the posterior distributions. DIC is used to determine the best number of clusters. In a simulation study, the model performs well under different scenarios. For the Naturalistic Teenage Driving Study data, three clusters exist among the teenagers: the change-points are 52.30, 108.99 and 150.20 hours of driving after first licensure, respectively; the intensity rates increase for the first cluster but decrease for the other two; the change-point of the first cluster is the earliest and its average intensity rate the highest. In the second part, model selection is conducted to determine the number of clusters. An alternative is the Bayesian non-parametric approach. In the third part, a Dirichlet process mixture model is proposed, in which the change-points are assigned a Dirichlet process prior. A Markov chain Monte Carlo algorithm is developed to sample from the posterior distributions, and automatic clustering based on the change-points is obtained without specifying the number of latent clusters. Under the Dirichlet process mixture model, three clusters exist among the teenage drivers in the Naturalistic Teenage Driving Study, with change-points at 96.31, 163.83 and 279.19 hours. The results provide critical information for safety education, safety countermeasure development, and Graduated Driver Licensing policy making.
Ph. D.
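A minimal sketch of the single-change-point case described above: a Poisson process with piecewise-constant intensity, whose profile log-likelihood is maximised over candidate change-points at the event times. The data are simulated and the rates are assumptions; the thesis's models and estimators are considerably richer.

```python
# Single change-point in a Poisson process with piecewise-constant intensity:
# profile the log-likelihood over candidate change-points at the event times.
import numpy as np

rng = np.random.default_rng(1)
T, tau_true = 200.0, 73.0
early = np.cumsum(rng.exponential(1 / 0.30, 80))
late = tau_true + np.cumsum(rng.exponential(1 / 0.08, 80))
events = np.concatenate([early[early < tau_true], late[late < T]])

def profile_loglik(tau):
    n1 = np.sum(events <= tau)
    n2 = len(events) - n1
    if n1 == 0 or n2 == 0:
        return -np.inf
    lam1, lam2 = n1 / tau, n2 / (T - tau)   # piecewise-constant MLEs given tau
    return n1 * np.log(lam1) - lam1 * tau + n2 * np.log(lam2) - lam2 * (T - tau)

tau_hat = max(events[1:-1], key=profile_loglik)   # maximiser sits at an event time
print("estimated change-point:", tau_hat)
```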
9

Benko, Matej. „Hledaní modelů pohybu a jejich parametrů pro identifikaci trajektorie cílů“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-445467.

Abstract:
This thesis deals with removing the noise that arises in so-called multilateration measurements of airborne targets, drawing mainly on the theory of Bayesian estimation. The posterior density of the aircraft's true (exact) position is derived. Along with the aircraft's position (and velocity), the geometry of the trajectory the aircraft is currently following is also estimated, together with the so-called process noise, which characterises how much the true trajectory may deviate from it. Estimation of this process noise is the most important part of the thesis. A maximum likelihood approach and a Bayesian approach are derived, along with various refinements of these approaches that improve the estimate when, for example, the target changes manoeuvre, or that address the initial inaccuracy of the maximum likelihood estimate. Finally, the possibility of combining the approaches, i.e. jointly estimating both the geometry and the process noise, is demonstrated.
10

Nickless, Alecia. „Regional CO₂ flux estimates for South Africa through inverse modelling“. Doctoral thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29703.

Abstract:
Bayesian inverse modelling provides a top-down technique for verifying emissions and uptake of carbon dioxide (CO₂) from both natural and anthropogenic sources. It relies on accurate measurements of CO₂ concentrations at appropriately placed sites and on "best-guess" initial estimates of the biogenic and anthropogenic emissions, together with uncertainty estimates. The Bayesian framework improves current estimates of CO₂ fluxes based on independent measurements of CO₂ concentrations while being constrained by the initial estimates of these fluxes. Monitoring, reporting and verification (MRV) is critical for establishing whether emission-reducing activities to mitigate the effects of climate change are being effective, and the Bayesian inverse modelling approach of correcting CO₂ flux estimates is one of the tools regulators and researchers can use to refine these emission estimates. South Africa is known to be the largest emitter of CO₂ on the African continent. The first major objective of this research project was to carry out an optimal design of the CO₂ monitoring network for South Africa. This study used fossil fuel emission estimates from a satellite product based on observations of night-time lights and locations of power stations (the Fossil Fuel Data Assimilation System, FFDAS), and biogenic productivity estimates from a carbon assessment carried out for South Africa, to provide the initial CO₂ flux estimates and their uncertainties. Sensitivity analyses considered changes to the covariance matrix and the spatial scale of the inversion, as well as different optimisation algorithms, to assess the impact of these specifications on the optimal network solution. This question is addressed in Chapters 2 and 3. The second major objective was to use the Bayesian inverse modelling approach to obtain estimates of CO₂ fluxes over Cape Town and the surrounding area. I collected measurements of atmospheric CO₂ concentrations from March 2012 until July 2013 at the Robben Island and Hangklip lighthouses. CABLE (Community Atmosphere Biosphere Land Exchange), a land-atmosphere exchange model, provided the biogenic estimates of CO₂ fluxes and their uncertainties. Fossil fuel estimates and uncertainties were obtained by means of an inventory analysis for Cape Town. As no inventory analysis was available for Cape Town, this exercise formed an additional objective of the project, presented in Chapter 4. A spatially and temporally explicit, high-resolution surface of fossil fuel emission estimates was derived from road vehicle, aviation and shipping vessel count data, population census data, and industrial fuel-use statistics, making use of well-established emission factors. The city-scale inversion for Cape Town solved for weekly fluxes of CO₂ emissions on a 1 km × 1 km grid, keeping fossil fuel and biogenic emissions as separate sources. I present these results for the Cape Town inversion under the proposed best available configuration of the Bayesian inversion framework in Chapter 5. Due to the large number of CO₂ sources at this spatial and temporal resolution, the reference inversion solved for weekly fluxes in blocks of four weeks at a time. As the uncertainties around the biogenic flux estimates were large, the inversion corrected the prior fluxes predominantly through changes to the biogenic fluxes. I demonstrate the benefit of using a control vector with separate terms for the fossil fuel and biogenic flux components.

Sensitivity analyses were performed that solved for average weekly fluxes within a monthly inversion as well as for separate weekly fluxes (i.e. in one-week blocks), and that examined how changes to the prior information, the prior uncertainty estimates and the error correlations of the fluxes affect the Bayesian solution. These sensitivity tests, presented in Chapter 6, indicated that refining the estimates of biogenic fluxes and reducing their uncertainties, as well as taking advantage of spatial correlation between areas of homogeneous biota, would lead to the greatest improvement in the accuracy and precision of the posterior fluxes from the Cape Town metropolitan area.
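The flux correction described here typically takes the form of the standard linear-Gaussian Bayesian inversion update; in generic notation (assumed here, not quoted from the thesis), with prior fluxes s₀, prior covariance S₀, observed concentrations c, transport operator H and observation-error covariance R:

```latex
\begin{align*}
\hat{s} &= s_0 + S_0 H^{\top}\left(H S_0 H^{\top} + R\right)^{-1}\left(c - H s_0\right),\\
\hat{S} &= S_0 - S_0 H^{\top}\left(H S_0 H^{\top} + R\right)^{-1} H S_0 .
\end{align*}
```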
11

Marković, Dimitrije, und Stefan J. Kiebel. „Comparative Analysis of Behavioral Models for Adaptive Learning in Changing Environments“. Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-214867.

Abstract:
Probabilistic models of decision making under various forms of uncertainty have been applied in recent years to numerous behavioral and model-based fMRI studies. These studies were highly successful in enabling a better understanding of behavior and delineating the functional properties of brain areas involved in decision making under uncertainty. However, as different studies considered different models of decision making under uncertainty, it is unclear which of these computational models provides the best account of the observed behavioral and neuroimaging data. This is an important issue, as not performing model comparison may tempt researchers to over-interpret results based on a single model. Here we describe how in practice one can compare different behavioral models and test the accuracy of model comparison and parameter estimation of Bayesian and maximum-likelihood based methods. We focus our analysis on two well-established hierarchical probabilistic models that aim at capturing the evolution of beliefs in changing environments: Hierarchical Gaussian Filters and Change Point Models. To our knowledge, these two, well-established models have never been compared on the same data. We demonstrate, using simulated behavioral experiments, that one can accurately disambiguate between these two models, and accurately infer free model parameters and hidden belief trajectories (e.g., posterior expectations, posterior uncertainties, and prediction errors) even when using noisy and highly correlated behavioral measurements. Importantly, we found several advantages of Bayesian inference and Bayesian model comparison compared to often-used Maximum-Likelihood schemes combined with the Bayesian Information Criterion. These results stress the relevance of Bayesian data analysis for model-based neuroimaging studies that investigate human decision making under uncertainty.
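For reference, the maximum-likelihood baseline mentioned above scores each candidate model by the Bayesian Information Criterion; a minimal sketch with placeholder numbers (in a real comparison the log-likelihoods and parameter counts come from the fitted models):

```python
# Score two fitted models by the Bayesian Information Criterion.
# The log-likelihoods and parameter counts below are placeholders.
import numpy as np

def bic(loglik: float, n_params: int, n_obs: int) -> float:
    return -2.0 * loglik + n_params * np.log(n_obs)

n_obs = 400
bic_hgf = bic(loglik=-512.3, n_params=4, n_obs=n_obs)  # e.g. Hierarchical Gaussian Filter
bic_cpm = bic(loglik=-520.9, n_params=3, n_obs=n_obs)  # e.g. change-point model
print("preferred by BIC:", "HGF" if bic_hgf < bic_cpm else "CPM")
```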
12

Guo, Changbin. „Bayesian Reference Inference on the Ratio of Poisson Rates“. Digital Commons @ East Tennessee State University, 2006. https://dc.etsu.edu/etd/2194.

Abstract:
Bayesian reference analysis is a method of determining the prior under the Bayesian paradigm so that the prior contributes as little information as possible relative to the experiment. Estimation of the ratio of two independent Poisson rates is a common practical problem. In this thesis, the method of reference analysis is applied to derive the posterior distribution of the ratio of two independent Poisson rates, and then to construct point and interval estimates based on the reference posterior. In addition, the frequentist coverage property of HPD intervals is verified through simulation.
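A hedged sketch of the computation this abstract describes, using conjugate Gamma(1/2) (Jeffreys-type) priors as a stand-in for the thesis's reference prior; counts and exposures are illustrative:

```python
# Posterior for the ratio of two independent Poisson rates via conjugate
# Gamma(1/2) (Jeffreys-type) priors; counts and exposures are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x1, t1 = 18, 10.0        # events and exposure, group 1
x2, t2 = 7, 12.0         # events and exposure, group 2

lam1 = rng.gamma(x1 + 0.5, 1 / t1, size=100_000)   # posterior draws of rate 1
lam2 = rng.gamma(x2 + 0.5, 1 / t2, size=100_000)   # posterior draws of rate 2
ratio = lam1 / lam2

print("posterior median of the ratio:", np.median(ratio))
print("95% equal-tailed interval:", np.percentile(ratio, [2.5, 97.5]))
```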
13

Marković, Dimitrije, und Stefan J. Kiebel. „Comparative Analysis of Behavioral Models for Adaptive Learning in Changing Environments“. Frontiers Research Foundation, 2016. https://tud.qucosa.de/id/qucosa%3A30009.

Abstract:
Probabilistic models of decision making under various forms of uncertainty have been applied in recent years to numerous behavioral and model-based fMRI studies. These studies were highly successful in enabling a better understanding of behavior and delineating the functional properties of brain areas involved in decision making under uncertainty. However, as different studies considered different models of decision making under uncertainty, it is unclear which of these computational models provides the best account of the observed behavioral and neuroimaging data. This is an important issue, as not performing model comparison may tempt researchers to over-interpret results based on a single model. Here we describe how in practice one can compare different behavioral models and test the accuracy of model comparison and parameter estimation of Bayesian and maximum-likelihood based methods. We focus our analysis on two well-established hierarchical probabilistic models that aim at capturing the evolution of beliefs in changing environments: Hierarchical Gaussian Filters and Change Point Models. To our knowledge, these two, well-established models have never been compared on the same data. We demonstrate, using simulated behavioral experiments, that one can accurately disambiguate between these two models, and accurately infer free model parameters and hidden belief trajectories (e.g., posterior expectations, posterior uncertainties, and prediction errors) even when using noisy and highly correlated behavioral measurements. Importantly, we found several advantages of Bayesian inference and Bayesian model comparison compared to often-used Maximum-Likelihood schemes combined with the Bayesian Information Criterion. These results stress the relevance of Bayesian data analysis for model-based neuroimaging studies that investigate human decision making under uncertainty.
14

Brown, George Gordon Jr. „Comparing Bayesian, Maximum Likelihood and Classical Estimates for the Jolly-Seber Model“. NCSU, 2001. http://www.lib.ncsu.edu/theses/available/etd-20010525-142731.

Abstract:
BROWN Jr., GEORGE GORDON. Comparing Bayesian, Maximum Likelihood and Classical Estimates for the Jolly-Seber Model. (Under the direction of John F. Monahan and Kenneth H. Pollock.) In 1965, Jolly and Seber proposed a model to analyze data from open-population capture-recapture studies. Despite frequent use of the Jolly-Seber model, likelihood-based inference is complicated by the presence of a number of unobservable variables that cannot be easily integrated out of the likelihood. In order to avoid integration, various statistical methods have been employed to obtain meaningful parameter estimates. Conditional maximum likelihood, suggested by both Jolly and Seber, has become the standard method. Two new parameter estimation methods, applied to the Jolly-Seber Model D, are presented in this thesis. The first attempts to obtain maximum likelihood estimates after integrating all of the unobservable variables from the Jolly-Seber Model D likelihood. Most of the unobservable variables can be integrated analytically, but the variables dealing with the abundance of uncaptured individuals must be integrated numerically. A FORTRAN program was constructed to perform the numerical integration and search for MLEs using a combination of fixed quadrature and Newton's method. Since numerical integration tends to be very time-consuming, MLEs could only be obtained for capture-recapture studies with a small number of sampling periods. To test the validity of the MLEs, a simulation experiment obtained MLEs from simulated data for a wide variety of parameter values. Variance estimates for these MLEs were obtained using the Chapman-Robbins lower bound and used to construct 90% confidence intervals with approximately correct coverage; in cases with few recaptures, however, the MLEs performed poorly. In general, the MLEs performed well on a wide variety of simulated data sets, and the approach appears to be a valid tool for estimating population characteristics of open populations. The second new method employs the Gibbs sampler on an unintegrated and an integrated version of the Jolly-Seber Model D likelihood. For both versions, full conditional distributions are easily obtained for all parameters of interest, but sampling from these distributions is non-trivial. Two FORTRAN programs were developed to run the Gibbs sampler for the unintegrated and the integrated likelihoods, respectively. Means, medians, modes and variances were constructed from the resulting empirical posterior distributions and used for inference. Spectral density was used to construct a variance estimate for the posterior mean, and equal-tailed posterior density regions were calculated directly from the posterior distributions. A simulation experiment showed that these density regions also have approximately the proper coverage, provided that the capture probability is not too small. Convergence to a stationary distribution is explored for both versions of the likelihood; since convergence was often difficult to detect, a test of convergence was constructed by comparing two independent chains from both versions of the Gibbs sampler. Finally, an experiment compared these two new methods and the traditional conditional maximum likelihood estimates using data simulated from a capture-recapture experiment with four sampling periods. This experiment showed that there is little difference between the conditional maximum likelihood estimates and the 'true' maximum likelihood estimates when the population size is large. A second simulation experiment, conducted to determine which of the three estimation methods provided the 'best' estimators, was largely inconclusive, as no single method routinely outperformed the others.
15

Liu, Liang. „Reconstructing posterior distributions of a species phylogeny using estimated gene tree distributions“. Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1155754980.

16

Luo, Shihua. „Bayesian Estimation of Small Proportions Using Binomial Group Test“. FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/744.

Abstract:
Group testing has long been considered a safe and sensible alternative to one-at-a-time testing in applications where the prevalence rate p is small. In this thesis, we apply a Bayesian approach to estimate p using Beta-type prior distributions. First, we derive two Bayes estimators of p from a prior on p, based on two different loss functions. Second, we present two more Bayes estimators of p from a prior on π, again according to two loss functions. We also display credible and HPD intervals for p. In addition, intensive numerical studies show that the Bayes estimator is preferred over the usual maximum likelihood estimator (MLE) for small p. We also present the optimal β for different p, m, and k.
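A minimal grid sketch of the setting described above: m pools of size k are tested, x pools come back positive, and a Beta prior is placed on the individual prevalence p. The prior parameters and data are assumptions, not the thesis's values.

```python
# Grid posterior for the individual prevalence p under binomial group testing:
# m pools of size k, x positive pools, Beta(a, b) prior on p.
import numpy as np
from scipy import stats

m, k, x = 50, 10, 8           # pools, pool size, positive pools
a, b = 1.0, 9.0               # Beta prior favouring small p

p_grid = np.linspace(1e-6, 0.2, 4000)
pi_grid = 1.0 - (1.0 - p_grid) ** k            # P(pool tests positive | p)
post = stats.binom.pmf(x, m, pi_grid) * stats.beta.pdf(p_grid, a, b)
post /= np.trapz(post, p_grid)                 # normalise on the grid

print("posterior mean of p:", np.trapz(p_grid * post, p_grid))
```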
17

Shamseldin, Elizabeth C. Smith Richard L. „Asymptotic multivariate kriging using estimated parameters with bayesian prediction methods for non-linear predictands“. Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2008. http://dc.lib.unc.edu/u?/etd,1515.

Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2008.
Title from electronic title page (viewed Sep. 16, 2008). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Statistics and Operations Research Statistics." Discipline: Statistics and Operations Research; Department/School: Statistics and Operations Research.
18

Khan, Mohammad Sajjad Coulibaly Paulin. „Climate change impact study on water resources with uncertainty estimates using Bayesian neural network“. *McMaster only, 2006.

19

Arruda, Gustavo. „DSGE model with banking sector for emerging economies: estimated using Bayesian methodology for Brazil“. reponame:Repositório Institucional do FGV, 2013. http://hdl.handle.net/10438/10574.

Abstract:
Emerging economies face significant credit constraints compared to advanced economies, yet dynamic stochastic general equilibrium (DSGE) models for emerging economies still need to advance on this discussion. We propose a DSGE model intended to represent an emerging economy with a banking sector, based on Gerali et al. (2010). Our contribution is to consider a share of expected annual earnings as collateral for the impatient households' loans. We estimate the proposed model for Brazil using Bayesian techniques and find that economies with this type of collateral restriction tend to feel the impact of monetary policy shocks more rapidly, owing to the banking sector's exposure to changes in expected wages.
20

Breuss, Fritz, und Katrin Rabitsch. „An estimated two-country DSGE model of Austria and the Euro Area“. Europainstitut, WU Vienna University of Economics and Business, 2008. http://epub.wu.ac.at/558/1/document.pdf.

Abstract:
We present a two-country New Open Economy Macro model of the Austrian economy within the European Union's Economic and Monetary Union (EMU). The model includes both nominal and real frictions that have proven to be important in matching business cycle facts, and it allows for an investigation of the effects and cross-country transmission of a number of structural shocks: shocks to technologies, shocks to preferences, cost-push-type shocks and policy shocks. The model is estimated using Bayesian methods on quarterly data covering the period 1976:Q1-2005:Q1. In addition to assessing the relative importance of various shocks, the model also allows us to investigate the effects of the monetary regime switch with the final stage of the EMU and how far this has altered macroeconomic transmission. We find that Austria's economy appears to react more strongly to demand shocks, while in the rest of the Euro Area supply shocks have a stronger impact. Comparing estimations on pre-EMU and EMU subsamples, we find that the contribution of (rest of the) Euro Area shocks to Austria's business cycle fluctuations has increased significantly. (Author's abstract.)
Series: EI Working Papers / Europainstitut
21

Som, Agniva. „Paradoxes and Priors in Bayesian Regression“. The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1406197897.

22

Vesper, Andrew Jay. „Three Essays of Applied Bayesian Modeling: Financial Return Contagion, Benchmarking Small Area Estimates, and Time-Varying Dependence“. Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:10912.

Abstract:
This dissertation is composed of three chapters, each an application of Bayesian statistical models to particular research questions. In Chapter 1, we evaluate systemic risk exposure of financial institutions. Building upon traditional regime switching approaches, we propose a network model for volatility contagion to assess linkages between institutions in the financial system. Focusing empirical analysis on the financial sector, we find that network connectivity has dynamic properties, with linkages between institutions increasing immediately before the recent crisis. Out-of-sample forecasts demonstrate the ability of the model to predict losses during distress periods. We find that institutional exposure to crisis events depends upon the structure of linkages, not strictly the number of linkages. In Chapter 2, we develop procedures for benchmarking small area estimates. In sample surveys, precision can be increased by introducing small area models which "borrow strength" by incorporating auxiliary covariate information. One consequence of using small area models is that small area estimates at lower geographical levels typically will not aggregate to the estimate at the corresponding higher geographical levels. Benchmarking is the statistical procedure for reconciling these differences. Two new approaches to Bayesian benchmarking are introduced, one procedure based on Minimum Discrimination Information, and another for Bayesian self-consistent conditional benchmarking. Notably the proposed procedures construct adjusted posterior distributions whose moments all satisfy benchmarking constraints. In the context of the Fay-Herriot model, simulations are conducted to assess benchmarking performance. In Chapter 3, we exploit the Pair Copula Construction (PCC) to develop a flexible multivariate model for time-varying dependence. The PCC is an extremely flexible model for capturing complex, but static, multivariate dependency. We use a Bayesian framework to extend the PCC to account for time dynamic dependence structures. In particular, we model the time series of a transformation of parameters of the PCC as an autoregressive model, conducting inference using a Markov Chain Monte Carlo algorithm. We use financial data to illustrate empirical evidence for the existence of time dynamic dependence structures, show improved out-of-sample forecasts for our time dynamic PCC, and assess performance of dynamic PCC models for forecasting Value-at-Risk.
Statistics
23

Souza, Isaac Jales Costa. „Estimação bayesiana no modelo potência normal bimodal assimétrico“. PROGRAMA DE PÓS-GRADUAÇÃO EM MATEMÁTICA APLICADA E ESTATÍSTICA, 2016. https://repositorio.ufrn.br/jspui/handle/123456789/21722.

Abstract:
This work presents a Bayesian approach to the bimodal power-normal (BPN) and bimodal asymmetric power-normal (BAPN) models. First, we present the BPN model and specify non-informative and informative priors for the parameter that concentrates bimodality (α). We obtain the posterior distribution by MCMC, checking the feasibility of its use through a convergence diagnostic. We then use different informative priors for α and perform a sensitivity analysis to evaluate the effect of varying the hyperparameters on the posterior distribution. A simulation evaluates the performance of the Bayesian estimator under informative priors: the posterior-mode estimate generally gave better results in terms of mean squared error and percentage bias than the maximum likelihood estimator. An application with real bimodal data is performed, and the linear regression model with BPN errors is introduced. For the BAPN model we likewise specify informative and non-informative priors for the bimodality and asymmetry parameters, run convergence diagnostics for the MCMC method used to obtain the posterior distribution, perform a sensitivity analysis, apply the model to real data, and introduce the linear regression model with BAPN errors.
24

Ballesta, Artero Irene Maria. „Influence of the Estimator Selection in Scalloped Hammerhead Shark Stock Assessment“. Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/24819.

Abstract:
In the natural sciences, the frequentist paradigm has led statistical practice; however, the Bayesian approach has been gaining strength in recent decades. Our study assessed the scalloped hammerhead shark population of the western North Atlantic Ocean using Bayesian methods. This approach allowed us to incorporate diverse types of error in the surplus production model and to compare the influence of different statistical estimators on the values of the key parameters (r, growth rate; K, carrying capacity; depletion; FMSY, the fishing level that would sustain maximum yield; and NMSY, the abundance at maximum sustainable yield). Furthermore, we considered multi-level priors because of the variety of published results on this species' population growth rate. Our research showed that estimator selection influences the results of the surplus production model and therefore the values of the target management points. Based on key parameter estimates with uncertainty and the Deviance Information Criterion, we suggest that state-space Bayesian models be used for assessing scalloped hammerhead shark and other fish stocks with poor data. This study found the population to be overfished and experiencing overfishing. Therefore, based on our research, and since there was very little evidence of recovery in the most recent data available, we suggest prohibiting fishing for this species because: (1) it is highly depleted (14% of its initial population); (2) the fishery status is very unstable over time; (3) its low reproductive rate contributes to a higher risk of overexploitation; and (4) it is easy to misidentify among the different hammerhead sharks (smooth, great, scalloped and cryptic species).
Master of Science
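For orientation, a generic state-space Schaefer surplus production model of the kind estimated in such assessments (notation assumed here, not taken from the thesis), with biomass B_t, catch C_t, relative-abundance index I_t, catchability q, process error η_t and observation error ε_t:

```latex
\begin{align*}
B_{t+1} &= \left[\,B_t + r B_t\left(1 - \frac{B_t}{K}\right) - C_t\,\right] e^{\eta_t}, \qquad
I_t = q\,B_t\,e^{\varepsilon_t},\\
\mathit{MSY} &= \frac{rK}{4}, \qquad F_{\mathit{MSY}} = \frac{r}{2}, \qquad B_{\mathit{MSY}} = \frac{K}{2}.
\end{align*}
```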
25

Zhong, Jinquan. „Seismic fragility estimates for corroded reinforced concrete bridge structures with two-column bents“. [College Station, Tex. : Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-3143.

26

Wu, Yi-Fang. „Accuracy and variability of item parameter estimates from marginal maximum a posteriori estimation and Bayesian inference via Gibbs samplers“. Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/5879.

Abstract:
Item response theory (IRT) uses a family of statistical models for estimating stable characteristics of items and examinees and defining how these characteristics interact in describing item and test performance. With a focus on the three-parameter logistic IRT (Birnbaum, 1968; Lord, 1980) model, the current study examines the accuracy and variability of the item parameter estimates from the marginal maximum a posteriori estimation via an expectation-maximization algorithm (MMAP/EM) and the Markov chain Monte Carlo Gibbs sampling (MCMC/GS) approach. In the study, the various factors which have an impact on the accuracy and variability of the item parameter estimates are discussed, and then further evaluated through a large scale simulation. The factors of interest include the composition and length of tests, the distribution of underlying latent traits, the size of samples, and the prior distributions of discrimination, difficulty, and pseudo-guessing parameters. The results of the two estimation methods are compared to determine the lower limit--in terms of test length, sample size, test characteristics, and prior distributions of item parameters--at which the methods can satisfactorily recover item parameters and efficiently function in reality. For practitioners, the results help to define limits on the appropriate use of the BILOG-MG (which implements MMAP/EM) and also, to assist in deciding the utility of OpenBUGS (which carries out MCMC/GS) for item parameter estimation in practice.
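The three-parameter logistic model at the centre of this study gives the probability of a correct response to item i as a function of ability θ, with discrimination a_i, difficulty b_i, pseudo-guessing c_i, and the usual scaling constant D (1.7 or 1):

```latex
\begin{equation*}
P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-D a_i (\theta - b_i)}}
\end{equation*}
```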
27

Leotti, Vanessa Bielefeldt. „Modelos bayesianos para estimar risco relativo em desfechos binários e politômicos“. reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/80066.

Abstract:
The odds ratio (OR) and the relative risk (RR) are measures of association used in epidemiology. There is debate about the disadvantages of the OR as a measure of association in prospective studies, where the RR should be used instead, especially if the outcome is common (>10%). For binary outcomes and independent data, alternatives to the OR estimated by logistic regression have been proposed. One is the log-binomial model; another is Poisson regression with robust variance. Such models make it possible to identify factors associated with the outcome and to estimate the probability of the event for each observational unit. Regarding the estimation of probabilities, robust Poisson regression has the disadvantage of possibly estimating probabilities greater than 1. This does not occur with the log-binomial model, which, however, can face convergence problems. Some authors recommend the log-binomial model as the first choice, leaving robust Poisson regression for situations where the first model does not converge. In 2010, a Bayesian methodology was proposed as a way to solve the convergence problems, with simulations comparing it to the previous approaches. However, those simulations had limitations: categorical predictors were not considered; only one sample size was evaluated; only the median and the equal-tailed credible interval were addressed in the Bayesian approach, when other options exist; and, chiefly, the comparative measures were calculated for the model coefficients rather than for the RR. In this thesis these limitations are overcome, and another Bayesian estimator of the RR, the posterior mode, showed less bias and mean squared error in general. The models above are suitable for independent observations, but there are cases where this assumption is not valid, as in cluster-randomized trials or multilevel modeling. Only five papers were found proposing how to estimate the RR in these cases, and only two studies offered suggestions for estimating the RR with polytomous outcomes. In this work, the Bayesian methodology proposed for binary outcomes and independent data was extended to deal with both situations.
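A rough sketch of the two frequentist alternatives discussed above, on simulated cohort data with a true RR of 1.8. The calls are standard statsmodels usage, but treat the snippet as illustrative rather than the thesis's code; on less favourable data the log-binomial fit may fail to converge, which is exactly the problem that motivates the Bayesian approach.

```python
# Log-binomial GLM vs. Poisson GLM with robust (sandwich) errors on simulated
# cohort data; both model log P(Y=1 | X), so exp(coef) is a relative risk.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
exposure = rng.integers(0, 2, n)
p = 0.20 * np.where(exposure == 1, 1.8, 1.0)       # true RR = 1.8
y = rng.binomial(1, p)
X = sm.add_constant(exposure)

log_binomial = sm.GLM(y, X, family=sm.families.Binomial(sm.families.links.Log())).fit()
robust_poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")

print("log-binomial RR estimate:", np.exp(log_binomial.params[1]))
print("robust-Poisson RR estimate:", np.exp(robust_poisson.params[1]))
```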
28

Devulder, Antoine. „Involuntary unemployment and financial frictions in estimated DSGE models“. Thesis, Paris 1, 2016. http://www.theses.fr/2016PA01E016/document.

Abstract:
Thanks to their internal consistency, DSGE models, built on microfounded behavior of economic agents, have become prevalent for business-cycle and policy analysis in institutions. The recent financial crisis and governments' concern about persistently high unemployment advocate for mechanisms capturing imperfect adjustments in credit and labor markets. However, popular models such as that of Smets and Wouters (2003-2007), although unsophisticated in their representation of these markets, are able to replicate the data as well as usual econometric tools. It is thus necessary to question the benefits of including these frictions in theoretical models for operational use. In this thesis, I address this issue and show that microfounded mechanisms specific to labor and credit markets can significantly alter the conclusions based on an estimated DSGE model, from both a positive and a normative perspective. For this purpose, I build a two-country model of France and the rest of the euro area, with exogenous rest-of-the-world variables, and estimate it with and without these two frictions using Bayesian techniques. By contrast with existing models, I propose two improvements to the representation of labor markets. First, following Pissarides (2009), only wages in new jobs are negotiated by firms and workers, engendering stickiness in the average real wage. Second, I develop a set of assumptions to make labor market participation endogenous and unemployment involuntary, in the sense that unemployed workers are worse off than employed ones; including this setup in the estimated model is left for future research. Using the four estimated versions of the model, I undertake a number of analyses to highlight the role of financial and labor market frictions: a historical shock decomposition of fluctuations during the crisis, the evaluation of several monetary policy rules, a counterfactual simulation of the crisis under the assumption of a flexible exchange-rate regime between France and the rest of the euro area and, lastly, the simulation of social VAT scenarios.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Otieno, Wilkistar. „A Framework for Determining the Reliability of Nanoscale Metallic Oxide Semiconductor (MOS) Devices“. Scholar Commons, 2010. http://scholarcommons.usf.edu/etd/3499.

Der volle Inhalt der Quelle
Annotation:
An increase in worldwide investments during the past several decades has propelled scientific breakthroughs in nanoscience and technology research to new and exciting levels. To ensure that these discoveries lead to commercially viable products, it is important to address some of the fundamental engineering and scientific challenges related to nanodevices. Due to the centrality of reliability to product integrity, nanoreliability requires critical analysis and understanding to ensure long-term sustainability of nanodevices and systems. In this study, we construct a reliability framework for nanoscale dielectric films used in Metallic Oxide Semiconductor (MOS) devices. The successful fabrication and incorporation of metallic oxides in MOS devices was a major milestone in the electronics industry. However, with the progressive scaling of transistors, the dielectric dimension has progressively decreased to about 2nm. This reduction has had severe reliability implications and challenges including short channeling effects and leakage currents due to quantum-mechanical tunneling, which leads to increased power dissipation and eventually temperature-related gate degradation. We develop a framework to characterize and model the reliability of recently developed gate dielectrics of Si-MOS devices. We accomplish this through the following research steps: (i) the identification of the failure mechanisms of Si-based high-k gates (stress, material, environmental), (ii) the development of a 3-D failure simulation as a way to acquire simulated failure data, and (iii) the identification of the dielectric failure probability structure using both kernel estimation and nonparametric Bayesian schemes, so as to establish the life profile of high-k gate dielectrics. The goal is to eventually develop an appropriate failure extrapolation model to relate the reliability at test conditions to the reliability at normal use conditions. This study provides modeling and analytical clarity regarding the inherent failure characteristics, and hence the reliability, of metal/high-k gate stacks of Si-based substrates. In addition, this research will assist manufacturers to optimally characterize, predict and manage the reliability of metal high-k gate substrates. The proposed reliability framework could be extended to other thin film devices and eventually to other nanomaterials and devices.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Costa, Eliardo Guimarães da. „Tamanho amostral para estimar a concentração de organismos em água de lastro: uma abordagem bayesiana“. Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-05072018-164225/.

Der volle Inhalt der Quelle
Annotation:
Sample size methodologies for estimating the organism concentration in ballast water and for verifying international standards are developed under a Bayesian approach. We consider the criteria of average coverage, of average length and of total cost minimization under the Poisson model with a gamma prior distribution and under the negative binomial model with a Pearson type VI prior distribution. Furthermore, we consider a Dirichlet process as a prior distribution in the Poisson model in order to gain flexibility and robustness. For practical applications, we implemented computational routines using the R language.
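A minimal sketch of the average-length criterion under the Poisson model with a gamma prior (the prior parameters, target length and credibility level below are arbitrary placeholders, not values from the thesis, and the thesis routines are in R while this sketch is Python):

import numpy as np
from scipy import stats

a, b = 2.0, 0.5        # gamma prior on the concentration: shape a, rate b (assumed)
lmax = 5.0             # target average credible-interval length (assumed)
cred = 0.95

def avg_length(n, nsim=4000, rng=np.random.default_rng(0)):
    lam = rng.gamma(a, 1.0/b, nsim)          # draw concentrations from the prior
    s = rng.poisson(lam*n)                   # prior-predictive counts in n volume units
    # posterior is gamma(a + s, rate b + n); average the credible-interval length
    lo = stats.gamma.ppf((1 - cred)/2, a + s, scale=1.0/(b + n))
    hi = stats.gamma.ppf(1 - (1 - cred)/2, a + s, scale=1.0/(b + n))
    return np.mean(hi - lo)

n = 1
while avg_length(n) > lmax:                  # smallest n meeting the criterion
    n += 1
print("required sample size:", n)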
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

He, Qing. „Investigating the performance of process-observation-error-estimator and robust estimators in surplus production model: a simulation study“. Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/76859.

Der volle Inhalt der Quelle
Annotation:
This study investigated the performance of three estimators of the surplus production model: the process-observation-error estimator with normal distribution (POE_N), the observation-error estimator with normal distribution (OE_N), and the process-error estimator with normal distribution (PE_N). Estimators with fat-tailed distributions, namely Student's t and Cauchy, were also proposed, and their performance was compared with that of the estimators with normal distributions. The study used a Bayesian approach: the revised Metropolis-Hastings within Gibbs sampling algorithm (MHGS) previously used to fit POE_N (Millar and Meyer, 2000) was extended to the other estimators, and methodologies were developed that enable all the estimators to handle data containing multiple indices based on catch-per-unit-effort (CPUE). A simulation study was conducted based on parameter estimates from two example fisheries: the Atlantic weakfish (Cynoscion regalis) and the southern stock of black sea bass (Centropristis striata). Our results indicated that POE_N performed best among all six estimators with regard to both accuracy and precision in most cases; it is also robust to outliers, atypical values, and autocorrelated errors. OE_N is the second-best estimator, while PE_N is often imprecise. Estimators with fat-tailed distributions usually produce some estimates that are more biased than those of estimators with normal distributions. The performance of POE_N and OE_N can be improved by fitting multiple indices. Our study suggests that POE_N be used for population dynamics models in future stock assessments, that multiple indices from valid surveys be incorporated into stock assessment models, and that OE_N be considered when multiple indices are available.
Master of Science
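The state-space POE_N estimator with MHGS is beyond a short sketch, but the observation-error estimator (OE_N) of a Schaefer surplus production model can be illustrated in a few lines; everything below is simulated toy data, not the weakfish or black sea bass assessments:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
r, K, q, B0 = 0.4, 1000.0, 0.002, 900.0          # "true" values used only to simulate
C = rng.uniform(30, 90, 20)                      # annual catches (toy data)

def biomass_series(r, K, B0, C):
    B = np.empty(C.size + 1); B[0] = B0
    for t in range(C.size):                      # Schaefer surplus production dynamics
        B[t+1] = max(B[t] + r*B[t]*(1 - B[t]/K) - C[t], 1e-3)
    return B[:-1]

I = q*biomass_series(r, K, B0, C)*rng.lognormal(0.0, 0.1, C.size)   # observed CPUE index

def nll(theta):                                  # lognormal observation error only (OE_N)
    rr, KK, BB0, qq, sig = np.exp(theta)         # log-scale keeps parameters positive
    resid = np.log(I) - np.log(qq*biomass_series(rr, KK, BB0, C))
    return I.size*np.log(sig) + np.sum(resid**2)/(2*sig**2)

start = np.log([0.3, 800.0, 800.0, 0.001, 0.2])
fit = minimize(nll, start, method="Nelder-Mead", options={"maxiter": 10000})
print("r, K, B0, q, sigma:", np.exp(fit.x))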
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Heywood, Ben. „Investigations into the use of quantified Bayesian maximum entropy methods to generate improved distribution maps and biomass estimates from fisheries acoustic survey data /“. St Andrews, 2008. http://hdl.handle.net/10023/512.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Patel, Ekta, Gurtina Besla und Kaisey Mandel. „Orbits of massive satellite galaxies - II. Bayesian estimates of the Milky Way and Andromeda masses using high-precision astrometry and cosmological simulations“. OXFORD UNIV PRESS, 2017. http://hdl.handle.net/10150/624428.

Der volle Inhalt der Quelle
Annotation:
In the era of high-precision astrometry, space observatories like the Hubble Space Telescope (HST) and Gaia are providing unprecedented 6D phase-space information of satellite galaxies. Such measurements can shed light on the structure and assembly history of the Local Group, but improved statistical methods are needed to use them efficiently. Here we illustrate such a method using analogues of the Local Group's two most massive satellite galaxies, the Large Magellanic Cloud (LMC) and Triangulum (M33), from the Illustris dark-matter-only cosmological simulation. We use a Bayesian inference scheme combining measurements of positions, velocities and specific orbital angular momenta (j) of the LMC/M33 with importance sampling of their simulated analogues to compute posterior estimates of the Milky Way (MW) and Andromeda (M31) halo masses. We conclude that the resulting host halo mass is more susceptible to bias when using measurements of the current position and velocity of satellites, especially when satellites are at short-lived phases of their orbits (i.e. at pericentre). Instead, the j value of a satellite is well conserved over time and provides a more reliable constraint on host mass. The inferred virial mass of the MW (M31) using j of the LMC (M33) is M_vir,MW = 1.02 (+0.77/-0.55) x 10^12 M_⊙ (M_vir,M31 = 1.37 (+1.39/-0.75) x 10^12 M_⊙). Choosing simulated analogues whose j values are consistent with the conventional picture of a previous (<3 Gyr ago), close encounter (<100 kpc) of M33 about M31 results in a very low virial mass for M31 (~10^12 M_⊙). This supports the new scenario put forth in Patel, Besla & Sohn, wherein M33 is on its first passage about M31 or on a long-period orbit. We conclude that this Bayesian inference scheme, utilizing satellite j, is a promising method to reduce the current factor-of-two spread in the mass range of the MW and M31. This method is easily adaptable to include additional satellites as new 6D phase-space information becomes available from HST, Gaia and the James Webb Space Telescope.
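A schematic of the importance-sampling step described above, assuming a mock catalogue of simulated host analogues and a made-up j measurement (the mass-j scaling and all numbers are placeholders, not Illustris or HST values):

import numpy as np
rng = np.random.default_rng(2)

# mock catalogue of simulated analogues: host virial mass and satellite j
M = rng.lognormal(np.log(1.2e12), 0.5, 20000)             # M_vir in solar masses
j = 1.5e4*(M/1e12)**0.5*rng.lognormal(0.0, 0.3, M.size)   # toy j-mass scaling

j_obs, j_err = 1.8e4, 2.5e3                 # "measured" satellite j and its uncertainty
w = np.exp(-0.5*((j - j_obs)/j_err)**2)     # importance weights ∝ Gaussian likelihood
w /= w.sum()

post_mean = np.sum(w*M)                     # posterior mean host mass
draws = M[rng.choice(M.size, 5000, p=w)]    # weighted resample for interval estimates
lo, hi = np.percentile(draws, [16, 84])
print(f"M_vir ≈ {post_mean:.2e} (+{hi - post_mean:.2e} / -{post_mean - lo:.2e})")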
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Almeida, Josemir Ramos de. „Estimação clássica e Bayesiana em modelos de sobrevida com fração de cura“. Universidade Federal do Rio Grande do Norte, 2013. http://repositorio.ufrn.br:8080/jspui/handle/123456789/17012.

Der volle Inhalt der Quelle
Annotation:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
In survival analysis, long-duration models allow estimation of the cure fraction, which represents the portion of the population immune to the event of interest. Here we address classical and Bayesian estimation based on mixture models and promotion time models, using different distributions (exponential, Weibull and Pareto) to model failure time. The database used to illustrate the implementations is described in Kersey et al. (1987) and consists of a group of leukemia patients who underwent a certain type of transplant. The specific implementations used were numerical optimization by BFGS as implemented in R (base::optim), Laplace approximation (own implementation) and Gibbs sampling as implemented in WinBUGS. We describe the main features of the models used, the estimation methods and the computational aspects, and we also discuss how different prior information can affect the Bayesian estimates.
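A minimal sketch of the standard mixture cure model with exponential failure times, fitted here by maximum likelihood on simulated data (the thesis additionally covers Weibull and Pareto latencies, promotion time models, and the Bayesian fits described above):

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(3)
n, pi_true, lam_true, tau = 400, 0.35, 0.25, 15.0   # tau = administrative censoring time

cured = rng.uniform(size=n) < pi_true               # immune fraction never fails
t_event = rng.exponential(1/lam_true, n)
t = np.where(cured, tau, np.minimum(t_event, tau))  # observed follow-up times
d = (~cured) & (t_event <= tau)                     # event indicator

def nll(theta):
    pi = expit(theta[0]); lam = np.exp(theta[1])    # cure fraction, exponential rate
    S = np.exp(-lam*t)
    ll = np.where(d, np.log1p(-pi) + np.log(lam) - lam*t,   # observed failures
                  np.log(pi + (1 - pi)*S))                  # censored: cured or late failure
    return -ll.sum()

fit = minimize(nll, np.array([0.0, np.log(0.2)]), method="Nelder-Mead")
print("cure fraction:", expit(fit.x[0]), "rate:", np.exp(fit.x[1]))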
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Ilangakoon, Nayani Thanuja. „Relationship between leaf area index (LAI) estimated by terrestrial LiDAR and remotely sensed vegetation indices as a proxy to forest carbon sequestration“. Bowling Green State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1402857524.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Fässler, Sascha M. M. „Target strength variability in Atlantic herring (Clupea harengus) and its effect on acoustic abundance estimates“. Thesis, University of St Andrews, 2010. http://hdl.handle.net/10023/1703.

Der volle Inhalt der Quelle
Annotation:
Acoustic survey techniques are widely used to quantify abundance and distribution of a variety of pelagic fish such as herring (Clupea harengus). The information provided is becoming increasingly important for stock assessment and ecosystem studies; however, the data collected are used as relative indices rather than absolute measures, due to the uncertainty of target strength (TS) estimates. A fish's TS is a measure of its capacity to reflect sound, and therefore the TS value directly influences the estimate of abundance from an acoustic survey. The TS is a stochastic variable, dependent on a range of factors such as fish size, orientation, shape, physiology, and acoustic frequency. However, estimates of mean TS, used to convert echo energy data from acoustic surveys into numbers of fish, are conveniently derived from a single metric: the fish length (L). The TS used for herring is based on TS-L relationships derived from a variety of experiments on dead and caged fish conducted 25-30 years ago. Recently, theoretical models of fish backscatter have been proposed as an alternative basis for exploring fish TS. Another problem encountered during acoustic surveys is the identification of the insonified organisms. Trawl samples are commonly collected for identification purposes; however, there are several selectivity issues associated with this method that may translate directly into biased acoustic abundance estimates. The use of different acoustic frequencies has been recognised as a useful tool to distinguish between species, based on their sound reflection properties at low and high frequencies. In this study I developed theoretical models to describe the backscatter of herring at multiple frequencies. Data collected at four frequencies (18, 38, 120 and 200 kHz) during standard acoustic surveys for herring in the North Sea were examined and compared to model results. Multifrequency backscattering characteristics of herring were described and compared to those of Norway pout, a species also present in the survey area, and species discrimination was attempted based on differences in backscatter at the different frequencies. I examined swimbladder morphology data of Baltic and Atlantic herring and of sprat from the Baltic Sea. Based on these data, I modelled the acoustic backscatter of both herring stocks and attempted to explain differences previously observed in empirical data. I investigated the change in swimbladder shape of herring exposed to increased water pressure at greater depths, by producing true shapes of swimbladders from MRI scans of herring under pressure. The 3-D swimbladder morphology representations were used to model the acoustic backscatter at a range of frequencies and water pressures. I developed a probabilistic TS model of herring in a Bayesian framework to account for the uncertainty associated with TS. The most likely distributions of model parameters were determined by fitting the model to in situ data. The resulting probabilistic TS was used to produce distributions of absolute abundance and biomass estimates, which were compared to official results from the ICES North Sea herring stock assessment. Modelled backscatter levels of herring from the Baltic Sea were on average 2.3 dB higher than those of herring living in northeast Atlantic waters. This was attributed to differences in swimbladder size between the two stocks, due to the lower salinity of the Baltic Sea compared to Atlantic waters: swimbladders of Baltic herring need to be bigger to achieve a given degree of buoyancy. Morphological swimbladder dimensions of Baltic herring and sprat were found to differ; herring had a significantly larger swimbladder height at a given length than sprat, resulting in a modelled TS that was on average 1.2 dB stronger. Water depth, and therefore the increase in ambient pressure, was found to have a considerable effect on the size and shape of the herring swimbladder. Modelled TS values were around 3 dB weaker at a depth of 50 m than in surface waters; at 200 m, this difference was estimated to be about 5 dB. The Bayesian model predicted mean abundance and biomass 23% and 55% higher, respectively, than the ICES estimates. The discrepancy was linked to the depth-dependency of the TS model and the particular size-dependent bathymetric distribution of herring in the survey area.
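The core conversion from mean TS to abundance, and the effect of treating TS probabilistically rather than as a point value, can be sketched as follows; the TS-L form TS = 20 log10(L) + b20 with b20 = -71.2 dB is the value conventionally used for herring, while the NASC figure, survey area and the 1.5 dB spread are placeholders:

import numpy as np
rng = np.random.default_rng(4)

L = 25.0          # mean fish length, cm
nasc = 500.0      # nautical area scattering coefficient, m^2 nmi^-2 (toy value)
area = 1.0e4      # surveyed area, nmi^2 (toy value)

# deterministic conversion with a single mean TS
b20 = -71.2                                 # assumed TS-L intercept for herring (dB)
ts = 20*np.log10(L) + b20
sigma_bs = 10**(ts/10)                      # backscattering cross-section, m^2
density = nasc/(4*np.pi*sigma_bs)           # fish per nmi^2
print("point abundance:", density*area)

# probabilistic version: put a distribution on TS and propagate it to abundance
ts_draws = rng.normal(ts, 1.5, 10000)       # 1.5 dB spread is a placeholder
abund = nasc/(4*np.pi*10**(ts_draws/10))*area
print("median and 90% interval:", np.percentile(abund, [50, 5, 95]))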
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Silva, Karolina Barone Ribeiro da. „Estimativas de máxima verosimilhança e bayesianas do número de erros de um software“. Universidade Federal de São Carlos, 2006. https://repositorio.ufscar.br/handle/ufscar/4498.

Der volle Inhalt der Quelle
Annotation:
In this work we present the capture-recapture methodology, under classical and Bayesian approaches, to estimate the number of errors in a software product through inspection by distinct reviewers. We present the general statistical model assuming independence among errors and among reviewers, and consider the particular cases of equally detectable errors (homogeneous) with not equally efficient reviewers (heterogeneous), and of not equally detectable errors (heterogeneous) with equally efficient reviewers (homogeneous). Then, under the assumption of independence and heterogeneity among errors and independence and homogeneity among reviewers, we suppose that the heterogeneity of the errors is expressed by classifying them as easy or difficult to detect, with the detection probabilities of an easy error and of a difficult error assumed known. Finally, under the hypothesis of independence and homogeneity among errors, we present a new model considering heterogeneity and dependence among reviewers. Examples with simulated and real data are presented.
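For two reviewers under homogeneity and independence, the classical capture-recapture point estimate of the total number of errors reduces to the Lincoln-Petersen form; a small sketch with hypothetical inspection counts:

n1, n2, m = 30, 25, 12                    # errors found by reviewer 1, reviewer 2, and by both
N_lp = n1*n2/m                            # Lincoln-Petersen estimate of the total error count
N_ch = (n1 + 1)*(n2 + 1)/(m + 1) - 1      # Chapman's bias-corrected variant
var_ch = (n1 + 1)*(n2 + 1)*(n1 - m)*(n2 - m)/((m + 1)**2*(m + 2))
remaining = N_ch - (n1 + n2 - m)          # estimated errors still undetected
print(N_lp, N_ch, var_ch**0.5, remaining)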
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Bruzzone, Andrea. „P-SGLD : Stochastic Gradient Langevin Dynamics with control variates“. Thesis, Linköpings universitet, Statistik och maskininlärning, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-140121.

Der volle Inhalt der Quelle
Annotation:
Year after year, the amount of data that we continuously generate is increasing. When this situation first arose, the main challenge was to find a way to store the huge quantity of information; nowadays, with the increasing availability of storage facilities, that problem is solved, but it leaves a new issue to deal with: finding tools that allow us to learn from these large data sets. In this thesis, a framework for Bayesian learning with the ability to scale to large data sets is studied. We present the Stochastic Gradient Langevin Dynamics (SGLD) framework and show that in some cases its approximation of the posterior distribution is quite poor. One reason is that SGLD estimates the gradient of the log-likelihood with high variability due to naïve subsampling. Our approach combines accurate proxies for the gradient of the log-likelihood with SGLD. We show that it produces better results in terms of convergence to the correct posterior distribution than standard SGLD, since the accurate proxies dramatically reduce the variance of the gradient estimator. Moreover, we demonstrate that this approach is more efficient than the standard Markov chain Monte Carlo (MCMC) method and that it outperforms other variance-reduction techniques proposed in the literature, such as the SAGA-LD algorithm, which also uses control variates to improve SGLD and therefore allows a direct comparison with our approach. We apply the method to the logistic regression model.
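A minimal sketch of SGLD with control variates centred at the posterior mode, for Bayesian logistic regression on simulated data (step size, batch size and iteration counts are arbitrary choices; this is the generic construction, not the thesis code):

import numpy as np
from scipy.special import expit
from scipy.optimize import minimize

rng = np.random.default_rng(5)
N, d, batch = 10000, 3, 100
X = rng.normal(size=(N, d))
beta_true = np.array([1.0, -2.0, 0.5])
y = rng.binomial(1, expit(X @ beta_true))

def nlp(th):                                   # negative log posterior, N(0, I) prior
    z = X @ th
    return -(y @ z - np.logaddexp(0.0, z).sum()) + 0.5*th @ th
def nlp_grad(th):
    return -(X.T @ (y - expit(X @ th))) + th

mode = minimize(nlp, np.zeros(d), jac=nlp_grad, method="L-BFGS-B").x
g_mode = X.T @ (y - expit(X @ mode))           # full-data likelihood gradient at the mode

eps, T = 5e-5, 5000
th, samples = mode.copy(), []
for t in range(T):
    idx = rng.choice(N, batch, replace=False)
    gi = (y[idx] - expit(X[idx] @ th))[:, None]*X[idx]       # per-example grads at th
    gi0 = (y[idx] - expit(X[idx] @ mode))[:, None]*X[idx]    # ... and at the mode
    g_hat = g_mode + (N/batch)*(gi.sum(0) - gi0.sum(0))      # control-variate estimator
    th = th + 0.5*eps*(g_hat - th) + rng.normal(0.0, np.sqrt(eps), d)  # SGLD step
    samples.append(th.copy())
print("posterior mean:", np.mean(samples[1000:], axis=0))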
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Justino, Josivan Ribeiro. „Estimativas de mortalidade para a região nordeste do Brasil em 2010: uma associação do método demográfico equação geral de balanceamento, com o estimador bayesiano empírico“. Universidade Federal do Rio Grande do Norte, 2013. http://repositorio.ufrn.br:8080/jspui/handle/123456789/13859.

Der volle Inhalt der Quelle
Annotation:
One of the greatest current challenges of demography is to obtain consistent estimates of mortality, especially in small areas. The lack of this information hinders public health actions and impairs the quality of death classification, generating concern among demographers and epidemiologists seeking reliable mortality statistics for the country. In this context, the objective of this work is to obtain death adjustment factors for correcting adult mortality, by state, meso-region and age group, in the northeastern region of Brazil in 2010. The proposal rests on two lines of observation, one demographic and one statistical, and considers two coverage levels in the states of the Northeast: the meso-regions as larger areas and the counties as small areas. The methodological principle is to use the General Growth Balance demographic method to correct the observed deaths in the larger areas (meso-regions) of the states, since these are less prone to violations of the method's assumptions. Next, the statistical empirical Bayes estimator is applied, taking as the total deaths in each meso-region the value corrected by the demographic method, and as the small-area reference the deaths observed in the counties. This combination yields a smoothing effect on the degree of death coverage, due to the association with the empirical Bayes estimator, and makes it possible to evaluate the degree of death coverage by age group at the county, meso-region and state levels, with the advantage of estimating adjustment factors at the desired level of aggregation. The results grouped by state point to a significant improvement in the degree of death coverage under the combined methods, with values above 80%: Alagoas (0.88), Bahia (0.90), Ceará (0.90), Maranhão (0.84), Paraíba (0.88), Pernambuco (0.93), Piauí (0.85), Rio Grande do Norte (0.89) and Sergipe (0.92). Advances in the control of registry information in the health system, together with improvements in socioeconomic conditions and in the urbanization of the counties over the last decade, have provided better-quality death registration in small areas.
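The empirical Bayes smoothing step can be sketched as a Poisson-gamma shrinkage of noisy county-level coverage ratios toward the meso-region value corrected by General Growth Balance; all numbers below are simulated placeholders:

import numpy as np
rng = np.random.default_rng(6)

# toy data: 50 counties in one meso-region whose GGB-corrected death rate is known
pop = rng.integers(5_000, 200_000, 50)
rate = 0.006                                 # corrected meso-region death rate (assumed)
cov_true = rng.beta(8, 2, 50)                # county completeness of death registration
deaths = rng.poisson(rate*pop*cov_true)      # registered deaths

expected = rate*pop                          # deaths expected under complete coverage
raw = deaths/expected                        # raw coverage ratios: noisy in small counties

# moment-matched gamma prior across counties, then conjugate posterior means
m, v = raw.mean(), raw.var()
b = m/v; a = m*b                             # gamma(shape a, rate b) prior for coverage
eb = (a + deaths)/(b + expected)             # empirical Bayes coverage per county
adj_deaths = deaths/np.minimum(eb, 1.0)      # coverage-adjusted death counts
print(np.c_[raw[:5], eb[:5]])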
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Missiagia, Juliano Gallina. „Estimação Bayesiana do tamanho de uma população de diabéticos através de listas de pacientes“. Universidade Federal de São Carlos, 2005. https://repositorio.ufscar.br/handle/ufscar/4550.

Der volle Inhalt der Quelle
Annotation:
Financiadora de Estudos e Projetos
In this work, a Bayesian methodology is presented to estimate the size of a diabetes-affected population through lists containing patient information. The methodology is analogous to capture-recapture in animal populations. We first assume that the records of patient information are correct, and we then also take into account correct and incorrect records. Under the assumption of correct records, the methodology is developed for two or more lists and Bayes estimates are obtained for the population size. In a second model, we consider the occurrence of correct and incorrect records and present a two-stage estimation method for the model parameters using two lists. For both models, results are presented with simulated and real examples.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Andrade, Chávez Francisco Mauricio. „Modelo de regresión Dirichlet bayesiano: aplicación para estimar la prevalencia del nivel de anemia infantil en centros poblados del Perú“. Master's thesis, Pontificia Universidad Católica del Perú, 2020. http://hdl.handle.net/20.500.12404/18683.

Der volle Inhalt der Quelle
Annotation:
Anemia is a condition caused by a low level of hemoglobin in the blood, mainly due to a deficit in iron intake. In Peru it is a public health and nutrition problem, mainly among children under five years of age, which is why the Instituto Nacional de Estadística (INEI) tests for anemia in children through the Encuesta Demográfica y de Salud Familiar (ENDES). This survey classifies anemia as severe if hemoglobin is below 7.0 g/dl, moderate if it is between 7.0 and 9.9 g/dl, and mild if it ranges between 10.0 and 11.9 g/dl. In this context, this thesis applies a Dirichlet regression model to estimate the prevalence of the levels of childhood anemia at the populated-center level in 2017. The parameters are estimated using Bayesian inference, via Hamiltonian Monte Carlo (HMC) in RStan. The proposed model also makes it possible to identify potential determinants of the prevalence of childhood anemia, with the aim of improving public policies directed at reducing anemia in the country.
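The thesis fits the model with HMC in RStan; as a language-neutral sketch of the same likelihood, the Python code below writes the Dirichlet regression log-likelihood with a log link on the concentration parameters and fits it by maximum likelihood on simulated compositions (all names, sizes and coefficient values are illustrative):

import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(7)
n, K = 400, 4                               # sites, outcome categories (e.g. anemia levels)
x = rng.normal(size=n)                      # one standardized covariate
X = np.column_stack([np.ones(n), x])
B_true = np.array([[1.5, 1.0, 0.5, 0.0],
                   [-0.5, 0.1, 0.3, 0.6]])  # (p, K) regression coefficients
A_true = np.exp(X @ B_true)                 # Dirichlet concentrations per site (log link)
Y = np.array([rng.dirichlet(a) for a in A_true])
Y = np.clip(Y, 1e-6, None); Y /= Y.sum(1, keepdims=True)   # keep logs finite

def nll(bvec):
    A = np.exp(X @ bvec.reshape(2, K))
    ll = gammaln(A.sum(1)) - gammaln(A).sum(1) + ((A - 1)*np.log(Y)).sum(1)
    return -ll.sum()

fit = minimize(nll, np.zeros(2*K), method="L-BFGS-B")
print(fit.x.reshape(2, K).round(2))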
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Top, Alioune. „Estimation paramétriques et tests d'hypothèses pour des modèles avec plusieurs ruptures d'un processus de poisson“. Thesis, Le Mans, 2016. http://www.theses.fr/2016LEMA1014/document.

Der volle Inhalt der Quelle
Annotation:
This work is devoted to parametric estimation, hypothesis testing and goodness-of-fit problems for non-homogeneous Poisson processes. First we consider two models, each having two jumps located by an unknown parameter. For the first model the sum of the jumps is positive; the second is a model of switching intensity, piecewise constant, in which the sum of the jumps is zero. For each model we study the asymptotic properties of the Bayesian estimator (BE) and the maximum likelihood estimator (MLE): consistency, convergence in distribution and convergence of moments are shown, and in particular the BE is asymptotically efficient. For the second model we also consider the problem of testing a simple hypothesis against a one-sided alternative, and we describe the asymptotic properties (choice of the threshold and power) of the Wald test (WT) and the generalized likelihood ratio test (GLRT). The proofs use the method of Ibragimov and Khasminskii, which is based on the weak convergence of the normalized likelihood ratio in the Skorokhod space under a tightness criterion for the corresponding families of measures. Numerical simulations of the limiting variances of the estimators allow us to conclude that the BE outperforms the MLE. For the situation where the sum of the jumps is zero, we developed a numerical approach to obtain the MLE. Finally, we consider the construction of a goodness-of-fit test for a model with a scale parameter, and we show that the Cramér-von Mises type test is asymptotically parameter-free and consistent.
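For a single-jump version of the first type of model, the profile likelihood over the jump location and the corresponding Bayesian estimator under a flat prior can be sketched directly (intensities, horizon and grid below are arbitrary):

import numpy as np
rng = np.random.default_rng(8)

T, tau_true, lam1, lam2 = 10.0, 6.0, 2.0, 5.0          # horizon, jump location, intensities
t1 = rng.uniform(0, tau_true, rng.poisson(lam1*tau_true))
t2 = rng.uniform(tau_true, T, rng.poisson(lam2*(T - tau_true)))
times = np.sort(np.concatenate([t1, t2]))

def profile_loglik(tau):                               # lambda1, lambda2 profiled out
    n1 = np.searchsorted(times, tau); n2 = times.size - n1
    if n1 == 0 or n2 == 0:
        return -np.inf
    return n1*np.log(n1/tau) - n1 + n2*np.log(n2/(T - tau)) - n2

grid = np.linspace(0.05, T - 0.05, 2000)
ll = np.array([profile_loglik(g) for g in grid])
tau_mle = grid[ll.argmax()]                            # maximum likelihood estimator
w = np.exp(ll - ll.max()); w /= w.sum()
tau_bayes = np.sum(w*grid)                             # posterior mean under a flat prior
print(tau_mle, tau_bayes)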
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Kubo, Hisahiko. „Study on rupture processes of large interplate earthquakes estimated by fully Bayesian source inversions using multi period-band strong-motion data -The 2011 Tohoku-oki and the 2011 Ibaraki-oki earthquakes-“. 京都大学 (Kyoto University), 2015. http://hdl.handle.net/2433/199110.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Molinari, Benedetto. „Sticky information and non-pricing policies in DSGE models“. Doctoral thesis, Universitat Pompeu Fabra, 2008. http://hdl.handle.net/10803/7379.

Der volle Inhalt der Quelle
Annotation:
This thesis is organized in two parts. In the first, I seek to understand the relationship between frictions in the information flows reaching firms and the persistence of inflation. To this end, I present a novel estimator for the Sticky Information Phillips Curve of Mankiw and Reis (2002) and use it to estimate the model on U.S. postwar quarterly data. The main result is that the Sticky Information Phillips Curve can match inflation persistence only at the cost of mispredicting inflation variance; equivalently, it cannot jointly explain the variance and the persistence of inflation. I conclude that it is a valid model of inflation persistence but not an overall valid theory of inflation.

The second part presents new evidence on aggregate advertising expenditures in the U.S. and analyzes the effect of advertising on the aggregate economy by means of a dynamic stochastic general equilibrium model. Chapter 2 focuses on the short-run relationships between aggregate advertising and the main macroeconomic aggregates (consumption, output, total hours worked), with particular attention to the causal link between advertising and consumption, and shows that an increase in aggregate advertising significantly increases aggregate consumption. Chapter 3 focuses on the long-run effect of advertising on labor supply, showing that in economies where aggregate advertising is higher, agents supply more hours of work and are generally worse off in terms of welfare.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Pereira, da Silva Hélio Doyle. „Aplicación de modelos bayesianos para estimar la prevalencia de enfermedad y la sensibilidad y especificidad de tests de diagnóstico clínico sin gold standard“. Doctoral thesis, Universitat de Barcelona, 2016. http://hdl.handle.net/10803/523505.

Der volle Inhalt der Quelle
Annotation:
Two key aims of diagnostic research are to estimate disease prevalence and test sensitivity and specificity accurately and precisely. Latent class models have been proposed that account for the correlation between subject measures determined by different tests, in order to diagnose diseases for which gold standard tests are not available. In some clinical studies, several measures of the same subject are made with the same test under the same conditions (replicated measurements), so the replicated measurements for each subject are not independent. In the present study, we propose an extension of the Bayesian latent class Gaussian random effects model to fit data with binary outcomes for tests with replicated subject measures. We describe an application using data on hookworm infection collected in the municipality of Presidente Figueiredo, Amazonas State, Brazil. In addition, the performance of the proposed model was compared with that of current models (the subject random effects model and the conditional (in)dependence models) through a simulation study. As expected, the proposed model showed better accuracy and precision in the estimation of prevalence, sensitivity and specificity. For adequate disease control, the World Health Organization has proposed the diagnosis and treatment of latent tuberculosis infection (LTBI) in groups at risk of developing the disease, such as children. There is no gold standard (GS) test for the diagnosis of LTBI. Statistical models based on latent class estimation allow evaluation of the prevalence of infection and of the accuracy of the tests used in the absence of a GS. We conducted a cross-sectional study with children up to 6 years of age who had been vaccinated with BCG in Manaus, Amazonas, Brazil. The objective was to estimate the prevalence of LTBI in young children in contact with a household case of tuberculosis (TB-HCC) and to determine the accuracy and precision of the Tuberculin Skin Test (TST) and QuantiFERON-TB Gold in-tube (QFT) using latent class models. In a first stage the estimation considered the correlation between the two tests, and in a second stage the prevalence was modeled as a function of the intensity and length of exposure. Fifty percent of the children with TB-HCC had LTBI, with the prevalence depending on the intensity and length of exposure to the index case. The sensitivity and specificity of TST were 73% [95% confidence interval (CI): 53-91] and 97% (95% CI: 89-100), respectively, versus 53% (95% CI: 41-66) and 81% (95% CI: 71-90) for QFT. The positive predictive value of TST in children with TB-HCC was 91% (95% CI: 61-99), and for QFT it was 74% (95% CI: 47-95). This is one of the first studies to estimate the prevalence of M. tuberculosis infection in children and the parameters of its main diagnostic tests by latent class modeling. The results suggest that children in contact with an index case have a high risk of infection. The accuracy and the predictive values did not show significant differences according to the test applied, and combined use of the two tests showed only a slight improvement in the diagnosis of LTBI.
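Without a gold standard, the two-test latent class likelihood only becomes identifiable with informative priors; a schematic random-walk Metropolis sampler for (prevalence, sensitivity and specificity of both tests) under conditional independence, with made-up counts and prior parameters:

import numpy as np
from scipy.stats import beta as beta_dist

counts = np.array([25, 10, 14, 151])   # cross-tab: (T1+,T2+), (T1+,T2-), (T1-,T2+), (T1-,T2-)

def cell_probs(th):                    # th = (prevalence, se1, sp1, se2, sp2)
    p, se1, sp1, se2, sp2 = th
    return np.array([p*se1*se2 + (1-p)*(1-sp1)*(1-sp2),
                     p*se1*(1-se2) + (1-p)*(1-sp1)*sp2,
                     p*(1-se1)*se2 + (1-p)*sp1*(1-sp2),
                     p*(1-se1)*(1-se2) + (1-p)*sp1*sp2])

priors = [(1, 1), (9, 3), (30, 2), (6, 5), (20, 3)]   # Beta priors; values are assumptions

def logpost(th):
    if np.any(th <= 0) or np.any(th >= 1):
        return -np.inf
    lp = counts @ np.log(cell_probs(th))              # multinomial log-likelihood
    lp += sum(beta_dist.logpdf(v, a, b) for v, (a, b) in zip(th, priors))
    return lp

rng = np.random.default_rng(9)
th = np.array([0.3, 0.7, 0.95, 0.55, 0.8]); lp = logpost(th); chain = []
for it in range(20000):                               # random-walk Metropolis
    prop = th + rng.normal(0, 0.03, 5)
    lp_prop = logpost(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        th, lp = prop, lp_prop
    chain.append(th.copy())
print("posterior means:", np.mean(chain[5000:], axis=0))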
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Budde, Kiran Kumar. „A Matlab Toolbox for fMRI Data Analysis: Detection, Estimation and Brain Connectivity“. Thesis, Linköpings universitet, Datorseende, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-81314.

Der volle Inhalt der Quelle
Annotation:
Functional Magnetic Resonance Imaging (fMRI) is one of the best techniques for neuroimaging and has revolutionized the way brain function is understood. It measures changes in the blood oxygen level-dependent (BOLD) signal, which is related to neuronal activity. The complexity of the data, the presence of different types of noise and the massive amount of data make fMRI analysis challenging, demanding efficient signal processing and statistical analysis methods. The inferences from the analysis are used by physicians, neurologists and researchers for a better understanding of brain function. The purpose of this study is to design a toolbox for fMRI data analysis. It includes methods to detect brain activity maps, to estimate the hemodynamic response (HDR) and to assess the connectivity of brain structures. The toolbox detects activated brain regions with a Bayesian estimator, and the results are compared with conventional methods such as the t-test, ordinary least squares (OLS) and weighted least squares (WLS). Brain activation and the HDR are estimated with a linear adaptive model and with a nonlinear method based on a radial basis function (RBF) neural network, and a nonlinear autoregressive with exogenous inputs (NARX) neural network is developed to model the dynamics of the fMRI data. The toolbox also provides methods for brain connectivity, namely functional connectivity and effective connectivity. These methods are examined on simulated and real fMRI datasets.
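The detection step can be illustrated with the massively univariate GLM that underlies the OLS/t-test comparisons mentioned above; the design below is a toy block paradigm without HRF convolution, and all sizes and thresholds are placeholders:

import numpy as np
rng = np.random.default_rng(10)

Tn, V = 200, 1000                                  # time points, voxels
box = (np.arange(Tn) % 40 < 20).astype(float)      # toy block design regressor
X = np.column_stack([np.ones(Tn), box])
Y = rng.normal(size=(Tn, V))
Y[:, :100] += 0.5*box[:, None]                     # first 100 voxels are truly active

B = np.linalg.pinv(X) @ Y                          # OLS estimates, one column per voxel
resid = Y - X @ B
sigma2 = (resid**2).sum(0)/(Tn - 2)                # residual variance per voxel
c = np.array([0.0, 1.0])                           # contrast on the task regressor
var_c = c @ np.linalg.inv(X.T @ X) @ c
tstat = (c @ B)/np.sqrt(sigma2*var_c)              # per-voxel t-statistics
active = tstat > 3.1                               # rough cut; real maps need multiplicity control
print("voxels declared active:", active.sum())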
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Motrunich, Anastasiia. „Estimation des paramètres pour les séquences de Markov avec application dans des problèmes médico-économiques“. Thesis, Le Mans, 2015. http://www.theses.fr/2015LEMA1009/document.

Der volle Inhalt der Quelle
Annotation:
In the first part of this dissertation we consider several problems of finite-dimensional parameter estimation for Markov sequences in the large-sample asymptotics. The asymptotic behavior of the Bayesian estimators and of the method-of-moments estimators is described: under regularity conditions these estimators are consistent and asymptotically normal, and the Bayesian estimator is asymptotically efficient. The one-step and two-step maximum likelihood estimator processes are studied. These estimators allow us to construct asymptotically efficient estimators based on some preliminary estimator, for example a method-of-moments or Bayes estimator, combined with the one-step maximum likelihood estimator structure. We propose particular nonlinear autoregressive processes as examples and illustrate the properties of these estimators with numerical simulations. In the second part we give applications of Markov processes in health economics. We compare homogeneous and non-homogeneous Markov models for the cost-effectiveness analysis of the routine use of transparent dressings containing a chlorhexidine gluconate gel pad versus standard transparent dressings. The antimicrobial dressing protects central vascular accesses, reducing the risk of catheter-related bloodstream infections. The impact of the modeling approach on the decision to adopt antimicrobial dressings for critically ill patients is discussed.
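The one-step construction mentioned above is easy to sketch for a Gaussian AR(1) sequence: start from a sqrt(n)-consistent preliminary estimator and take a single Newton step on the full-sample score (in this linear-Gaussian toy case the step lands exactly on the MLE; in genuinely nonlinear models it delivers asymptotic efficiency from a crude starting point):

import numpy as np
rng = np.random.default_rng(11)

n, theta_true = 5000, 0.6
x = np.zeros(n)
for k in range(1, n):                       # Gaussian AR(1), a simple Markov sequence
    x[k] = theta_true*x[k-1] + rng.normal()

# preliminary estimator computed on the first half of the sample only
m = n//2
theta0 = np.sum(x[1:m]*x[:m-1])/np.sum(x[:m-1]**2)

# one Newton step on the full-sample log-likelihood: score over observed information
score = np.sum(x[:-1]*(x[1:] - theta0*x[:-1]))
info = np.sum(x[:-1]**2)
theta1 = theta0 + score/info                # one-step estimator
print("preliminary:", theta0, "one-step:", theta1)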
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Dortel, Emmanuelle. „Croissance de l'albacore (Thunnus albacares) de l'Océan Indien : de la modélisation statistique à la modélisation bio-énergétique“. Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20035/document.

Der volle Inhalt der Quelle
Annotation:
Since the early 1960s, the growth of yellowfin tuna has received particular attention both in research and for fisheries management. In the Indian Ocean, the management of the yellowfin stock, under the jurisdiction of the Indian Ocean Tuna Commission (IOTC), suffers from many uncertainties associated with the growth curve currently assumed. In particular, gaps remain in our knowledge of the basic biological and ecological processes regulating growth, knowledge that is vital for understanding stock productivity and resilience to fishing pressure and to the oceanographic changes under way. Through modelling, this study aims to improve current knowledge of the growth of the Indian Ocean yellowfin population and thereby strengthen the scientific advice on stock status. Whilst most studies of yellowfin growth rely on a single data source, we implemented a hierarchical Bayesian model that exploits various sources of information on growth: direct age estimates obtained from otolith readings, analyses of modal progressions, and individual growth rates derived from mark-recapture experiments. The model explicitly incorporates expert knowledge and the errors associated with each dataset and with the growth modelling process. In particular, the growth model was coupled with an ageing error model built from repeated otolith readings, which significantly improves the age estimates as well as the resulting growth estimates and allows a better assessment of their reliability. The growth curves obtained constitute a major improvement over the growth pattern currently used in the yellowfin stock assessment. They demonstrate that yellowfin exhibits two-stanza growth, characterized by a sharp acceleration at the end of the juvenile stage. However, they do not provide information on the biological and ecological mechanisms behind the growth acceleration. For a better understanding of the factors involved, we implemented a bioenergetic model relying on the principles of Dynamic Energy Budget (DEB) theory. Two major hypotheses were investigated: (i) low food availability during the juvenile stage, related to high intra- and inter-specific competition in schooling juveniles, and (ii) a change in the adult diet toward the consumption of more energetic prey. It appears that these two hypotheses may, at least partially, explain the growth acceleration.
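A scaled-down sketch of the multi-source idea: a single growth curve (here von Bertalanffy, as a stand-in for the thesis's richer hierarchical model) fitted jointly to simulated otolith age-length pairs and Fabens-type mark-recapture increments through one combined likelihood:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(12)
Linf, K, t0 = 160.0, 0.3, -0.5                     # "true" values used only to simulate

age = rng.uniform(0.5, 8.0, 150)                   # otolith ages
lengths = Linf*(1 - np.exp(-K*(age - t0))) + rng.normal(0, 4, 150)

L1 = rng.uniform(40, 120, 80)                      # length at tagging
dt = rng.uniform(0.3, 2.0, 80)                     # time at liberty
dL = (Linf - L1)*(1 - np.exp(-K*dt)) + rng.normal(0, 3, 80)   # Fabens growth increments

def nll(th):                                       # joint likelihood over both data sources
    Li, k, t_0 = th[0], th[1], th[2]
    s1, s2 = np.exp(th[3]), np.exp(th[4])          # error sd for each source
    r1 = lengths - Li*(1 - np.exp(-k*(age - t_0)))
    r2 = dL - (Li - L1)*(1 - np.exp(-k*dt))
    return (r1.size*np.log(s1) + np.sum(r1**2)/(2*s1**2)
            + r2.size*np.log(s2) + np.sum(r2**2)/(2*s2**2))

fit = minimize(nll, np.array([150.0, 0.2, 0.0, np.log(5.0), np.log(5.0)]),
               method="Nelder-Mead", options={"maxiter": 20000})
print("Linf, K, t0:", fit.x[:3])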
APA, Harvard, Vancouver, ISO, and other citation styles
49

Modesto i Alapont, Vicent. „Muerte en UCIP estimada con el índice “PRISM”: comparación de la exactitud diagnóstica de las predicciones realizadas con un modelo de regresión logística y una red neuronal artificial. Una propuesta bayesiana“ [Death in the PICU estimated with the PRISM score: a comparison of the diagnostic accuracy of predictions made with a logistic regression model and an artificial neural network. A Bayesian proposal]. Doctoral thesis, Universidad de Alicante, 2011. http://hdl.handle.net/10045/23578.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
50

Rodriguez, Delphy. „Caractérisation de la pollution urbaine en Île-de-France par une synergie de mesures de surface et de modélisation fine échelle“ [Characterization of urban pollution in Île-de-France through a synergy of surface measurements and fine-scale modelling]. Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS341.

Full text of the source
Annotation:
The health impact of air pollution calls for an accurate, high-resolution estimate of pollutant concentrations. The monitoring networks of local air-quality agencies (AIRPARIF in the Île-de-France region) are not dense enough to capture the strong spatial heterogeneity of pollution within the city, while high-resolution models that simulate 3D pollutant concentration fields offer wide spatial coverage but suffer from uncertainties. Used independently, these two sources of information cannot accurately assess an individual's exposure. We propose two approaches to address this problem: (1) direct measurement of pollutants with low-cost mobile sensors and reference instruments. Highly variable pollution levels were observed between microenvironments and even within a single room. Mobile sensors would need to be deployed in large numbers to compensate for their technical limitations, whereas reference instruments are expensive and cumbersome and can only be used occasionally. (2) combining the concentration fields simulated by the Parallel Micro-SWIFT-SPRAY (PMSS) model over Paris, at a horizontal resolution of 3 metres, with measurements from AIRPARIF ground stations. We determined "representativeness areas" (geographical zones where concentrations are very close to that at the station location) solely from the PMSS model outputs. We then developed a Bayesian model to propagate the station measurements within these areas.
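As an illustration of the second approach, the sketch below derives a representativeness area from a synthetic model field and applies a conjugate-Gaussian update to propagate a station measurement within it. This is an assumption on our part, not the thesis's actual PMSS/AIRPARIF implementation: the grid size, the uncertainty values, and the 10% similarity threshold are all hypothetical.

```python
# Minimal sketch: propagating a station measurement into its
# "representativeness area" via a precision-weighted Gaussian update.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fine-scale model field (NO2, ug/m3) on a small grid.
model_field = 40.0 + 5.0 * rng.standard_normal((50, 50))
sigma_model = 8.0        # assumed model uncertainty (ug/m3)
sigma_station = 2.0      # assumed instrument uncertainty (ug/m3)
station_value = 47.5     # measurement at the station cell
station_cell = (25, 25)

# Representativeness area: cells whose modelled concentration is
# within 10% of the modelled value at the station cell.
ref = model_field[station_cell]
zone = np.abs(model_field - ref) <= 0.10 * ref

# Conjugate normal update inside the zone: the posterior mean blends
# the model prior with the station observation, weighted by precision.
w_model = 1.0 / sigma_model**2
w_obs = 1.0 / sigma_station**2
posterior = model_field.copy()
posterior[zone] = (w_model * model_field[zone] + w_obs * station_value) / (w_model + w_obs)

print(f"{zone.sum()} cells updated; mean shift in zone: "
      f"{(posterior[zone] - model_field[zone]).mean():+.2f} ug/m3")
```

Because the weights are inverse variances, the blend leans toward whichever source is more certain; with the assumed values the station measurement dominates inside its zone, while cells outside the zone keep the raw model values.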
APA, Harvard, Vancouver, ISO, and other citation styles