Doctoral dissertations on the topic "Bayes's theorem"

Check out the top 50 academic doctoral dissertations on the topic "Bayes's theorem".

Browse doctoral dissertations from various fields and build appropriate bibliographies.

1

Portugal, Agnaldo Cuoco. "Theism, Bayes's theorem and religious experience : an examination of Richard Swinburnes's religious epistemology". Thesis, King's College London (University of London), 2003. https://kclpure.kcl.ac.uk/portal/en/theses/theism-bayess-theorem-and-religious-experience--an-examination-of-richard-swinburness-religious-epistemology(f6ab0fd9-9277-41d7-9997-ecad803c54ae).html.

2

Rogers, David M. "Using Bayes' theorem for free energy calculations". Cincinnati, Ohio : University of Cincinnati, 2009. http://rave.ohiolink.edu/etdc/view.cgi?acc_num=ucin1251832030.

Abstract:
Thesis (Ph. D.)--University of Cincinnati, 2009.
Advisor: Thomas L. Beck. Title from electronic thesis title page (viewed Jan. 21, 2010). Keywords: Bayes; probability; statistical mechanics; free energy. Includes abstract. Includes bibliographical references.
3

Jones, Martin K. "Bayes' Theorem and positive confirmation : an experimental economic analysis". Thesis, University of East Anglia, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.300072.

4

Fletcher, Douglas. "Generalized Empirical Bayes: Theory, Methodology, and Applications". Diss., Temple University Libraries, 2019. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/546485.

Abstract:
Statistics
Ph.D.
The two key issues of modern Bayesian statistics are: (i) establishing a principled approach for distilling a statistical prior distribution that is consistent with the given data from an initial believable scientific prior; and (ii) development of a consolidated Bayes-frequentist data analysis workflow that is more effective than either of the two separately. In this thesis, we propose generalized empirical Bayes as a new framework for exploring these fundamental questions along with a wide range of applications spanning fields as diverse as clinical trials, metrology, insurance, medicine, and ecology. Our research marks a significant step towards bridging the "gap" between Bayesian and frequentist schools of thought that has plagued statisticians for over 250 years. Chapters 1 and 2, based on Mukhopadhyay and Fletcher (2018), introduce the core theory and methods of our proposed generalized empirical Bayes (gEB) framework, which solves a long-standing puzzle of modern Bayes originally posed by Herbert Robbins (1980). One of the main contributions of this research is to introduce and study a new class of nonparametric priors DS(G, m) that allows exploratory Bayesian modeling. At a practical level, the major advantages of our proposal are: (i) computational ease (it does not require Markov chain Monte Carlo (MCMC), variational methods, or any other sophisticated computational techniques); (ii) simplicity and interpretability of the underlying theoretical framework, which is general enough to include almost all commonly encountered models; and (iii) easy integration with mainstream Bayesian analysis that makes it readily applicable to a wide range of problems. Connections with other Bayesian cultures are also presented in these chapters. Chapter 3 deals with the topic of measurement uncertainty from a new angle by introducing the foundation of nonparametric meta-analysis. We have applied the proposed methodology to real data examples from astronomy, physics, and medical disciplines. Chapter 4 discusses some further extensions and applications of our theory to distributed big data modeling and the missing species problem. The dissertation concludes by highlighting two important areas of future work: a full Bayesian implementation workflow and potential applications in cybersecurity.
Temple University--Theses
5

Conlon, Erin Marie. "Estimation and flexible correlation structures in spatial hierarchical models of disease mapping /". Diss., ON-CAMPUS Access For University of Minnesota, Twin Cities Click on "Connect to Digital Dissertations", 1999. http://www.lib.umn.edu/articles/proquest.phtml.

6

Chadwick, Thomas Jonathan. "A general Bayes theory of nested model comparisons". Thesis, University of Newcastle Upon Tyne, 2002. http://hdl.handle.net/10443/641.

Abstract:
We propose a general Bayes analysis for nested model comparisons which does not suffer from Lindley's paradox. It does not use Bayes factors, but uses the posterior distribution of the likelihood ratio between the models evaluated at the true values of the nuisance parameters. This is obtained directly from the posterior distribution of the full model parameters. The analysis requires only conventional uninformative or flat priors, and prior odds on the models. The conclusions from the posterior distribution of the likelihood ratio are in general in conflict with Bayes factor conclusions, but are in agreement with frequentist likelihood ratio test conclusions. Bayes factor conclusions and those from the BIC are, even in simple cases, in conflict with conclusions from HPD intervals for the same parameters, and appear untenable in general. Examples of the new analysis are given, with comparisons to classical P-values and Bayes factors.
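To make the idea concrete, here is a minimal sketch of a posterior distribution of the likelihood ratio for a nested comparison (H0: mu = 0 against a free mean, for normal data with unknown variance), using the standard flat-prior posterior; this is an assumed illustrative setup, not the thesis's exact analysis:

```python
import numpy as np

# Sketch: posterior distribution of the likelihood ratio for H0: mu = 0
# against the full model, for normal data with unknown variance.
# (mu, sigma^2) are drawn from the standard flat-prior posterior; this is a
# generic illustration of the idea, not the thesis's exact procedure.

rng = np.random.default_rng(42)
y = rng.normal(loc=0.4, scale=1.0, size=30)      # simulated data
n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)

draws = 20_000
sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, size=draws)   # sigma^2 | y
mu = rng.normal(ybar, np.sqrt(sigma2 / n))                 # mu | sigma^2, y

# Likelihood ratio L(mu = 0, sigma) / L(mu, sigma) at each posterior draw.
log_lr = (-np.sum(y**2) + np.sum((y[:, None] - mu)**2, axis=0)) / (2 * sigma2)
lr = np.exp(log_lr)

print("posterior P(LR < 1/10):", np.mean(lr < 0.1))
print("posterior median of -2 log LR:", np.median(-2 * log_lr))
```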
7

Zhang, Shunpu. "Some contributions to empirical Bayes theory and functional estimation". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq23100.pdf.

8

Yang, Ying. "Discretization for Naive-Bayes learning". Monash University, School of Computer Science and Software Engineering, 2003. http://arrow.monash.edu.au/hdl/1959.1/9393.

9

Liu, Ka-yee. "Bayes and empirical Bayes estimation for the panel threshold autoregressive model and non-Gaussian time series". Click to view the E-thesis via HKUTO, 2005. http://sunzi.lib.hku.hk/hkuto/record/B30706166.

10

Liu, Ka-yee, and 廖家怡. "Bayes and empirical Bayes estimation for the panel threshold autoregressive model and non-Gaussian time series". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B30706166.

11

Farrell, Patrick John. "Empirical Bayes estimation of small area proportions". Thesis, McGill University, 1991. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=70301.

Abstract:
Due to the nature of survey design, the estimation of parameters associated with small areas is extremely problematic. In this study, techniques for the estimation of small area proportions are proposed and implemented. More specifically, empirical Bayes estimation methodologies, where random effects which reflect the complex structure of a multi-stage sample design are incorporated into logistic regression models, are derived and studied.
The proposed techniques are applied to data from the 1950 United States Census to predict local labor force participation rates of females. Results are compared with those obtained using unbiased and synthetic estimation approaches.
Using the proposed methodologies, a sensitivity analysis concerning the prior distribution assumption, conducted with a view toward outlier detection, is performed. The use of bootstrap techniques to correct measures of uncertainty is also studied.
12

Salge, Christoph. "Information theoretic models of social interaction". Thesis, University of Hertfordshire, 2013. http://hdl.handle.net/2299/13887.

Abstract:
This dissertation demonstrates, in a non-semantic information-theoretic framework, how the principles of 'maximisation of relevant information' and 'information parsimony' can guide the adaptation of an agent towards agent-agent interaction. Central to this thesis is the concept of digested information; I argue that an agent is intrinsically motivated to (a) process the relevant information in its environment and (b) display this information in its own actions. From the perspective of similar agents, who require similar information, this differentiates other agents from the rest of the environment, by virtue of the information they provide. This provides an informational incentive to observe other agents and integrate their information into one's own decision-making process. This process is formalised in the framework of information theory, which allows for a quantitative treatment of the resulting effects, specifically how the digested information of an agent is influenced by several factors, such as the agent's performance and the integrated information of other agents. Two specific phenomena based on information maximisation arise in this thesis. One is boid-like flocking behaviour, which results when agents searching for a location in a gridworld integrate the information in other agents' actions via Bayes' theorem. The other is an effect where integrating information from too many agents becomes detrimental to an agent's performance, for which several explanations are provided.
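As a minimal illustration of the kind of Bayes'-theorem update described above, the following sketch uses a hypothetical two-cell gridworld and made-up action likelihoods; none of the states, actions or numbers come from the thesis:

```python
# Minimal sketch: an agent updates its belief about a hidden environment
# state by observing another agent's action, via Bayes' theorem.
# All states, actions and likelihoods here are hypothetical illustrations.

def bayes_update(prior, likelihood, observation):
    """Return P(state | observation) given P(state) and P(observation | state)."""
    unnormalised = {s: prior[s] * likelihood[s][observation] for s in prior}
    total = sum(unnormalised.values())
    return {s: p / total for s, p in unnormalised.items()}

# Prior belief over where the "good" location is (two-cell gridworld).
prior = {"left": 0.5, "right": 0.5}

# Hypothetical model of the other agent: it usually moves towards the good
# location, so its action carries (digested) information about that location.
action_likelihood = {
    "left":  {"move_left": 0.8, "move_right": 0.2},
    "right": {"move_left": 0.2, "move_right": 0.8},
}

posterior = bayes_update(prior, action_likelihood, "move_right")
print(posterior)  # belief shifts towards "right" after seeing the action
```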
13

França, Paulo dos Santos. "Comunidades Epistêmicas Artificiais: o papel da confiança na comunidade científica". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/100/100132/tde-19112017-205840/.

Abstract:
The study of complex systems helps us understand how simple local rules can generate complex and often unexpected aggregate patterns. When the rules are well defined and the patterns observable, the system can be modeled and its results compared. One of the major challenges in modeling complex systems is to define the interaction rule responsible for the complex behavior. In opinion dynamics, the complex and unexpected pattern may be a sudden consensus or even polarization, so the aim becomes to verify under what circumstances we can observe agreement or disagreement. Although there are a number of opinion dynamics models describing how people interact with each other, each one defines an ad hoc opinion formation rule. The CODA (Continuous Opinions and Discrete Actions) model proposes a theoretical foundation for opinion dynamics models based on probability theory. Its applications range from studies on innovation to epistemology. In this dissertation, we deepen the epistemological studies that involve CODA, investigating mainly the effect of trust on the process of scientific confirmation. Our simulations corroborate sociological and historical research on the fundamental role of trust in the process of knowledge acquisition and generation.
14

Desai, Manisha. "Mixture models for genetic changes in cancer cells /". Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/9566.

15

Weller, Jennifer N. "Bayesian Inference In Forecasting Volcanic Hazards: An Example From Armenia". [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000485.

16

Ardia, David. "Financial risk management with Bayesian estimation of GARCH models theory and applications". Berlin Heidelberg Springer, 2008. http://d-nb.info/987538780/04.

17

Wang, Kai. "Novel computational methods for accurate quantitative and qualitative protein function prediction /". Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/11488.

18

Thach, Chau Thuy. "Self-designing optimal group sequential clinical trials /". Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/9585.

19

Wang, Jiabin. "Variational Bayes inference based segmentation algorithms for brain PET-CT images". Thesis, The University of Sydney, 2012. https://hdl.handle.net/2123/29251.

Abstract:
Dual modality PET-CT imaging can provide aligned anatomical (CT) and functional (PET) images in a single scanning session, and has nowadays steadily replaced single modality PET imaging in clinical practice. The enormous number of PET-CT images produced in hospitals are currently analysed almost entirely through visual inspection on a slice-by-slice basis, which requires a high degree of skill and concentration, and is time-consuming, expensive, prone to operator bias, and unsuitable for processing large-scale studies. Computer-aided diagnosis, where image segmentation is an essential step, would enable doctors and researchers to bypass these issues. However, most medical image segmentation methods are designed for single modality images. In this thesis, the automated segmentation of dual-modality brain PET-CT images has been comprehensively investigated using variational learning techniques. Two novel statistical segmentation algorithms, namely the DE-VEM algorithm and the PA-VEM algorithm, have been proposed to delineate brain PET-CT images into grey matter (GM), white matter (WM) and cerebrospinal fluid (CSF). In statistical image segmentation, voxel values are usually characterised by probabilistic models, whose parameters can be estimated using maximum likelihood estimation, and the optimal segmentation result is regarded as the one that maximises the posterior probability. Despite their simplicity, statistical approaches intrinsically suffer from overfitting and local convergence. In variational Bayes inference, statistical model parameters are further assumed to be random variables to improve the model's flexibility. Instead of directly estimating the posterior probability, variational learning techniques use a variational distribution to approximate the posterior probability, and thus are able to overcome the drawback of overfitting. The most widely used variational learning technique is the variational expectation maximisation (VEM) algorithm. As a natural extension of the traditional expectation maximisation (EM) algorithm, the VEM algorithm is also a two-step iterative process and still faces the risk of being trapped in a local maximum and the difficulty of incorporating prior knowledge. Inspired by the fact that global optimisation techniques, such as the genetic algorithm, have been successfully applied to replace the EM algorithm in the maximum-likelihood estimation of probabilistic models, this research combines the differential evolution (DE) algorithm and the VEM algorithm to solve the optimisation problem involved in variational Bayes inference, and thus proposes the DE-VEM algorithm for brain PET-CT image segmentation. In this algorithm, the DE scheme is introduced to search for a global solution and the VEM scheme is employed to perform a local search. Since DE is a population-based global optimisation technique and has proven itself in a variety of applications with good results, the DE-VEM algorithm has the potential to avoid local convergence. The proposed algorithm has been compared with the VEM algorithm and the segmentation function in the statistical parametric mapping (SPM, version 2008) package on 21 clinical brain PET-CT images. My results show that the DE-VEM algorithm outperforms the other two algorithms and can produce accurate segmentation of brain PET-CT images.

Meanwhile, to incorporate prior anatomical information into the variational-learning-based brain image segmentation process, a probabilistic brain atlas is generated and used to guide the search for an optimal segmentation result through the VEM iteration. As a result, the probabilistic atlas based VEM (PA-VEM) algorithm is developed to allow each voxel to have an adaptable prior probability of belonging to each class. This algorithm has been compared to the segmentation functions in the SPM8 package and the EMS package, the DE-VEM algorithm, and the DEV algorithm on 21 clinical brain PET-CT images. My results demonstrate that the proposed PA-VEM algorithm can substantially improve the accuracy of segmenting brain PET-CT images. Although this research uses brain PET-CT images as case studies, the theoretical outcomes are generic and can be extended to the segmentation of other dual-modality medical images. Future work in this area should focus mainly on improving the computational efficiency of variational-learning-based image segmentation approaches.
20

Charland, Katia. "Evaluation of fully Bayesian disease mapping models in correctly identifying high-risk areas with an application to multiple sclerosis". Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=103370.

Abstract:
Disease maps are geographical maps that display local estimates of disease risk. When the disease is rare, crude risk estimates can be highly variable, leading to extreme estimates in areas with low population density. Bayesian hierarchical models are commonly used to stabilize the disease map, making them more easily interpretable. By exploiting assumptions about the correlation structure in space and time, the statistical model stabilizes the map by shrinking unstable, extreme risk estimates to the risks in surrounding areas (local spatial smoothing) or to the risks at contiguous time points (temporal smoothing). Extreme estimates that are based on smaller populations are subject to a greater degree of shrinkage, particularly when the risks in adjacent areas or at contiguous time points do not support the extreme value and are more stable themselves.
A common goal in disease mapping studies is to identify areas of elevated risk. The objective of this thesis is to compare the accuracy of several fully Bayesian hierarchical models in discriminating between high-risk and background-risk areas. These models differ according to the various spatial, temporal and space-time interaction terms that are included in the model, which can greatly affect the smoothing of the risk estimates. This was accomplished with simulations based on the cervical cancer rate of Kentucky and at-risk person-years of the state of Kentucky's 120 counties from 1995 to 2002. High-risk areas were 'planted' in the generated maps that otherwise had background relative risks of one. The various disease mapping models were applied and their accuracy in correctly identifying high- and background-risk areas was compared by means of Receiver Operating Characteristic curve methodology. Using data on Multiple Sclerosis (MS) on the island of Sardinia, Italy we apply the more successful models to identify areas of elevated MS risk.
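A minimal sketch of the ROC-based comparison described above, with a hand-rolled AUC applied to made-up relative-risk estimates and planted high-risk labels (purely illustrative, not the thesis's data or models):

```python
# Minimal sketch: compare two disease-mapping "models" by how well their
# relative-risk estimates separate planted high-risk areas from background
# areas, using the area under the ROC curve (AUC). All numbers are made up.

def auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) formulation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 0, 0]                    # 1 = planted high-risk area
model_a = [2.1, 1.8, 1.6, 1.1, 0.9, 1.0, 1.2, 0.8]   # smoothed RR estimates
model_b = [1.4, 1.9, 1.0, 1.3, 0.9, 1.5, 1.1, 0.7]

print("model A AUC:", auc(model_a, labels))
print("model B AUC:", auc(model_b, labels))
```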
21

Damian, Doris. "A Bayesian approach to estimating heterogeneous spatial covariances /". Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/9563.

22

Bäurle, Gregor. "Connecting macroeconomic theory to the data methods and applications". Berlin dissertation.de, 2008. http://d-nb.info/999377655/04.

23

Sheppard, Sarah E. "Application of a Naïve Bayes Classifier to Assign Polyadenylation Sites from 3' End Deep Sequencing Data: A Dissertation". eScholarship@UMMS, 2013. http://escholarship.umassmed.edu/gsbs_diss/653.

Abstract:
Cleavage and polyadenylation of a precursor mRNA is important for transcription termination, mRNA stability, and regulation of gene expression. This process is directed by a multitude of protein factors and cis elements in the pre-mRNA sequence surrounding the cleavage and polyadenylation site. Importantly, the location of the cleavage and polyadenylation site helps define the 3’ untranslated region of a transcript, which is important for regulation by microRNAs and RNA binding proteins. Additionally, these sites have generally been poorly annotated. To identify 3’ ends, many techniques utilize an oligo-dT primer to construct deep sequencing libraries. However, this approach can lead to identification of artifactual polyadenylation sites due to internal priming in homopolymeric stretches of adenines. Previously, simple heuristic filters relying on the number of adenines in the genomic sequence downstream of a putative polyadenylation site have been used to remove these sites of internal priming. However, these simple filters may not remove all sites of internal priming and may also exclude true polyadenylation sites. Therefore, I developed a naïve Bayes classifier to identify putative sites from oligo-dT primed 3’ end deep sequencing as true or false/internally primed. Notably, this algorithm uses a combination of sequence elements to distinguish between true and false sites. Finally, the resulting algorithm is highly accurate in multiple model systems and facilitates identification of novel polyadenylation sites.
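As a rough illustration of the classifier described above, here is a minimal hand-rolled Bernoulli naive Bayes sketch; the binary sequence features (hexamer, downstream A-run, U-rich element) and the tiny training set are invented for illustration and are not the features or data of the thesis:

```python
import math

# Minimal Bernoulli naive Bayes sketch for labelling candidate poly(A) sites
# as "true" or "false" (internally primed). Features and data are made up.

def train(X, y):
    """Return class priors and per-feature Bernoulli parameters (Laplace-smoothed)."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        prior = len(rows) / len(X)
        theta = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)   # P(feature j = 1 | c)
                 for j in range(len(X[0]))]
        model[c] = (prior, theta)
    return model

def predict(model, x):
    """Return the class with the highest posterior (up to a constant)."""
    def log_post(c):
        prior, theta = model[c]
        return math.log(prior) + sum(
            math.log(t if xi else 1 - t) for xi, t in zip(x, theta))
    return max(model, key=log_post)

# Features per candidate site: [hexamer_upstream, downstream_A_run, U_rich_element]
X = [[1, 0, 1], [1, 0, 0], [1, 0, 1], [0, 1, 0], [0, 1, 0], [1, 1, 0]]
y = ["true", "true", "true", "false", "false", "false"]

model = train(X, y)
print(predict(model, [1, 0, 1]))   # expected: "true"
print(predict(model, [0, 1, 0]))   # expected: "false"
```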
24

Tang, Adelina Lai Toh. "Application of the tree augmented naive Bayes network to classification and forecasting /". [St. Lucia, Qld.], 2004. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe.pdf.

25

Hubbard, Rebecca Allana. "Modeling a non-homogeneous Markov process via time transformation /". Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/9607.

26

Spilker, Mary Elizabeth. "A Bayesian approach to parametric image analysis /". Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/8107.

27

Eriksson, Viktor. "Bayesian Model Selection with Intrinsic Bayes Factor for Location-Scale Model and Random Effects Model". Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-85152.

28

Campos, Marília Silveira de Almeida. "Comparação da eficácia e tolerabilidade dos fármacos antiepilépticos : revisão sistemática com meta-análises". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/158714.

Abstract:
OBJECTIVE: To compare the efficacy and tolerability of antiepileptic drugs (AED) in monotherapy of patients with focal or generalized epilepsy. METHODS: A systematic review was conducted in the Medline/Pubmed, Scopus, Web of Science and Cochrane Register of Controlled Trials databases. We included randomized clinical trials of patients with epilepsy treated with oral AED monotherapy which evaluated the number of patients becoming seizure free during the maintenance treatment period, the number of patients who withdrew from the study because of therapeutic inefficacy, and the number of patients who withdrew from the study because of intolerable adverse reactions. Network meta-analyses were performed using a Bayesian random effects model. We also ranked the probability of each AED being the best option on the efficacy and tolerability outcomes. Sensitivity analyses were conducted in order to check the robustness of the results. RESULTS AND CONCLUSIONS: The search identified 18,874 publications, but only 71 studies were selected, comprising 17,555 patients with epilepsy. Twenty-nine trials reported the efficacy outcomes in the treatment of focal seizures, 19 in generalized seizures, and 58 reported tolerability data. In the treatment of focal seizures, levetiracetam (LEV), lamotrigine (LTG), oxcarbazepine, sultiame and topiramate (TPM) demonstrated efficacy equivalent to carbamazepine (CBZ), clobazam and valproate (VPA). LTG, LEV and TPM are as effective as VPA for the treatment of generalized tonic-clonic, tonic and clonic seizures. VPA and ethosuximide are the best options for the treatment of absence seizures, whereas LTG was less effective. For the treatment of myoclonic seizures and infantile spasms, more randomized clinical trials are needed to provide good evidence to guide the clinical decisions of health professionals. Among the AED with an adequate efficacy profile, LTG stands out as the AED with the best tolerability profile, suggesting it may be the best option for the treatment of patients with epilepsy.
29

Bade, Alexander. "Bayesian portfolio optimization from a static and dynamic perspective /". Münster : Verl.-Haus Monsenstein und Vannerdat, 2009. http://d-nb.info/996985085/04.

30

Kaufmann, Olaf. "Immunhistochemisch gestützte Tumordiagnostik unter besonderer Berücksichtigung von Metastasen bei unbekanntem Primärtumor". Doctoral thesis, Humboldt-Universität zu Berlin, Medizinische Fakultät - Universitätsklinikum Charité, 2001. http://dx.doi.org/10.18452/13777.

Abstract:
Immunohistochemical studies on metastatic carcinomas of unknown primary site are cost-effective and often allow a specific identification of the tumor origin, especially if the metastases are adenocarcinomas by light microscopy. Commercially available site-specific markers include prostate-specific antigen, thyroglobulin, thyroid transcription factor-1, uroplakin III, GCDFP-15, estrogen and progesterone receptors, alpha-fetoprotein, the A103 monoclonal antibody against MART-1, cytokeratins 7 and 20, cytokeratins of basal cell type, p63, carcinoembryonic antigen, CA-125, EMA, vimentin, HepPar-1, and S100 protein. However, immunostainings with most of these markers do not show absolute specificity for a certain primary site. For this reason, histopathologists interpreting staining results with these markers should take into consideration the available clinical data and the histological features of the metastatic carcinoma. These data are necessary to estimate the relative a priori probabilities of possible carcinomas. Based on Bayes' theorem, the a priori probabilities can then be used to calculate the diagnostically relevant predictive values for immunostaining results with the chosen markers.
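A minimal worked sketch of the Bayes'-theorem calculation described above; the a priori probability, sensitivity and specificity are invented for illustration and are not values from the thesis:

```python
# Positive predictive value of an immunostain for a candidate primary site,
# computed with Bayes' theorem. Sensitivity, specificity and the a priori
# probability are hypothetical numbers, not results from this thesis.

def positive_predictive_value(prior, sensitivity, specificity):
    """P(site | positive stain) for a binary marker."""
    p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_pos

# Example: clinical context suggests a 30% a priori probability of a lung
# primary; a marker stains 75% of lung primaries and 5% of other tumours.
print(positive_predictive_value(prior=0.30, sensitivity=0.75, specificity=0.95))
# -> about 0.87, i.e. a positive stain raises the probability from 0.30 to ~0.87
```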
31

Salomão, Marcelo Soares. "Estudo e generalizações do paradoxo de Monty Hall na educação básica". Universidade Federal de Juiz de Fora, 2014. https://repositorio.ufjf.br/jspui/handle/ufjf/789.

Abstract:
We present the Monty Hall paradox and its entertaining aspects as a chance to engage students in the study of probability, offering the opportunity to teach probability in basic education classrooms through activities in the form of games. Despite the difficulty of the subject at the basic education level, it is expected that teachers who read this work will take some of these ideas and turn them into knowledge for their students. First, we present a brief overview of probability theory, a little of the history of the Monty Hall paradox, and some formal and experimental solutions. For the activities, there are simulations of the problem with varying numbers of doors, which help develop student skills such as experimentation, abstraction, and modeling. In order to make use of computational resources, we also suggest an activity using software that is easy to access and handle. This is justified because, with the growth of technology-oriented courses and the greater presence of computing in students' daily lives, the use of such interdisciplinarity is imperative.
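As a minimal sketch of the generalised Monty Hall simulation discussed above (n doors, the host opens every losing door except one), under assumptions matching the standard statement of the problem rather than any specific activity in the thesis:

```python
import random

# Monty Hall with n doors: the host opens every losing door except one,
# then the player either stays or switches. A quick simulation sketch.

def play(n_doors, switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(n_doors)
        choice = random.randrange(n_doors)
        if switch:
            # The host leaves exactly one other closed door; it hides the car
            # unless the first choice was already correct.
            wins += car != choice
        else:
            wins += car == choice
    return wins / trials

for n in (3, 10):
    print(n, "doors: stay ~", play(n, switch=False), " switch ~", play(n, switch=True))
# Theory: staying wins with probability 1/n, switching with (n-1)/n.
```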
32

Pawar, Yash. "Bayes Factors for the Proposition of a Common Source of Amphetamine Seizures". Thesis, Linköpings universitet, Statistik och maskininlärning, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176416.

Abstract:
This thesis addresses the challenge of comparing amphetamine materials to determine whether they originate from the same source or from different sources, using pairwise ratios of peak areas within each material's chromatogram and then modeling the differences between the ratios for each comparison as the basis for evaluation. An existing method that uses these ratios to determine the sum of significant differences between each pair of compared materials is evaluated first. The outcome of this evaluation suggests that the distributions for comparisons of samples originating from the same source and comparisons of samples originating from different sources overlap, leading to uncertainty in the conclusions. In this work, the differences between the ratios of peak areas have been modeled using a feature-based approach. Because the feature space is quite large, discriminant analysis methods such as Linear Discriminant Analysis (LDA) and Partial Least Squares Discriminant Analysis (PLS-DA) have been implemented to perform classification by dimensionality reduction. Another popular method based on the nearest centroid classifier, the nearest shrunken centroid, is also applied; it performs classification on shrunken centroids of the features. The results of all the methods have been analysed to obtain classification results for the classes +1 (samples originate from the same source) and -1 (samples originate from different sources). Likelihood ratios of each class for each of these methods have also been evaluated using the Empirical Cross-Entropy (ECE) method to determine the robustness of the classifiers. All three models perform fairly well in terms of classification, with LDA being the most robust and reliable in its predictions.
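A minimal sketch of the pairwise peak-area ratio features described above, applied to two made-up chromatograms; the peak areas and the comparison rule are illustrative only:

```python
from itertools import combinations

# Sketch: turn each chromatogram into pairwise peak-area ratios and compare
# two samples by the differences between their ratios. All areas are made up.

def pairwise_ratios(peak_areas):
    """Ratio of every pair of peak areas within one chromatogram."""
    return {(i, j): peak_areas[i] / peak_areas[j]
            for i, j in combinations(range(len(peak_areas)), 2)}

def ratio_differences(sample_a, sample_b):
    """Feature vector for one comparison: per-pair absolute ratio differences."""
    ra, rb = pairwise_ratios(sample_a), pairwise_ratios(sample_b)
    return {pair: abs(ra[pair] - rb[pair]) for pair in ra}

seizure_1 = [120.0, 35.5, 60.2, 12.8]   # hypothetical peak areas
seizure_2 = [118.0, 36.0, 59.0, 13.1]

features = ratio_differences(seizure_1, seizure_2)
print(features)   # small differences -> more consistent with a common source
```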
33

Billig, Ian A. "Bayesian Analysis of Systematic Theoretical Errors Models". Ohio University Honors Tutorial College / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ouhonors155619979679762.

34

Banerjee, Samprit. "Bayesian genome-wide QTL mapping for multiple traits". Thesis, Birmingham, Ala. : University of Alabama at Birmingham, 2008. https://www.mhsl.uab.edu/dt/2009r/banerjee.pdf.

35

Jordaan, Aletta Gertruida. "Empirical Bayes estimation of the extreme value index in an ANOVA setting". Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86216.

Abstract:
Thesis (MComm)-- Stellenbosch University, 2014.
ENGLISH ABSTRACT: Extreme value theory (EVT) involves the development of statistical models and techniques in order to describe and model extreme events. In order to make inferences about extreme quantiles, it is necessary to estimate the extreme value index (EVI). Numerous estimators of the EVI exist in the literature. However, these estimators are only applicable in the single sample setting. The aim of this study is to obtain an improved estimator of the EVI that is applicable to an ANOVA setting. An ANOVA setting lends itself naturally to empirical Bayes (EB) estimators, which are the main estimators under consideration in this study. EB estimators have not received much attention in the literature. The study begins with a literature study, covering the areas of application of EVT, Bayesian theory and EB theory. Different estimation methods of the EVI are discussed, focusing also on possible methods of determining the optimal threshold. Specifically, two adaptive methods of threshold selection are considered. A simulation study is carried out to compare the performance of different estimation methods, applied only in the single sample setting. First order and second order estimation methods are considered. In the case of second order estimation, possible methods of estimating the second order parameter are also explored. With regards to obtaining an estimator that is applicable to an ANOVA setting, a first order EB estimator and a second order EB estimator of the EVI are derived. A case study of five insurance claims portfolios is used to examine whether the two EB estimators improve the accuracy of estimating the EVI, when compared to viewing the portfolios in isolation. The results showed that the first order EB estimator performed better than the Hill estimator. However, the second order EB estimator did not perform better than the “benchmark” second order estimator, namely fitting the perturbed Pareto distribution to all observations above a pre-determined threshold by means of maximum likelihood estimation.
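As a minimal illustration of the single-sample estimation discussed above, here is a sketch of the Hill estimator of a positive extreme value index applied to simulated Pareto data; the sample, threshold choice and seed are illustrative:

```python
import math
import random

# Hill estimator of a positive extreme value index, using the k largest
# observations. Data are simulated here purely for illustration.

def hill_estimator(sample, k):
    """Average log-excess of the k largest observations over the (k+1)-th."""
    x = sorted(sample, reverse=True)
    threshold = x[k]                      # (k+1)-th largest order statistic
    return sum(math.log(x[i] / threshold) for i in range(k)) / k

random.seed(1)
# Pareto(alpha = 2) sample => true extreme value index 1/alpha = 0.5
data = [random.paretovariate(2.0) for _ in range(5000)]
print(hill_estimator(data, k=200))        # should be roughly 0.5
```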
36

Toribio, Sherwin G. "Bayesian Model Checking Strategies for Dichotomous Item Response Theory Models". Bowling Green State University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1150425606.

37

Sjöqvist, Hugo. "Classifying Forest Cover type with cartographic variables via the Support Vector Machine, Naive Bayes and Random Forest classifiers". Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-58384.

38

Eldud, Omer Ahmed Abdelkarim. "Prediction of protein secondary structure using binary classificationtrees, naive Bayes classifiers and the Logistic Regression Classifier". Thesis, Rhodes University, 2016. http://hdl.handle.net/10962/d1019985.

Abstract:
The secondary structure of proteins is predicted using various binary classifiers. The data are taken from the RS126 database. The original data consist of protein primary and secondary structure sequences encoded using alphabetic letters; these are re-encoded into unary vectors comprising only ones and zeros. Different binary classifiers, namely naive Bayes, logistic regression and classification trees, using hold-out and 5-fold cross validation, are trained on the encoded data. For each of the classifiers three classification tasks are considered, namely helix against not helix (H/∼H), sheet against not sheet (S/∼S) and coil against not coil (C/∼C). The performance of these binary classifiers is compared using the overall accuracy in predicting the protein secondary structure for various window sizes. Our results indicate that hold-out cross validation achieved higher accuracy than 5-fold cross validation. The naive Bayes classifier, using 5-fold cross validation, achieved the lowest accuracy for predicting helix against not helix. The classification tree classifiers, using 5-fold cross validation, achieved the lowest accuracies for both coil against not coil and sheet against not sheet classifications. The accuracy of the logistic regression classifier depends on the window size; there is a positive relationship between accuracy and window size. The logistic regression approach achieved the highest accuracy of the three classifiers for each classification task: predicting helix against not helix with accuracy 77.74 percent, sheet against not sheet with accuracy 81.22 percent, and coil against not coil with accuracy 73.39 percent. It is noted that comparing classifiers would be easier if the classification process could be carried out entirely in R; likewise, assessing the logistic regression classifiers would be easier if SPSS had a function to determine the accuracy of the logistic regression classifier.
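A minimal sketch of the unary (one-hot) window encoding described above, using a toy four-letter alphabet instead of the twenty amino acids; the alphabet, sequence and window size are illustrative, not the RS126 encoding itself:

```python
# Sketch of the unary encoding of a sliding window of residues: each letter
# becomes a 0/1 indicator vector and the window's vectors are concatenated.
# A toy 4-letter alphabet is used here instead of the 20 amino acids.

ALPHABET = "ACDE"                       # illustrative, not the real alphabet

def one_hot(residue):
    return [1 if residue == a else 0 for a in ALPHABET]

def encode_windows(sequence, window=3):
    """Return one concatenated unary vector per centred window position."""
    half = window // 2
    vectors = []
    for i in range(half, len(sequence) - half):
        window_residues = sequence[i - half:i + half + 1]
        vectors.append([bit for r in window_residues for bit in one_hot(r)])
    return vectors

print(encode_windows("ACDEA", window=3))
# Each inner list has window * len(ALPHABET) = 12 zeros and ones; vectors of
# this kind are what the binary classifiers described above are trained on.
```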
39

Libby, Eric. "Investigations into the design and dissection of genetic networks". Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=103265.

Abstract:
The sequencing of the human genome revealed that the number of genes does not explain why humans are different from other organisms like mice and dogs. Instead, it is how genes interact with each other and the environment that separates us from other organisms. This motivates the study of genetic networks and, consequently, my research. My work delves into the roles that simple genetic networks play in a cell and explores the biotechnological aspects of how to uncover such genes and their interactions in experimental models.
Cells must respond to the extracellular environment to contract, migrate, and live. Cells, however, are subject to stochastic fluctuations in protein concentrations. I investigate how cells make important decisions such as gene transcription based on noisy measurements of the extracellular environment. I propose that genetic networks perform Bayesian inference as a way to consider the probabilistic nature of these measurements and make the best decision. With mathematical models, I show that allosteric repressors and activators can correctly infer the state of the environment despite fluctuating concentrations of molecules. Viewing transcriptional networks as inference modules explains previous experimental data. I also discover that the particular inference problem determines whether repressors or activators are better.
Next, I explore the genetic underpinnings of two canine models of atrial fibrillation: atrial tachypacing and ventricular tachypacing. Using Affymetrix microarrays, I find that the genetic signatures of these two models are significantly different both in magnitude and in class of genes expressed. The ventricular tachypacing model has thousands of transcripts differentially expressed with little overlap between 24 hours and 2 weeks, suggesting independent mechanisms. The atrial tachypacing model demonstrates an adaptation as the number of genes found changed decreases with increasing time to the point that no genes are changed at 6 weeks. I use higher level analysis to find that extracellular matrix components are among the most changed in ventricular tachypacing and that genes like connective tissue growth factor may be responsible.
Finally, I generalize the main problem of microarray analysis into an evaluation problem of choosing between two competing options based on the scores of many independent judges. In this context, I rediscover the voting paradox and compare two different solutions to this problem: the sum rule and the majority rule. I find that the accuracy of a decision depends on the distribution of the judges' scores. Narrow distributions are better solved with a sum rule, while broad distributions prefer a majority rule. This finding motivates a new algorithm for microarray analysis which outperforms popular existing algorithms on a sample data set and the canine data set examined earlier. A cost analysis reveals that the optimal number of judges depends on the ratio of the cost of a wrong decision to the cost of a judge.
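As a minimal illustration of the two combination rules compared above, the following sketch applies the sum rule and the majority rule to made-up scores from five independent judges:

```python
# Sketch: choose between options A and B from the scores of several
# independent judges, by the sum rule and by the majority rule.
# All scores here are invented for illustration.

def sum_rule(scores_a, scores_b):
    return "A" if sum(scores_a) > sum(scores_b) else "B"

def majority_rule(scores_a, scores_b):
    votes_a = sum(a > b for a, b in zip(scores_a, scores_b))
    return "A" if votes_a > len(scores_a) / 2 else "B"

# Five judges: most prefer A slightly, one strongly prefers B.
scores_a = [0.55, 0.60, 0.52, 0.58, 0.10]
scores_b = [0.45, 0.40, 0.48, 0.42, 0.95]

print("sum rule:     ", sum_rule(scores_a, scores_b))       # B (outlier dominates)
print("majority rule:", majority_rule(scores_a, scores_b))  # A (4 of 5 judges)
```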
40

FIGUEREDO, Rosângela da Silva. "Sobre modelos de covariância com erros elípticos: uma abordagem Bayesiana". Universidade Federal de Campina Grande, 2007. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/1182.

Abstract:
In this work, the covariance model with errors in the variables, where the errors have elliptical distributions, is studied from a Bayesian perspective. To do so, we use non-informative prior information of the type proposed by Jeffreys (1961) and make inferences about the parameters of the studied model. It is shown that any elliptical covariance model with errors in the variables, combined with a non-informative prior of this type, leads to the same posterior analyses as those corresponding to the normal covariance model with errors in the variables.
41

Prado, Rogério Ruscitto do. "Análise espaço-temporal dos casos de aids no Estado de São Paulo - 1990 a 2004". Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/5/5137/tde-12092008-133818/.

Abstract:
Introduction: The State of São Paulo, with approximately 40% of the notified AIDS cases in Brazil, offers a favorable setting for a space-time analysis of this disease, which can provide a better understanding of the dissemination of HIV/AIDS. Objective: To evaluate the adequacy of space-time models for analyzing the dynamics of AIDS dissemination according to geographic areas. Methods: Cases of AIDS reported to the Sistema de Informação de Agravos de Notificação (National Disease Reporting System, SINAN - Ministry of Health) from 1990 to 2004 for people aged 15 years or older were selected. Relative risks of AIDS for each sex and for periods of 3 years were estimated using full Bayesian models assuming local and global geographic dissemination. Results: The analyses showed that these models were adequate to explain AIDS dissemination in the State of São Paulo and clearly captured the processes of growth among females and in small cities. Among the 50 municipalities with the largest relative risks of AIDS in the last period of the study, the majority were in the countryside. Estimated growth rates of AIDS among females were generally between 200% and 300%, while for males they were between 100% and 200%. Conclusion: The Bayesian model with global dissemination was more adequate to explain the AIDS epidemic in the State of São Paulo, since no spatial spreading was observed but rather a local expansion of the disease. The models were consistent with the processes of growth among females and in small cities described in the literature, indicating their adequacy.
42

Haneuse, Sebastian J. P. A. "Ecological studies using supplemental case-control data /". Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/9595.

43

Hudson, Derek Lavell. "Improving Accuracy in Microwave Radiometry via Probability and Inverse Problem Theory". Diss., CLICK HERE for online access, 2009. http://contentdm.lib.byu.edu/ETD/image/etd3244.pdf.

44

Aboalela, Rania Anwar. "An Assessment of Knowledge by Pedagogical Computation on Cognitive Level mapped Concept Graphs". Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1496941747313396.

45

Zhou, Chuan. "A Bayesian model for curve clustering with application to gene expression data analysis /". Thesis, Connect to this title online; UW restricted, 2003. http://hdl.handle.net/1773/9576.

46

Bauder, David. "Bayesian Inference for High-Dimensional Data with Applications to Portfolio Theory". Doctoral thesis, Humboldt-Universität zu Berlin, 2018. http://dx.doi.org/10.18452/19598.

Abstract:
Usually, the weights of portfolio assets are expressed as a combination of the product of the precision matrix and the mean vector. These parameters have to be estimated in practical applications, but it is a challenge to describe the associated estimation risk of this product. It is demonstrated in this thesis that a suitable Bayesian approach not only leads to an easily accessible posterior distribution, but also to easily interpretable risk measures, including, for example, the default probability of the portfolio at all relevant points in time. To approach this task, the parameters are endowed with their conjugate priors. Using results from the theory of multivariate distributions, stochastic representations are derived for the portfolio parameters, for example for the portfolio weights or the efficient frontier. These representations not only allow Bayes estimates of these parameters to be derived, but are also computationally highly efficient, since all the necessary random variables are drawn from well-known and easily accessible distributions; in particular, Markov chain Monte Carlo methods are not necessary. These methods are applied to a multi-period portfolio for an exponential utility function, to the tangency portfolio, to estimation of the efficient frontier, and to a general mean-variance approach. Stochastic representations and Bayes estimates are derived for all relevant parameters. The practicability and flexibility, as well as specific properties, are demonstrated using either real data or simulations.
Style APA, Harvard, Vancouver, ISO itp.
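As a rough illustration of the idea described above (direct Monte Carlo from a conjugate posterior instead of MCMC), the sketch below draws the covariance matrix from an inverse-Wishart posterior under a diffuse prior and maps each draw to global minimum-variance weights. It is not the thesis's closed-form stochastic representation; the return data are simulated and all settings are hypothetical.

```python
import numpy as np
from scipy.stats import invwishart

# Minimal illustrative sketch only: with a conjugate/diffuse prior, posterior
# draws of the covariance matrix are available in closed form, so portfolio
# quantities can be simulated without MCMC. This is NOT the thesis's
# closed-form stochastic representation; returns are simulated and all
# settings are invented.

rng = np.random.default_rng(1)
n, k = 250, 4
returns = rng.normal(0.0005, 0.01, size=(n, k))   # hypothetical daily asset returns

xbar = returns.mean(axis=0)
S = (returns - xbar).T @ (returns - xbar)          # scatter matrix

weights = []
for _ in range(2000):
    # Sigma | data ~ Inverse-Wishart(n - 1, S) under a diffuse prior
    Sigma = invwishart.rvs(df=n - 1, scale=S, random_state=rng)
    # Global minimum-variance weights for this posterior draw
    ones = np.ones(k)
    w = np.linalg.solve(Sigma, ones)
    weights.append(w / (ones @ w))
    # (Mean-dependent quantities would additionally draw mu | Sigma ~ N(xbar, Sigma / n).)

weights = np.array(weights)
print("posterior mean GMV weights:", weights.mean(axis=0).round(3))
print("posterior std of weights  :", weights.std(axis=0).round(3))
```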
47

Silva, Gustavo Miranda da. "Monotonicidade em testes de hipóteses". Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-01102014-114225/.

Pełny tekst źródła
Streszczenie:
Most texts in the hypothesis-testing literature deal with optimality criteria for a single decision problem. To a lesser extent, there are texts on the problem of simultaneous hypothesis testing and on the logical consistency of its optimal solutions. For instance, the following property should hold in simultaneous hypothesis testing: if a hypothesis H1 implies a hypothesis H0, then, on the basis of the same observed sample, rejecting H0 should necessarily imply rejecting H1. Here this property is called monotonicity. To study it from a more general point of view, this work first defines the notion of a class of hypothesis tests, which extends the test function to a sigma-field of possible null hypotheses, and then introduces a formal definition of monotonicity. It is also shown, through simple examples, that for a fixed significance level the class of Generalized Likelihood Ratio (GLR) tests does not satisfy monotonicity, in contrast with tests formulated from a Bayesian perspective, such as Bayes tests based on posterior probabilities, Lindley's test, and the Full Bayesian Significance Test (FBST). Finally, sufficient conditions for a class of hypothesis tests to be monotone are established, when possible, under a decision-theoretic approach. (See the illustrative sketch after this entry.)
Style APA, Harvard, Vancouver, ISO itp.
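The monotonicity property discussed above is easy to see for a Bayes test that rejects a hypothesis when its posterior probability falls below a cutoff, since nested hypotheses have ordered posterior probabilities. The sketch below checks this on a hypothetical binomial example with a Beta(1, 1) prior; the data, hypotheses, and cutoff are invented for illustration.

```python
from scipy.stats import beta

# Minimal illustrative sketch only: a Bayes test that rejects a hypothesis H
# when P(H | data) < cutoff is monotone over nested hypotheses, because
# H1 being a subset of H0 forces P(H1 | data) <= P(H0 | data). Binomial data
# with a Beta(1, 1) prior; data, hypotheses, and cutoff are invented.

n, successes = 20, 15
posterior = beta(1 + successes, 1 + n - successes)   # theta | data ~ Beta(16, 6)

# Nested null hypotheses: H1 (theta <= 0.3) implies H0 (theta <= 0.5)
p_H0 = posterior.cdf(0.5)
p_H1 = posterior.cdf(0.3)

cutoff = 0.05
reject_H0 = p_H0 < cutoff
reject_H1 = p_H1 < cutoff

print(f"P(H0 | x) = {p_H0:.4f}, P(H1 | x) = {p_H1:.4f}")
print(f"reject H0: {reject_H0}, reject H1: {reject_H1}")
# Monotonicity: rejecting H0 must force rejecting H1.
assert (not reject_H0) or reject_H1
```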
48

Bays, Matthew Jason. "Stochastic Motion Planning for Applications in Subsea Survey and Area Protection". Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/26763.

Pełny tekst źródła
Streszczenie:
This dissertation addresses high-level path planning and cooperative control for autonomous vehicles. The objective of our work is to closely and rigorously incorporate classification and detection performance into path planning algorithms, which is not addressed by typical approaches found in the literature. We present novel path planning algorithms for two different applications in which autonomous vehicles are tasked with engaging targets within a stochastic environment. In the first application, an autonomous underwater vehicle (AUV) must reacquire and identify clusters of discrete underwater objects. Our planning algorithm ensures that mission objectives are met with a desired probability of success. The utility of our approach is verified through field trials. In the second application, a team of vehicles must intercept mobile targets before the targets enter a specified area. We provide a formal framework for solving the second problem by jointly minimizing a cost function based on Bayes risk. (See the illustrative sketch after this entry.)
Ph. D.
Style APA, Harvard, Vancouver, ISO itp.
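As a rough illustration of the Bayes-risk ingredient mentioned above, the sketch below picks the action (engage or ignore a contact) that minimizes posterior expected loss. The loss matrix and posterior probabilities are hypothetical and are not taken from the dissertation.

```python
import numpy as np

# Minimal illustrative sketch only: a Bayes-risk decision of the kind the
# planning framework builds on. Given a posterior probability that a contact
# is a true target, choose the action (engage or ignore) with the smallest
# posterior expected loss. The loss matrix and posteriors are invented.

loss = np.array([
    # true state:  target, clutter
    [0.0,  5.0],   # action 0: engage (cost of engaging clutter)
    [20.0, 0.0],   # action 1: ignore (cost of missing a real target)
])

def bayes_action(p_target):
    """Return (best action index, expected loss of each action)."""
    posterior = np.array([p_target, 1.0 - p_target])
    expected_loss = loss @ posterior            # Bayes risk of each action
    return int(np.argmin(expected_loss)), expected_loss

for p in (0.1, 0.3, 0.7):
    a, r = bayes_action(p)
    print(f"P(target) = {p:.1f} -> {'engage' if a == 0 else 'ignore'}, risks = {r.round(2)}")
```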
49

Braun, Christelle. "Quantitative Approaches to Information Hiding". Phd thesis, Ecole Polytechnique X, 2010. http://tel.archives-ouvertes.fr/tel-00527367.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
50

Barette, Tammy S. "A Bayesian approach to the estimation of adult skeletal age assessing the facility of multifactorial and three-dimensional methods to improve accuracy of age estimation /". Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1180543680.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.