Dissertations / Theses on the topic 'Bayesian statistical decision theory - Graphic methods'

To see the other types of publications on this topic, follow the link: Bayesian statistical decision theory - Graphic methods.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Bayesian statistical decision theory - Graphic methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

馮榮錦 and Wing-kam Tony Fung. "Analysis of outliers using graphical and quasi-Bayesian methods." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1987. http://hub.hku.hk/bib/B31230842.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Armstrong, Helen School of Mathematics UNSW. "Bayesian estimation of decomposable Gaussian graphical models." Awarded by: University of New South Wales. School of Mathematics, 2005. http://handle.unsw.edu.au/1959.4/24295.

Full text
Abstract:
This thesis explains to statisticians what graphical models are and how to use them for statistical inference; in particular, how to use decomposable graphical models for efficient inference in covariance selection and multivariate regression problems. The first aim of the thesis is to show that decomposable graphical models are worth using within a Bayesian framework. The second aim is to make the techniques of graphical models fully accessible to statisticians. To achieve these aims the thesis makes a number of statistical contributions. First, it proposes a new prior for decomposable graphs and a simulation methodology for estimating this prior. Second, it proposes a number of Markov chain Monte Carlo sampling schemes based on graphical techniques. The thesis also presents some new graphical results, and some existing results are re-proved to make them more readily understood. Appendix 8.1 contains all the programs written to carry out the inference discussed in the thesis, together with both a summary of the theory on which they are based and a line-by-line description of how each routine works.
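A note on the graph class involved: a graph is decomposable exactly when it is chordal, and chordality can be tested with maximum cardinality search followed by the Tarjan-Yannakakis perfect-elimination check. The sketch below is illustrative only, not code from the thesis:

```python
def is_decomposable(adj):
    """adj: dict vertex -> set of neighbours of an undirected graph.
    Returns True iff the graph is chordal, i.e. decomposable."""
    n = len(adj)
    weight = {v: 0 for v in adj}
    unnumbered = set(adj)
    num = {}
    for k in range(n, 0, -1):                           # number vertices n ... 1
        v = max(unnumbered, key=lambda u: weight[u])    # most numbered neighbours
        num[v] = k
        unnumbered.remove(v)
        for w in adj[v]:
            if w in unnumbered:
                weight[w] += 1
    # Perfect-elimination check: the higher-numbered neighbours of each vertex,
    # apart from the lowest-numbered one u, must all be adjacent to u.
    for v in adj:
        later = [w for w in adj[v] if num[w] > num[v]]
        if later:
            u = min(later, key=lambda w: num[w])
            if any(w != u and w not in adj[u] for w in later):
                return False
    return True

square = {'a': {'b', 'd'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'a', 'c'}}
chorded = {'a': {'b', 'c', 'd'}, 'b': {'a', 'c'},
           'c': {'a', 'b', 'd'}, 'd': {'a', 'c'}}
print(is_decomposable(square))    # False: the 4-cycle has no chord
print(is_decomposable(chorded))   # True: the chord a-c makes it chordal
```

MCMC samplers over decomposable graphs typically rely on such a test (or an incremental version of it) to reject proposed edge changes that leave the decomposable class.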
APA, Harvard, Vancouver, ISO, and other styles
3

Fung, Wing-kam Tony. "Analysis of outliers using graphical and quasi-Bayesian methods /." [Hong Kong] : University of Hong Kong, 1987. http://sunzi.lib.hku.hk/hkuto/record.jsp?B1236146X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Thaithara, Balan Sreekumar. "Bayesian methods for astrophysical data analysis." Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.607847.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Metcalfe, Leanne N. "Bayesian methods in determining health burdens." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/31809.

Full text
Abstract:
Thesis (Ph.D)--Biomedical Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Vidakovic, Brani; Committee Member: Griffin, Paul; Committee Member: Kemp, Charlie; Committee Member: Sprigle, Stephen; Committee Member: Villivalam, Arun. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
6

Pei, Xin, and 裴欣. "Bayesian approach to road safety analyses." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B46591989.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chiu, Jing-Er. "Applications of bayesian methods to arthritis research /." free to MU campus, to others for purchase, 2001. http://wwwlib.umi.com/cr/mo/fullcit?p3036813.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chan, Ka Hou. "Bayesian methods for solving linear systems." Thesis, University of Macau, 2011. http://umaclib3.umac.mo/record=b2493250.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lau, Wai Kwong. "Bayesian nonparametric methods for some econometric problems /." View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?ISMT%202005%20LAU.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Larocque, Jean-René. "Advanced bayesian methods for array signal processing /." *McMaster only, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
11

Riggelsen, Carsten. "Approximation methods for efficient learning of Bayesian networks /." Amsterdam ; Washington, DC : IOS Press, 2008. http://www.loc.gov/catdir/toc/fy0804/2007942192.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

林達明 and Daming Lin. "Reliability growth models and reliability acceptance sampling plans from a Bayesian viewpoint." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B3123429X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

So, Moon-tong, and 蘇滿堂. "Applications of Bayesian statistical model selection in social science research." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B39312951.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

蕭偉成 and Wai-shing Siu. "On a subjective modelling of VaR: a Bayesian approach." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31225159.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Chinyamakobvu, Mutsa Carole. "Eliciting and combining expert opinion : an overview and comparison of methods." Thesis, Rhodes University, 2015. http://hdl.handle.net/10962/d1017827.

Full text
Abstract:
Decision makers have long relied on experts to inform their decision making. Expert judgment analysis is a way to elicit and combine the opinions of a group of experts to facilitate decision making. The use of expert judgment is most appropriate when there is a lack of data for obtaining reasonable statistical results. The experts are asked for advice by one or more decision makers who face a specific real decision problem. The decision makers are outside the group of experts and are jointly responsible and accountable for the decision and committed to finding solutions that everyone can live with. The emphasis is on the decision makers learning from the experts. The focus of this thesis is an overview and comparison of the various elicitation and combination methods available. These include the traditional committee method, the Delphi method, the paired comparisons method, the negative exponential model, Cooke’s classical model, the histogram technique, using the Dirichlet distribution in the case of a set of uncertain proportions which must sum to one, and the employment of overfitting. The supra Bayes approach, the determination of weights for the experts, and combining the opinions of experts where each opinion is associated with a confidence level that represents the expert’s conviction of his own judgment are also considered.
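Two of the simplest combination rules in this literature, the linear and logarithmic opinion pools, fit in a few lines. This is a generic sketch, not code from the thesis; the weights stand in for whatever elicitation scheme (equal weights, Cooke-style performance weights, etc.) is in use:

```python
import numpy as np

def linear_pool(probs, weights):
    """Weighted arithmetic mean of the experts' probability vectors."""
    p = np.average(probs, axis=0, weights=weights)
    return p / p.sum()

def log_pool(probs, weights):
    """Weighted geometric mean (logarithmic pool), renormalised to sum to 1."""
    p = np.exp(np.average(np.log(probs), axis=0, weights=weights))
    return p / p.sum()

# Three experts, three outcomes; the weights are illustrative.
experts = np.array([[0.7, 0.2, 0.1],
                    [0.5, 0.3, 0.2],
                    [0.6, 0.1, 0.3]])
w = [0.5, 0.3, 0.2]
print(linear_pool(experts, w))   # [0.62 0.21 0.17]
print(log_pool(experts, w))      # slightly sharper than the linear pool
```

The linear pool preserves unanimity and is easy to explain to decision makers; the logarithmic pool rewards agreement more strongly and is the form that arises naturally in supra-Bayesian treatments.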
APA, Harvard, Vancouver, ISO, and other styles
16

Ng, William Reilly James P. "Advances in wideband array signal processing using numerical Bayesian methods /." *McMaster only, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
17

Woo, Bo-kei, and 胡寶琦. "A new hierarchical Bayesian approach to low-field magnetic resonance imaging." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31226917.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Thomas, Clifford S. "From 'tree' based Bayesian networks to mutual information classifiers : deriving a singly connected network classifier using an information theory based technique." Thesis, University of Stirling, 2005. http://hdl.handle.net/1893/2623.

Full text
Abstract:
For reasoning under uncertainty the Bayesian network has become the representation of choice. However, except where models are considered 'simple' the tasks of construction and inference are provably NP-hard. For modelling larger 'real' world problems this computational complexity has been addressed by methods that approximate the model. The Naive Bayes classifier, which has strong assumptions of independence among features, is a common approach, whilst the class of trees is another less extreme example. In this thesis we propose the use of an information theory based technique as a mechanism for inference in Singly Connected Networks. We call this a Mutual Information Measure classifier, as it corresponds to the restricted class of trees built from mutual information. We show that the new approach provides for both an efficient and localised method of classification, with performance accuracies comparable with the less restricted general Bayesian networks. To improve the performance of the classifier, we additionally investigate the possibility of expanding the class Markov blanket by use of a Wrapper approach and further show that the performance can be improved by focusing on the class Markov blanket and that the improvement is not at the expense of increased complexity. Finally, the two methods are applied to the task of diagnosing the 'real' world medical domain, Acute Abdominal Pain. Known to be both a difficult and challenging domain to classify, the objective was to investigate the optimality claims, in respect of the Naive Bayes classifier, that some researchers have argued, for classifying in this domain. Despite some loss of representation capabilities we show that the Mutual Information Measure classifier can be effectively applied to the domain and also provides a recognisable qualitative structure without violating 'real' world assertions.
In respect of its 'selective' variant we further show that the improvement achieves a comparable predictive accuracy to the Naive Bayes classifier and that the Naive Bayes classifier's 'overall' performance is largely due to the contribution of the majority group Non-Specific Abdominal Pain, a group of exclusion.
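The 'restricted class of trees built from mutual information' is the classic Chow-Liu construction: weight each pair of variables by their empirical mutual information and keep a maximum-weight spanning tree. A minimal sketch on synthetic binary data (illustrative, not the thesis's implementation):

```python
import numpy as np
from collections import Counter
from itertools import combinations

def mutual_information(x, y):
    """Empirical mutual information (in nats) of two discrete sequences."""
    n = len(x)
    pxy, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    return sum((c / n) * np.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def chow_liu_tree(data):
    """Maximum-MI spanning tree over the columns of `data` (Kruskal)."""
    d = data.shape[1]
    edges = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                    for i, j in combinations(range(d), 2)), reverse=True)
    parent = list(range(d))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 1000)
x1 = np.where(rng.random(1000) < 0.9, x0, 1 - x0)   # noisy copy of x0
x2 = rng.integers(0, 2, 1000)                       # independent noise
tree = chow_liu_tree(np.column_stack([x0, x1, x2]))
print(tree)   # the strong edge (0, 1) is always kept
```

Because the tree has d - 1 edges, inference over the resulting classifier is linear in the number of features, which is the efficiency the abstract refers to.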
APA, Harvard, Vancouver, ISO, and other styles
19

Yee, Irene Mei Ling. "Local parametric poisson models for fisheries data." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/28360.

Full text
Abstract:
The Poisson process is a common model for count data. However, a global Poisson model is inadequate for sparse data such as the marked salmon recovery data that have huge extraneous variations and noise. An empirical Bayes model, which enables information to be aggregated to overcome the lack of information from data in individual cells, is thus developed to handle these data. The method fits a local parametric Poisson model to describe the variation at each sampling period and incorporates this approach with a conventional local smoothing technique to remove noise. Finally, the overdispersion relative to the Poisson model is modelled by mixing these locally smoothed Poisson models in an appropriate way. This method is then applied to the marked salmon data to obtain the overall patterns and the corresponding credibility intervals for the underlying trend in the data.
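The borrowing of strength across cells described here is the empirical-Bayes idea. A toy Gamma-Poisson version (method-of-moments prior, deliberately much simpler than the thesis's local model) shows the shrinkage effect on sparse counts:

```python
import numpy as np

def eb_gamma_poisson(counts):
    """Empirical-Bayes estimates of Poisson rates: fit a Gamma(a, rate b)
    prior by method of moments, then return the Gamma posterior means."""
    m, v = counts.mean(), counts.var(ddof=1)
    extra = max(v - m, 1e-9)        # extra-Poisson (between-cell) variation
    b = m / extra                   # prior rate
    a = m * b                       # prior shape, so the prior mean is m
    return (a + counts) / (b + 1.0)

counts = np.array([0.0, 1.0, 2.0, 10.0])
print(eb_gamma_poisson(counts))   # every cell is pulled toward the mean 3.25
```

Cells with little data are pulled strongly toward the overall mean, while well-supported cells move little; local smoothing, as in the thesis, applies the same idea within a sliding window rather than globally.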
Science, Faculty of
Statistics, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
20

Endres, Dominik M. "Bayesian and information-theoretic tools for neuroscience." Thesis, St Andrews, 2006. http://hdl.handle.net/10023/162.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Fang, Fang. "A simulation study for Bayesian hierarchical model selection methods." View electronic thesis (PDF), 2009. http://dl.uncw.edu/etd/2009-2/fangf/fangfang.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Negash, Efrem Ocubamicael. "Risk and admissibility for a Weibull class of distributions." Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/50086.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2004.
ENGLISH ABSTRACT: The Bayesian approach to decision-making is considered in this thesis for reliability/survival models pertaining to a Weibull class of distributions. A generalised right censored sampling scheme has been assumed and implemented. The Jeffreys' prior for the inverse mean lifetime and the survival function of the exponential model were derived. The consequent posterior distributions of these two parameters were obtained using this non-informative prior. In addition to the Jeffreys' prior, the natural conjugate prior was considered as a prior for the parameter of the exponential model and the consequent posterior distribution was derived. In many reliability problems, overestimating a certain parameter of interest is more detrimental than underestimating it and hence, the LINEX loss function was used to estimate the parameters and their consequent risk measures. Moreover, analogous derivations have been carried out relative to the commonly-used symmetrical squared error loss function. The risk function, the posterior risk and the integrated risk of the estimators were obtained and are regarded in this thesis as the risk measures. The performance of the estimators has been compared relative to these risk measures. For the Jeffreys' prior under the squared error loss function, the comparison resulted in crossing-over risk functions and hence, none of these estimators is completely admissible. However, relative to the LINEX loss function, it was found that a correct Bayesian estimator outperforms an incorrectly chosen alternative. On the other hand, for the conjugate prior, crossing-over of the risk functions of the estimators was evident as a result. In comparing the performance of the Bayesian estimators, whenever closed-form expressions of the risk measures do not exist, numerical techniques such as Monte Carlo procedures were used. In similar fashion, the posterior risks and integrated risks were used in the performance comparisons.
The Weibull pdf, with its scale and shape parameters, was also considered as a reliability model. The Jeffreys' prior and the consequent posterior distribution of the scale parameter of the Weibull model have also been derived when the shape parameter is known. In this case, the estimation process of the scale parameter is analogous to the exponential model. For the case when both parameters of the Weibull model are unknown, the Jeffreys' and the reference priors have been derived and the computational difficulty of the posterior analysis has been outlined. The Jeffreys' prior for the survival function of the Weibull model has also been derived, when the shape parameter is known. In all cases, two forms of the scalar estimation error have been used to compare as many risk measures as possible. The performance of the estimators was compared for acceptability in a decision-making framework. This can be seen as a type of procedure that addresses the robustness of an estimator relative to a chosen loss function.
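For the Gamma posteriors that arise in this conjugate setting, the LINEX-optimal estimate has a closed form: with loss exp(a(d - t)) - a(d - t) - 1, the Bayes estimate is -(1/a) ln E[exp(-a t)], which for a Gamma(shape alpha, rate beta) posterior is (alpha/a) ln(1 + a/beta). A quick illustration with made-up posterior parameters, not values from the thesis:

```python
import math

def linex_estimate(alpha, beta, a):
    """Bayes estimate under LINEX loss for a Gamma(shape=alpha, rate=beta)
    posterior; a > 0 penalises overestimation more, a < 0 underestimation.
    Requires a > -beta for the expectation to exist."""
    return (alpha / a) * math.log(1.0 + a / beta)

# Gamma(5, rate 2) posterior: the squared-error (posterior-mean) estimate is 2.5.
print(linex_estimate(5, 2, a=1.0))    # about 2.03, pulled below the mean
print(linex_estimate(5, 2, a=-1.0))   # about 3.47, pushed above the mean
```

As a tends to 0 the LINEX estimate tends to the posterior mean, recovering the squared-error case the thesis compares against.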
APA, Harvard, Vancouver, ISO, and other styles
23

Chen, Hongshu. "Sampling-based Bayesian latent variable regression methods with applications in process engineering." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1189650596.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Miyamoto, Kazutoshi Seaman John Weldon. "Bayesian and maximum likelihood methods for some two-segment generalized linear models." Waco, Tex. : Baylor University, 2008. http://hdl.handle.net/2104/5233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Nortje, Willem Daniel. "Comparison of Bayesian learning and conjugate gradient descent training of neural networks." Pretoria : [s.n.], 2001. http://upetd.up.ac.za/thesis/available/etd-11092004-091241.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Kapur, Loveena. "Investigation of artificial neural networks, alternating conditional expectation, and Bayesian methods for reservoir characterization /." Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Liu, Jie. "Novel Bayesian Methods for Disease Mapping: An Application to Chronic Obstructive Pulmonary Disease." Link to electronic thesis, 2002. http://www.wpi.edu/Pubs/ETD/Available/etd-0501102-110350.

Full text
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: latent class model; Poisson regression model; Metropolis-Hastings sampler; order restriction; disease mapping. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
28

Hecht, Marie B. "A comparison of Bayesian and classical statistical techniques used to identify hazardous traffic intersections." Thesis, The University of Arizona, 1988. http://hdl.handle.net/10150/276795.

Full text
Abstract:
The accident rate at an intersection is one attribute used to evaluate the hazard associated with the intersection. Two techniques traditionally used to make such evaluations are the rate-quality technique and a technique based on the confidence interval of classical statistics. Both of these techniques label intersections as hazardous if their accident rate is greater than some critical accident rate determined by the technique. An alternative technique is one based on a Bayesian analysis of available accident number and traffic volume data. In contrast to the two classic techniques, the Bayesian technique identifies an intersection as hazardous based on a probabilistic assessment of accident rates. The goal of this thesis is to test and compare the ability of the three techniques to accurately identify traffic intersections known to be hazardous. Test data is generated from an empirical distribution of accident rates. The techniques are then applied to the generated data and compared based on the simulation results.
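The probabilistic assessment that the Bayesian technique relies on can be sketched with a conjugate Gamma-Poisson model: the posterior probability that an intersection's accident rate exceeds a critical value is computed directly, instead of comparing a point estimate against a cut-off. All numbers and parameter names below are illustrative, not taken from the thesis:

```python
import random

def prob_hazardous(accidents, exposure, prior_shape, prior_rate,
                   critical_rate, draws=20000, seed=1):
    """Posterior P(rate > critical_rate) for Poisson(rate * exposure) counts
    under a Gamma(prior_shape, prior_rate) prior; conjugacy gives a Gamma
    posterior, sampled here with the stdlib gamma sampler."""
    rng = random.Random(seed)
    shape = prior_shape + accidents       # posterior shape
    rate = prior_rate + exposure          # posterior rate
    hits = sum(rng.gammavariate(shape, 1.0 / rate) > critical_rate
               for _ in range(draws))
    return hits / draws

# 12 accidents over 4 (million entering vehicles), region-wide Gamma(2, 1) prior:
p = prob_hazardous(12, exposure=4.0, prior_shape=2, prior_rate=1,
                   critical_rate=2.0)
print(p)   # around 0.86; flag the intersection if this exceeds, say, 0.8
```

The contrast with the two classical techniques is that the decision threshold is placed on a probability rather than on the raw accident rate, so sites with little traffic volume are not flagged on the strength of one or two chance accidents.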
APA, Harvard, Vancouver, ISO, and other styles
29

Chau, Ka-ki, and 周嘉琪. "Informative drop-out models for longitudinal binary data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B2962714X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Swallow, Ben. "Bayesian multi-species modelling of non-negative continuous ecological data with a discrete mass at zero." Thesis, University of St Andrews, 2015. http://hdl.handle.net/10023/9626.

Full text
Abstract:
Severe declines in the number of some songbirds over the last 40 years have caused heated debate amongst interested parties. Many factors have been suggested as possible causes for these declines, including an increase in the abundance and distribution of an avian predator, the Eurasian sparrowhawk Accipiter nisus. To test for evidence for a predator effect on the abundance of its prey, we analyse data on 10 species visiting garden bird feeding stations monitored by the British Trust for Ornithology in relation to the abundance of sparrowhawks. We apply Bayesian hierarchical models to data relating to averaged maximum weekly counts from a garden bird monitoring survey. These data are essentially continuous, bounded below by zero, but for many species show a marked spike at zero that many standard distributions would not be able to account for. We use the Tweedie distributions, which for certain areas of parameter space relate to continuous nonnegative distributions with a discrete probability mass at zero, and are hence able to deal with the shape of the empirical distributions of the data. The methods developed in this thesis begin by modelling single prey species independently with an avian predator as a covariate, using MCMC methods to explore parameter and model spaces. This model is then extended to a multiple-prey species model, testing for interactions between species as well as synchrony in their response to environmental factors and unobserved variation. Finally we use a relatively new methodological framework, namely the SPDE approach in the INLA framework, to fit a multi-species spatio-temporal model to the ecological data. The results from the analyses are consistent with the hypothesis that sparrowhawks are suppressing the numbers of some species of birds visiting garden feeding stations. Only the species most susceptible to sparrowhawk predation seem to be affected.
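For power parameter 1 < p < 2, a Tweedie random variable is exactly a compound Poisson sum of Gamma terms, which is where the discrete mass at zero comes from: the Poisson count can be zero. A small simulation with illustrative parameters, not the fitted values from the thesis:

```python
import math
import random

def tweedie_sample(mu, p, phi, rng):
    """One draw from Tweedie(mean mu, power p, dispersion phi), 1 < p < 2,
    via its compound Poisson-Gamma representation."""
    lam = mu ** (2 - p) / (phi * (2 - p))       # Poisson rate
    shape = (2 - p) / (p - 1)                   # Gamma shape of each term
    scale = phi * (p - 1) * mu ** (p - 1)       # Gamma scale of each term
    L, n, prod = math.exp(-lam), 0, rng.random()
    while prod > L:                             # Knuth's Poisson sampler
        n += 1
        prod *= rng.random()
    return sum(rng.gammavariate(shape, scale) for _ in range(n))

rng = random.Random(0)
draws = [tweedie_sample(2.0, 1.5, 2.0, rng) for _ in range(20000)]
zero_frac = sum(d == 0 for d in draws) / len(draws)
print(zero_frac)                   # spike at zero: exp(-lam), about 0.24 here
print(sum(draws) / len(draws))     # close to the specified mean mu = 2.0
```

This is why the Tweedie family can absorb the marked spike at zero in the weekly count data without a separate zero-inflation component.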
APA, Harvard, Vancouver, ISO, and other styles
31

Heywood, Ben. "Investigations into the use of quantified Bayesian maximum entropy methods to generate improved distribution maps and biomass estimates from fisheries acoustic survey data /." St Andrews, 2008. http://hdl.handle.net/10023/512.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Liang, Yiheng. "Computational Methods for Discovering and Analyzing Causal Relationships in Health Data." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc804966/.

Full text
Abstract:
Publicly available datasets in health science are often large and observational, in contrast to experimental datasets where a small number of data are collected in controlled experiments. Variables' causal relationships in the observational dataset are yet to be determined. However, there is significant interest in health science to discover and analyze causal relationships from health data, since identified causal relationships will greatly help medical professionals to prevent diseases or to mitigate their negative effects. Recent advances in Computer Science, particularly in Bayesian networks, have initiated a renewed interest in causality research. Causal relationships can possibly be discovered through learning the network structures from data. However, the number of candidate graphs grows at a more than exponential rate with the number of variables. Exact learning for obtaining the optimal structure is thus computationally infeasible in practice. As a result, heuristic approaches are imperative to alleviate the difficulty of computations. This research provides effective and efficient learning tools for local causal discoveries and novel methods of learning causal structures with a combination of background knowledge. Specifically, in the direction of constraint-based structural learning, polynomial-time algorithms for constructing causal structures are designed with first-order conditional independence. Algorithms for efficiently discovering non-causal factors are developed and proved. In addition, when the background knowledge is partially known, methods of graph decomposition are provided so as to reduce the number of conditioned variables. Experiments on both synthetic data and real epidemiological data indicate that the provided methods are applicable to large-scale datasets and scalable for causal analysis in health data.
Following the research methods and experiments, this dissertation gives thoughtful discussions on the reliability of causal discoveries in computational health science research, on complexity, and on implications for health science research.
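The first-order conditional-independence tests used in such constraint-based learners reduce, for continuous data, to first-order partial correlations. A self-contained sketch on a synthetic causal chain (all variable names hypothetical):

```python
import math
import random

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def partial_corr(x, y, z):
    """First-order partial correlation r_xy.z: the x-y correlation left
    after removing the linear effect of the single conditioning variable z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(2000)]
z = [xi + rng.gauss(0, 0.5) for xi in x]     # chain: x -> z -> y
y = [zi + rng.gauss(0, 0.5) for zi in z]
print(pearson(x, y))          # clearly non-zero: x and y are dependent
print(partial_corr(x, y, z))  # near zero: x and y independent given z
```

A constraint-based learner would drop the x-y edge because the first-order test given z succeeds, which is what makes the restriction to first-order conditioning sets run in polynomial time.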
APA, Harvard, Vancouver, ISO, and other styles
33

Hoi, Ka In. "Enhancement of efficiency and robustness of Kalman filter based statistical air quality models by using Bayesian approach." Thesis, University of Macau, 2010. http://umaclib3.umac.mo/record=b2488003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Michell, Justin Walter. "A review of generalized linear models for count data with emphasis on current geospatial procedures." Thesis, Rhodes University, 2016. http://hdl.handle.net/10962/d1019989.

Full text
Abstract:
Analytical problems caused by over-fitting, confounding and non-independence in the data are a major challenge for variable selection. As more variables are tested against a certain data set, there is a greater risk that some will explain the data merely by chance, but will fail to explain new data. The main aim of this study is to employ a systematic and practicable variable selection process for the spatial analysis and mapping of historical malaria risk in Botswana, using data collected from the MARA (Mapping Malaria Risk in Africa) project and environmental and climatic datasets from various sources. Details of how a spatial database is compiled for a statistical analysis to proceed are provided. The automation of the entire process is also explored. The final Bayesian spatial model derived from the non-spatial variable selection procedure using Markov Chain Monte Carlo simulation was fitted to the data. Winter temperature had the greatest effect on malaria prevalence in Botswana. Summer rainfall, maximum temperature of the warmest month, annual range of temperature, altitude and distance to the closest water source were also significantly associated with malaria prevalence in the final spatial model after accounting for spatial correlation. Using this spatial model, malaria prevalence at unobserved locations was predicted, producing a smooth risk map covering Botswana. The automation of both compiling the spatial database and the variable selection procedure proved challenging and could only be achieved in parts of the process. The non-spatial selection procedure proved practical and was able to identify stable explanatory variables and provide an objective means for selecting one variable over another; ultimately, however, it was not entirely successful, because a unique set of spatial variables could not be selected.
APA, Harvard, Vancouver, ISO, and other styles
35

Wheeler, David C. "Diagnostic tools and remedial methods for collinearity in linear regression models with spatially varying coefficients." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1155413322.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Cave, Vanessa M. "Statistical models for the long-term monitoring of songbird populations : a Bayesian analysis of constant effort sites and ring-recovery data." Thesis, St Andrews, 2010. http://hdl.handle.net/10023/885.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Vo, Ba Tuong. "Random finite sets in Multi-object filtering." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2009.0045.

Full text
Abstract:
[Truncated abstract] The multi-object filtering problem is a logical and fundamental generalization of the ubiquitous single-object vector filtering problem. Multi-object filtering essentially concerns the joint detection and estimation of the unknown and time-varying number of objects present, and the dynamic state of each of these objects, given a sequence of observation sets. This problem is intrinsically challenging because, given an observation set, there is no knowledge of which object generated which measurement, if any, and the detected measurements are indistinguishable from false alarms. Multi-object filtering poses significant technical challenges, and is indeed an established area of research, with many applications in both military and commercial realms. The new and emerging approach to multi-object filtering is based on the formal theory of random finite sets, and is a natural, elegant and rigorous framework for the theory of multi-object filtering, originally proposed by Mahler. In contrast to traditional approaches, the random finite set framework is completely free of explicit data associations. The random finite set framework is adopted in this dissertation as the basis for a principled and comprehensive study of multi-object filtering. The premise of this framework is that the collections of object states and measurements at any time are treated as random finite sets. A random finite set is simply a finite-set-valued random variable, i.e. a random variable which is random in both the number of elements and the values of the elements themselves. Consequently, formulating the multi-object filtering problem using random finite set models precisely encapsulates the essence of the multi-object filtering problem, and enables the development of principled solutions therein. '...'
The performance of the proposed algorithm is demonstrated in simulated scenarios, and shown, at least in simulation, to dramatically outperform traditional single-object filtering-in-clutter approaches. The second key contribution is a mathematically principled derivation and practical implementation of a novel algorithm for multi-object Bayesian filtering, based on moment approximations to the posterior density of the random finite set state. The performance of the proposed algorithm is also demonstrated in practical scenarios, and shown to considerably outperform traditional multi-object filtering approaches. The third key contribution is a mathematically principled derivation and practical implementation of a novel algorithm for multi-object Bayesian filtering, based on functional approximations to the posterior density of the random finite set state. The performance of this algorithm is compared with the previous ones, and shown to appreciably outperform them in certain classes of situations. The final key contribution is the definition of a consistent and efficiently computable metric for multi-object performance evaluation. It is shown that the finite set theoretic state space formulation permits a mathematically rigorous and physically intuitive construct for measuring the estimation error of a multi-object filter, in the form of a metric. This metric is used to evaluate and compare the multi-object filtering algorithms developed in this dissertation.
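The metric described in the final contribution is in the spirit of the OSPA (optimal subpattern assignment) construction: per-object localisation error, cut off at c, plus a penalty for cardinality mismatch. A brute-force sketch for small sets of 1-D states (a practical version would solve an optimal assignment problem instead of enumerating permutations; this is illustrative, not the thesis's code):

```python
from itertools import permutations

def ospa(X, Y, c=10.0, p=2):
    """OSPA-style distance between finite sets of 1-D states (small sets
    only: assignments are enumerated by brute force)."""
    if not X and not Y:
        return 0.0
    if len(X) > len(Y):
        X, Y = Y, X                      # ensure |X| <= |Y|
    n = len(Y)
    best = min(sum(min(abs(x - y), c) ** p for x, y in zip(X, perm))
               for perm in permutations(Y, len(X)))
    return ((best + c ** p * (n - len(X))) / n) ** (1.0 / p)

print(ospa([1.0, 2.0], [1.1, 2.2, 30.0]))   # localisation + one missed object
print(ospa([], [5.0]))                       # pure cardinality error: equals c
```

Because the penalty for an unmatched object saturates at the cut-off c, the metric stays bounded and is interpretable as a per-object error, which is what makes it usable for comparing filters with cardinality mistakes.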
APA, Harvard, Vancouver, ISO, and other styles
38

Gilbride, Timothy J. "Models for heterogeneous variable selection." Columbus, Ohio : Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1083591017.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xii, 138 p.; also includes graphics. Includes abstract and vita. Advisor: Greg M. Allenby, Dept. of Business Administration. Includes bibliographical references (p. 134-138).
APA, Harvard, Vancouver, ISO, and other styles
39

Oteniya, Lloyd. "Bayesian belief networks for dementia diagnosis and other applications : a comparison of hand-crafting and construction using a novel data driven technique." Thesis, University of Stirling, 2008. http://hdl.handle.net/1893/497.

Full text
Abstract:
The Bayesian network (BN) formalism is a powerful representation for encoding domains characterised by uncertainty. However, before it can be used it must first be constructed, which is a major challenge for any real-life problem. There are two broad approaches, namely the hand-crafted approach, which relies on a human expert, and the data-driven approach, which relies on data. The former approach is useful, however issues such as human bias can introduce errors into the model. We have conducted a literature review of the expert-driven approach, and we have cherry-picked a number of common methods, and engineered a framework to assist non-BN experts with expert-driven construction of BNs. The latter construction approach uses algorithms to construct the model from a data set. However, construction from data is provably NP-hard. To solve this problem, approximate, heuristic algorithms have been proposed; in particular, algorithms that assume an order between the nodes, therefore reducing the search space. However, traditionally, this approach relies on an expert providing the order among the variables --- an expert may not always be available, or may be unable to provide the order. Nevertheless, if a good order is available, these order-based algorithms have demonstrated good performance. More recent approaches attempt to ''learn'' a good order then use the order-based algorithm to discover the structure. To eliminate the need for order information during construction, we propose a search in the entire space of Bayesian network structures --- we present a novel approach for carrying out this task, and we demonstrate its performance against existing algorithms that search in the entire space and the space of orders. Finally, we employ the hand-crafting framework to construct models for the task of diagnosis in a ''real-life'' medical domain, dementia diagnosis. 
We collect real dementia data from clinical practice, and we apply the data-driven algorithms developed to assess the concordance between the reference models developed by hand and the models derived from real clinical data.
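The order-based, score-driven construction described in the abstract above can be illustrated with a small sketch. This is not Oteniya's algorithm; it is a generic K2-style greedy search for binary data under a given node order, with a simple BIC-style family score, and the scoring details and `max_parents` limit are illustrative assumptions:

```python
import math
from collections import Counter

def bic_family(data, child, parents):
    """BIC-style score of one node given a candidate parent set.
    data is a list of dicts mapping variable name -> 0/1."""
    n = len(data)
    joint = Counter((tuple(r[p] for p in parents), r[child]) for r in data)
    marg = Counter(tuple(r[p] for p in parents) for r in data)
    loglik = sum(c * math.log(c / marg[pa]) for (pa, _), c in joint.items())
    n_params = len(marg)  # one free parameter per observed parent configuration
    return loglik - 0.5 * n_params * math.log(n)

def k2_style_search(data, order, max_parents=2):
    """Greedy parent selection that respects a given node order:
    each node may draw parents only from its predecessors."""
    dag = {}
    for i, node in enumerate(order):
        parents, best = (), bic_family(data, node, ())
        improved = True
        while improved and len(parents) < max_parents:
            improved = False
            for cand in order[:i]:
                if cand in parents:
                    continue
                trial = parents + (cand,)
                score = bic_family(data, node, trial)
                if score > best:
                    parents, best, improved = trial, score, True
        dag[node] = parents
    return dag
```

Because parents are drawn only from predecessors in the order, the result is guaranteed acyclic; removing that restriction, as the thesis proposes, requires searching the much larger space of all DAG structures.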
APA, Harvard, Vancouver, ISO, and other styles
40

Bernardini, Diego Fernando de 1986. "Inferencia Bayesiana para valores extremos." [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/307585.

Full text
Abstract:
Advisor: Laura Leticia Ramos Rifo
Master's dissertation - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Made available in DSpace on 2018-08-15T01:44:09Z (GMT). No. of bitstreams: 1 Bernardini_DiegoFernandode_M.pdf: 1483229 bytes, checksum: ea77acd21778728138eea2f27e59235b (MD5) Previous issue date: 2010
Abstract (translated from the Portuguese): We begin this work by presenting a brief introduction to extreme value theory, studying in particular the behaviour of the random variable that represents the maximum of a sequence of independent and identically distributed random variables. We see that the Extremal Types Theorem (or Fisher-Tippett Theorem) is a fundamental tool for the study of the asymptotic behaviour of these maxima, allowing the modelling of data representing a sequence of observed maxima of a given phenomenon or random process through a class of distributions known as the Generalized Extreme Value (GEV) family. The Gumbel distribution, associated with the maxima of distributions such as the Normal or the Gamma, among others, is a particular case of this family. It is therefore of interest to carry out inference for the parameters of this family. Specifically, the comparison between the Gumbel and GEV models is the main focus of this work. In Chapter 1 we study, in the context of classical inference, maximum likelihood estimation for these parameters and a likelihood ratio test procedure suitable for testing the null hypothesis representing the Gumbel model against the hypothesis representing the complete GEV model. We proceed, in Chapter 2, with a brief review of Bayesian inference theory, obtaining inferences for the parameter of interest in terms of its posterior distribution. We also study the predictive distribution for future values. Regarding model comparison, we first study, in this Bayesian context, the Bayes factor and the posterior Bayes factor. We then study the Full Bayesian Significance Test (FBST), a significance test particularly suitable for testing precise hypotheses, such as the hypothesis characterizing the Gumbel model. 
Furthermore, we study two other model comparison criteria, the BIC (Bayesian Information Criterion) and the DIC (Deviance Information Criterion). We study these evidence measures specifically in the context of the comparison between the Gumbel and GEV models, as well as the predictive distribution, the credible intervals, and posterior inference for the return levels associated with fixed return periods. Chapter 1 and part of Chapter 2 provide the basic theoretical foundations of this work and are strongly based on Coles (2001) and O'Hagan (1994). In Chapter 3 we present the well-known Metropolis-Hastings algorithm for simulating probability distributions, and the particular algorithm used in this work to obtain simulated samples from the posterior distribution of the parameters of interest. In the following chapter we formulate the modelling of the observed maxima data, presenting the likelihood function and establishing the prior distribution for the parameters. Two applications are presented in Chapter 5. The first deals with observations of the quarterly maxima of unemployment rates in the United States of America, between the first quarter of 1994 and the first quarter of 2009. In the second application we study the semiannual maxima of tide levels at Newlyn, in the southwest of England, between 1990 and 2007. Finally, a brief discussion is presented in Chapter 6.
Abstract: We begin this work by presenting a brief introduction to extreme value theory, specifically studying the behavior of the random variable which represents the maximum of a sequence of independent and identically distributed random variables. We see that the Extremal Types Theorem (or Fisher-Tippett Theorem) is a fundamental tool in the study of the asymptotic behavior of those maxima, allowing the modeling of data which represent a sequence of maxima observations of a given phenomenon or random process, through a class of distributions known as the Generalized Extreme Value (GEV) family. We are interested in making inference about the parameters of this family. Specifically, the comparison between the Gumbel and GEV models constitutes the main focus of this work. In Chapter 1 we study, in the context of classical inference, the method of maximum likelihood estimation for these parameters and a likelihood ratio test procedure suitable for testing the null hypothesis associated with the Gumbel model against the hypothesis that represents the complete GEV model. We proceed, in Chapter 2, with a brief review of Bayesian inference theory. We also study the predictive distribution for future values. With respect to the comparison of models, we initially study the Bayes factor and the posterior Bayes factor, in the Bayesian context. Next we study the Full Bayesian Significance Test (FBST), a significance test particularly suitable for testing precise hypotheses, such as the hypothesis characterizing the Gumbel model. Furthermore, we study two other criteria for comparing models, the BIC (Bayesian Information Criterion) and the DIC (Deviance Information Criterion). We study the evidence measures specifically in the context of the comparison between the Gumbel and GEV models, as well as the predictive distribution, the credible intervals, and posterior inference for the return levels associated with fixed return periods. 
Chapter 1 and part of Chapter 2 provide the basic theoretical foundations of this work, and are strongly based on Coles (2001) and O'Hagan (1994). In Chapter 3 we present the well-known Metropolis-Hastings algorithm for simulation of probability distributions, and the particular algorithm used in this work to obtain simulated samples from the posterior distribution for the parameters of interest. In the next chapter we formulate the modeling of the observed maxima, presenting the likelihood function and setting the prior distribution for the parameters. Two applications are presented in Chapter 5. The first one deals with observations of the quarterly maxima of unemployment rates in the United States of America, between the first quarter of 1994 and the first quarter of 2009. In the second application we study the semiannual maxima of sea levels at Newlyn, in the southwest of England, between 1990 and 2007. Finally, a brief discussion is presented in Chapter 6.
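The Metropolis-Hastings step mentioned in the abstract above can be illustrated for the Gumbel sub-model. The sketch below is not the dissertation's algorithm; it is a generic random-walk sampler for the location and scale (mu, sigma), under an assumed flat prior on mu and log(sigma), with the step size and iteration count chosen arbitrarily:

```python
import math, random

def gumbel_loglik(x, mu, sigma):
    """Log-likelihood of observed maxima x under a Gumbel(mu, sigma) model."""
    if sigma <= 0:
        return float("-inf")
    z = [(xi - mu) / sigma for xi in x]
    return sum(-zi - math.exp(-zi) for zi in z) - len(x) * math.log(sigma)

def metropolis_gumbel(x, n_iter=5000, step=0.1, seed=1):
    """Random-walk Metropolis for (mu, sigma): propose a Gaussian step on
    (mu, log sigma), accept with the usual log-ratio rule, record the chain."""
    rng = random.Random(seed)
    mu, log_s = 0.0, 0.0
    lp = gumbel_loglik(x, mu, math.exp(log_s))
    chain = []
    for _ in range(n_iter):
        mu_p = mu + rng.gauss(0.0, step)
        log_s_p = log_s + rng.gauss(0.0, step)
        lp_p = gumbel_loglik(x, mu_p, math.exp(log_s_p))
        if math.log(rng.random()) < lp_p - lp:  # symmetric proposal
            mu, log_s, lp = mu_p, log_s_p, lp_p
        chain.append((mu, math.exp(log_s)))
    return chain
```

After discarding a burn-in portion, averages over the chain approximate posterior means; the same chain can feed posterior summaries such as credible intervals for return levels.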
Master's
Statistics
Master in Statistics
APA, Harvard, Vancouver, ISO, and other styles
41

Barette, Tammy S. "A Bayesian approach to the estimation of adult skeletal age assessing the facility of multifactorial and three-dimensional methods to improve accuracy of age estimation /." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1180543680.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Li, Qianqiu. "Bayesian inference on dynamics of individual and population hepatotoxicity via state space models." Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1124297874.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xiv, 155 p.; also includes graphics (some col.). Includes bibliographical references (p. 147-155). Available online via OhioLINK's ETD Center
APA, Harvard, Vancouver, ISO, and other styles
43

Koullias, Stefanos. "Methodology for global optimization of computationally expensive design problems." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49085.

Full text
Abstract:
The design of unconventional aircraft requires early use of high-fidelity physics-based tools to search the unfamiliar design space for optimum designs. Current methods for incorporating high-fidelity tools into early design phases for the purpose of reducing uncertainty are inadequate due to the severely restricted budgets that are common in early design as well as the unfamiliar design space of advanced aircraft. This motivates the need for a robust and efficient global optimization algorithm. This research presents a novel surrogate model-based global optimization algorithm to efficiently search challenging design spaces for optimum designs. The algorithm searches the design space by constructing a fully Bayesian Gaussian process model through a set of observations and then using the model to make new observations in promising areas where the global minimum is likely to occur. The algorithm is incorporated into a methodology that reduces failed cases and infeasible designs and provides large reductions in the objective function values of design problems. Results on four sets of algebraic test problems are presented and the methodology is applied to an airfoil section design problem and a conceptual aircraft design problem. The method is shown to solve more nonlinearly constrained algebraic test problems than state-of-the-art algorithms and obtains the largest reduction in the takeoff gross weight of a notional 70-passenger regional jet versus competing design methods.
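The surrogate-based search described above follows the familiar Gaussian-process / expected-improvement pattern. The sketch below is a generic one-dimensional version, not the dissertation's fully Bayesian algorithm; the RBF kernel, its fixed length-scale, and maximising the acquisition on a candidate grid are all illustrative simplifications:

```python
import math
import numpy as np

def rbf(a, b, length_scale=0.3):
    """Squared-exponential kernel between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    """Gaussian process posterior mean and std. dev. at query points Xs."""
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(rbf(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sd, best):
    """EI for minimisation: expected improvement over the best value seen."""
    z = (best - mu) / sd
    Phi = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (best - mu) * Phi + sd * phi

def bayes_opt(f, bounds, n_init=4, n_iter=10, seed=0):
    """Sequentially evaluate f where EI is largest on a candidate grid."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(bounds[0], bounds[1], size=n_init)
    y = np.array([f(v) for v in X])
    grid = np.linspace(bounds[0], bounds[1], 201)
    for _ in range(n_iter):
        mu, sd = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
        X, y = np.append(X, x_next), np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()
```

A fully Bayesian treatment would additionally place priors on the kernel hyperparameters and average the acquisition over their posterior, rather than fixing the length-scale as done here.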
APA, Harvard, Vancouver, ISO, and other styles
44

Gouveia, Paulo Sergio da Silva. "Resolução do problema de alinhamento estrutural entre proteínas via técnicas de otimização global." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/305964.

Full text
Abstract:
Advisors: Ana Friedlander de Martinez Perez, Roberto Andreani
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Made available in DSpace on 2018-08-17T18:28:36Z (GMT). No. of bitstreams: 1 Gouveia_PauloSergiodaSilva_D.pdf: 2266379 bytes, checksum: 85bb53a412744c3d168ac6fed4b701e0 (MD5) Previous issue date: 2011
Abstract (translated from the Portuguese): The structural comparison of proteins is a fundamental problem in Molecular Biology, since similar structures between proteins frequently reflect a common functionality or origin. In the Protein Structural Alignment Problem we seek the best structural alignment between two proteins, that is, the best superposition between two protein structures, since local alignments can lead to distorted conclusions about the features and functionalities of the proteins under study. Most current methods for this problem either have a very high computational cost or carry no guarantee of convergence to the best alignment between two proteins. In this work we propose computational methods for the Protein Structural Alignment Problem that have good guarantees of finding the best alignment within a reasonable computational time, using a wide variety of Global Optimization techniques. An analysis of the performance of each method, in both quantitative and qualitative terms, together with a Pareto plot, is presented in order to facilitate the comparison of the methods with respect to solution quality and computational time.
Abstract: The structural comparison of proteins is a fundamental problem in Molecular Biology because similar structures often reflect a common origin or functionality. In the Protein Alignment problem one seeks the best structural alignment between two proteins, i.e. the best overlap between two protein structures. Merely local alignments can lead to distorted conclusions on the problem features and functions. Most methods addressing this problem have a very high computational cost or are not supported with guarantees of convergence to the best alignment. In this work we describe computational methods for Protein Structural Alignment with good certificates of optimality and reasonable computational execution time. We employ several Global Optimization techniques. The performance is visualized by means of profile graphics and Pareto curves in order to take into account simultaneously the efficiency and robustness of the methods.
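One inner step of any structural-alignment method, superposing two structures once a point-to-point correspondence is fixed, has a closed-form solution, the classical Kabsch algorithm. The sketch below shows only that step, not the global-optimization search over alignments that the thesis develops, and `kabsch_rmsd` is an illustrative name:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between paired coordinate sets P and Q (n x 3 arrays) after
    the optimal rigid-body superposition (Kabsch algorithm)."""
    P = P - P.mean(axis=0)              # remove translation
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)   # cross-covariance decomposition
    d = np.sign(np.linalg.det(U @ Vt))  # guard against improper reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt # optimal rotation
    diff = P @ R - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))
```

The hard, global part of the problem, which the thesis addresses, is choosing the correspondence itself; this closed-form superposition is merely the evaluation performed inside that outer search.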
Doctorate
Optimization
Doctor in Applied Mathematics
APA, Harvard, Vancouver, ISO, and other styles
45

Vicini, Lorena. "Modelos de processo de Poisson não-homogêneo na presença de um ou mais pontos de mudança, aplicados a dados de poluição do ar." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/305867.

Full text
Abstract:
Advisors: Luiz Koodi Hotta, Jorge Alberto Achcar
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Made available in DSpace on 2018-08-20T14:22:26Z (GMT). No. of bitstreams: 1 Vicini_Lorena_D.pdf: 75122511 bytes, checksum: 796c27170036587b321bbe88bc0d369e (MD5) Previous issue date: 2012
Abstract (translated from the Portuguese): Air pollution is a problem that has affected several regions around the world. In large urban centers, as expected, the concentration of air pollution is higher. Due to the effect of wind, however, the problem is not restricted to these centers, and consequently air pollution spreads to other regions. The air pollution data are modeled by non-homogeneous Poisson processes (NHPP) in three papers: two using Bayesian methods with Markov Chain Monte Carlo (MCMC) for count data, and one using functional data analysis. The first paper discusses the problem of specifying the prior distributions, including a discussion of the sensitivity and convergence of the MCMC chains. The second paper introduces a model including change points for the NHPP with the rate function modeled by a generalized gamma distribution, using Bayesian methods. Models with and without change points were considered for comparison purposes. The third paper uses functional data analysis to estimate the rate function of an NHPP. This estimation is done under the assumption that the rate function is continuous, but with a finite number of discontinuity points in its first derivative, located exactly at the change points. The rate function and its change points were estimated using smoothing splines and a penalty function based on the candidate change points. The methods developed in this work were tested through simulations and applied to ozone pollution data from Mexico City, describing the air quality in this urban center. The data count how many times, in a given period, air pollution exceeds a specified air-quality threshold, based on ozone concentration levels. It was observed that the more complex the models, including change points, the better the fit.
Abstract: Air pollution is a problem that is currently affecting several regions around the world. In major urban centers, as expected, the concentration of air pollution is higher. Due to the wind effect, however, this problem does not remain confined to such centers, and air pollution spreads to other regions. In this thesis the air pollution data are modeled by non-homogeneous Poisson processes (NHPP) in three papers: two using Bayesian methods with Markov Chain Monte Carlo (MCMC) for count data, and one using functional data analysis. Paper one discusses the problem of the prior specification, including discussion of the sensitivity and convergence of the MCMC chains. Paper two introduces a model including a change point for the NHPP with rate function modeled by a generalized gamma distribution, using Bayesian methods. Models with and without change points were considered for comparison purposes. Paper three uses functional data analysis to estimate the rate function of an NHPP. This estimation is done under the assumption that the rate function is continuous, but with a finite number of discontinuity points in its first derivative, exactly at the change points. The rate function and its change points were estimated using spline smoothing and a penalty function based on candidate change points. The methods developed in this work were tested using simulations and applied to ozone pollution data from Mexico City, describing the air quality in this urban center. The data count how many times, in a given period, air pollution exceeds a specified threshold of air quality, based on ozone concentration levels. It was observed that the more complex the models, including change points, the better the fit.
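The change-point idea in the abstract above can be illustrated in a deliberately simplified form. The sketch below uses a piecewise-constant rate with a single change point rather than the generalized-gamma rate function of the thesis, and profiles the two rates out at their closed-form maximum likelihood values; all names and the grid search over candidate change points are illustrative:

```python
import math

def nhpp_cp_loglik(times, T, tau, lam1, lam2):
    """Log-likelihood of event times on [0, T] for an NHPP whose rate is
    piecewise constant: lam1 on [0, tau) and lam2 on [tau, T]."""
    if not (0.0 < tau < T) or lam1 <= 0.0 or lam2 <= 0.0:
        return float("-inf")
    n1 = sum(t < tau for t in times)
    n2 = len(times) - n1
    # sum of log-rates at the events minus the integrated rate over [0, T]
    return (n1 * math.log(lam1) + n2 * math.log(lam2)
            - lam1 * tau - lam2 * (T - tau))

def profile_change_point(times, T, grid):
    """Profile likelihood over candidate change points: at each tau the
    two rates are replaced by their MLEs n1/tau and n2/(T - tau)."""
    best = None
    for tau in grid:
        n1 = sum(t < tau for t in times)
        n2 = len(times) - n1
        if n1 == 0 or n2 == 0:
            continue
        ll = nhpp_cp_loglik(times, T, tau, n1 / tau, n2 / (T - tau))
        if best is None or ll > best[1]:
            best = (tau, ll)
    return best
```

A Bayesian treatment, as in the thesis, would instead place priors on the rate parameters and the change point and explore the joint posterior by MCMC; the profiled likelihood above only locates the mode of the simplest version of the model.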
Doctorate
Statistics
Doctor in Statistics
APA, Harvard, Vancouver, ISO, and other styles
46

Kao, Ling-Jing. "Data augmentation for latent variables in marketing." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1155653751.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Antelo, Junior Ernesto Willams Molina. "Estimação conjunta de atraso de tempo subamostral e eco de referência para sinais de ultrassom." Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2616.

Full text
Abstract:
CAPES
In non-destructive ultrasonic testing, the signal obtained from a real data acquisition system may be contaminated by noise and the echoes may have sub-sample time delays. In some cases, these aspects can compromise the information obtained from a signal by an acquisition system. To deal with these situations, time delay estimation (TDE) techniques and signal reconstruction techniques can be used to perform approximations and to obtain more information about the data set. TDE techniques can be used for several purposes in defectoscopy, for example, the precise localization of defects in parts, the monitoring of corrosion rates in parts, the measurement of the thickness of a given material, and so on. Data reconstruction methods have a wide range of applications, such as NDT, medical imaging, telecommunications, and so on. In general, most time delay estimation techniques require a highly accurate signal model; otherwise, the quality of the estimated location may be reduced. In this work, an alternating scheme is proposed that jointly estimates a reference echo and the time delays of several echoes from noisy measurements. Furthermore, by reinterpreting the techniques used from a probabilistic perspective, their functionality is extended through the joint application of a maximum likelihood estimator (MLE) and a maximum a posteriori (MAP) estimator. Finally, results from simulations are presented to demonstrate the superiority of the proposed method over conventional methods.
Abstract (single paragraph): In non-destructive testing (NDT) with ultrasound, the signal obtained from a real data acquisition system may be contaminated by noise and the echoes may have sub-sample time delays. In some cases, these aspects may compromise the information obtained from a signal by an acquisition system. To deal with these situations, time delay estimation (TDE) techniques and signal reconstruction techniques can be used to perform approximations and to obtain more information about the data set. TDE techniques can be used for a number of purposes in defectoscopy, for example, accurate location of defects in parts, monitoring the corrosion rate in parts, measuring the thickness of a given material, and so on. Data reconstruction methods have a wide range of applications, such as NDT, medical imaging, telecommunications, and so on. In general, most time delay estimation techniques require a high-precision signal model; otherwise, the location estimate may have reduced quality. In this work, an alternating scheme is proposed that jointly estimates an echo model and time delays for several echoes from noisy measurements. In addition, by reinterpreting the techniques used from a probabilistic perspective, their functionality is extended through a joint application of a maximum likelihood estimator (MLE) and a maximum a posteriori (MAP) estimator. Finally, through simulations, results are presented to demonstrate the superiority of the proposed method over conventional methods.
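A standard baseline for time delay estimation, against which sub-sample methods such as the one above are usually compared, is cross-correlation with three-point parabolic peak interpolation. The sketch below is that conventional baseline, not the alternating MLE/MAP scheme proposed in the dissertation:

```python
def cross_correlation_delay(x, y):
    """Estimate the delay of y relative to x: find the integer lag that
    maximises the cross-correlation, then refine it to sub-sample
    precision with a parabolic fit through the three points around the peak."""
    n = len(x)
    corr, best_k, best_c = {}, 0, float("-inf")
    for k in range(-(n - 1), n):
        c = sum(x[i] * y[i + k] for i in range(max(0, -k), min(n, n - k)))
        corr[k] = c
        if c > best_c:
            best_k, best_c = k, c
    cm = corr.get(best_k - 1, 0.0)
    c0 = corr[best_k]
    cp = corr.get(best_k + 1, 0.0)
    denom = cm - 2.0 * c0 + cp
    frac = 0.0 if denom == 0.0 else 0.5 * (cm - cp) / denom
    return best_k + frac
```

This baseline degrades when the echo shape is unknown or distorted by noise, which is precisely the situation that motivates estimating a reference echo jointly with the delays.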
APA, Harvard, Vancouver, ISO, and other styles
48

"Optimal Bayesian estimators for image segmentation and surface reconstruction." Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, 1985. http://hdl.handle.net/1721.1/2879.

Full text
Abstract:
J.L. Marroquin.
"May, 1985."
Bibliography: p. 16.
"Advanced Research Projects Agency of the Department of Defense under Office of Naval Research Contract N00014-80-C-0505" "The author was supported by the Army Research Office under contract ARO-DAAG29-84-K-0005."
APA, Harvard, Vancouver, ISO, and other styles
49

Cheng, Edward K. "Selected Legal Applications for Bayesian Methods." Thesis, 2018. https://doi.org/10.7916/D8H71Z8N.

Full text
Abstract:
This dissertation offers three contexts in which Bayesian methods can address tricky problems in the legal system. Chapter 1 offers a method for attacking case publication bias, the possibility that certain legal outcomes may be more likely to be published or observed than others. It builds on ideas from multiple systems estimation (MSE), a technique traditionally used for estimating hidden populations, to detect and correct case publication bias. Chapter 2 proposes new methods for dividing attorneys' fees in complex litigation involving multiple firms. It investigates optimization and statistical approaches that use peer reports of each firm's relative contribution to estimate a "fair" or consensus division of the fees. The methods proposed have lower informational requirements than previous work and appear to be robust to collusive behavior by the firms. Chapter 3 introduces a statistical method for classifying legal cases by doctrinal area or subject matter. It proposes using a latent space approach based on case citations as an alternative to the traditional manual coding of cases, reducing subjectivity, arbitrariness, and confirmation bias in the classification process.
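The multiple systems estimation idea behind Chapter 1 generalizes classical two-list capture-recapture. As a minimal illustration of the underlying principle only (not the dissertation's method), Chapman's nearly unbiased version of the two-list Lincoln-Petersen estimator of a hidden population size is:

```python
def lincoln_petersen(n1, n2, m):
    """Chapman's estimator of a hidden population size from two lists:
    n1 and n2 are the list sizes and m is the number of items on both."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1
```

With more than two lists, MSE fits log-linear models to the pattern of overlaps, which is what allows the publication process itself to be modelled and the unobserved (unpublished) cases to be estimated.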
APA, Harvard, Vancouver, ISO, and other styles
50

"Some Bayesian methods for analyzing mixtures of normal distributions." 2003. http://library.cuhk.edu.hk/record=b6073536.

Full text
Abstract:
Juesheng Fu.
"April 2003."
Thesis (Ph.D.)--Chinese University of Hong Kong, 2003.
Includes bibliographical references (p. 124-132).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Mode of access: World Wide Web.
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography