Theses on the topic "Bayesian Sample size"

To see other types of publications on this topic, follow the link: Bayesian Sample size.

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source:

Consult the 22 best theses for your research on the topic "Bayesian Sample size".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference to the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online when this information is included in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Cámara, Hagen Luis Tomás. « A consensus based Bayesian sample size criterion ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ64329.pdf.

Full text
2

Cheng, Dunlei Stamey James D. « Topics in Bayesian sample size determination and Bayesian model selection ». Waco, Tex. : Baylor University, 2007. http://hdl.handle.net/2104/5039.

Full text
3

Islam, A. F. M. Saiful. « Loss functions, utility functions and Bayesian sample size determination ». Thesis, Queen Mary, University of London, 2011. http://qmro.qmul.ac.uk/xmlui/handle/123456789/1259.

Full text
Abstract:
This thesis consists of two parts. The purpose of the first part of the research is to obtain Bayesian sample size determination (SSD) using a loss or utility function together with a linear cost function. A number of researchers have studied the Bayesian SSD problem. One group has considered utility (loss) functions and cost functions in the SSD problem, while others have not. Among the former, most SSD problems are based on a symmetric squared error (SE) loss function. On the other hand, when underestimation is more serious than overestimation, or vice versa, an asymmetric loss function should be used. For such a loss function, how many observations do we need to take to estimate the parameter under study? We consider different types of asymmetric loss functions and a linear cost function for sample size determination. For the purposes of comparison, we first discuss SSD for a symmetric squared error loss function. Then we consider SSD under different types of asymmetric loss functions found in the literature. We also introduce a new bounded asymmetric loss function and obtain the SSD under this loss function. In addition, to estimate a parameter following a particular model, we present some theoretical results for the optimum SSD problem under a particular choice of loss function. We also develop computer programs to obtain the optimum SSD where analytic results are not possible. In the two parameter exponential family it is difficult to estimate the parameters when both are unknown. The aim of the second part is to obtain an optimum decision for the two parameter exponential family under the two parameter conjugate utility function. In this case we discuss Lindley's (1976) optimum decision for the one parameter exponential family under its conjugate utility function and then extend the results to the two parameter exponential family. We propose a two parameter conjugate utility function and then lay out the approximation procedure to make decisions on the two parameters. We also offer a few examples (the normal, trinomial and inverse Gaussian distributions) and provide the optimum decisions on both parameters of these distributions under the two parameter conjugate utility function.
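As a concrete illustration of the baseline case this abstract starts from (symmetric squared error loss plus a linear sampling cost), the following minimal sketch, which is not taken from the thesis, computes the cost-optimal fixed sample size for a normal mean with known variance under a conjugate prior; the function name and the numerical values are illustrative assumptions.

```python
import numpy as np

def optimal_n_squared_error(sigma2, n0, sampling_cost, loss_scale):
    """Optimal fixed sample size for estimating a normal mean with known
    variance sigma2 under a conjugate N(mu0, sigma2/n0) prior.

    The expected posterior squared-error loss of the Bayes estimator is the
    posterior variance sigma2/(n0 + n); adding a linear sampling cost gives
        TC(n) = loss_scale * sigma2 / (n0 + n) + sampling_cost * n,
    which over real n is minimised at n* = sqrt(loss_scale*sigma2/sampling_cost) - n0.
    """
    n_star = max(np.sqrt(loss_scale * sigma2 / sampling_cost) - n0, 0.0)
    # n must be an integer, so compare the two neighbouring whole numbers
    candidates = {int(np.floor(n_star)), int(np.ceil(n_star))}
    total_cost = lambda n: loss_scale * sigma2 / (n0 + n) + sampling_cost * n
    return min(candidates, key=total_cost)

if __name__ == "__main__":
    # illustrative values only
    print(optimal_n_squared_error(sigma2=4.0, n0=2.0, sampling_cost=0.01, loss_scale=1.0))
```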
4

M'lan, Cyr Emile. « Bayesian sample size calculations for cohort and case-control studies ». Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82923.

Full text
Abstract:
Sample size determination is one of the most important statistical issues in the early stages of any investigation that anticipates statistical analyses.
In this thesis, we examine Bayesian sample size determination methodology for interval estimation. Four major epidemiological study designs, namely cohort, case-control, cross-sectional and matched pair, are the focus. We study three Bayesian sample size criteria: the average length criterion (ALC), the average coverage criterion (ACC) and the worst outcome criterion (WOC), as well as various extensions of these criteria. In addition, a simple cost function is included as part of our sample size calculations for cohort and case-control studies. We also examine the important design issue of the choice of the optimal ratio of controls per case in case-control settings, or of non-exposed to exposed in cohort settings.
The main difficulties with Bayesian sample size calculation problems are often at the computational level. Thus, this thesis is concerned, to a considerable extent, with presenting sample size methods that are computationally efficient.
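To make one of the criteria named above more tangible, here is a minimal Monte Carlo sketch, not taken from the thesis, of an average-length-style calculation for a single binomial proportion with a Beta prior; it uses equal-tailed credible intervals rather than HPD intervals, and the prior parameters and target length are assumptions for the example.

```python
import numpy as np
from scipy import stats

def average_interval_length(n, a, b, level=0.95, sims=2000, seed=1):
    """Monte Carlo estimate of the preposterior expected length of the
    equal-tailed credible interval for a binomial proportion with a
    Beta(a, b) prior (a simplified stand-in for the ALC)."""
    rng = np.random.default_rng(seed)
    p = rng.beta(a, b, size=sims)          # parameters drawn from the prior
    x = rng.binomial(n, p)                 # prior-predictive data
    lo = stats.beta.ppf((1 - level) / 2, a + x, b + n - x)
    hi = stats.beta.ppf(1 - (1 - level) / 2, a + x, b + n - x)
    return float(np.mean(hi - lo))

def alc_sample_size(target_length, a=1.0, b=1.0, level=0.95):
    """Smallest n whose average credible interval length is at most target_length."""
    n = 1
    while average_interval_length(n, a, b, level) > target_length:
        n += 1
    return n

if __name__ == "__main__":
    print(alc_sample_size(target_length=0.2))   # illustrative target length
```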
5

Banton, Dwaine Stephen. « A BAYESIAN DECISION THEORETIC APPROACH TO FIXED SAMPLE SIZE DETERMINATION AND BLINDED SAMPLE SIZE RE-ESTIMATION FOR HYPOTHESIS TESTING ». Diss., Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/369007.

Full text
Abstract:
Statistics
Ph.D.
This thesis considers two related problems that have application in the field of experimental design for clinical trials: (i) fixed sample size determination for parallel-arm, double-blind survival data analysis to test the hypothesis of no difference in survival functions, and (ii) blinded sample size re-estimation for the same. For the first problem of fixed sample size determination, a method is developed generally for hypothesis testing and then applied particularly to survival analysis; for the second problem of blinded sample size re-estimation, a method is developed specifically for survival analysis. In both problems, the exponential survival model is assumed. The approach we propose for sample size determination is Bayesian decision theoretic, using explicitly a loss function and a prior distribution. The loss function used is the intrinsic discrepancy loss function introduced by Bernardo and Rueda (2002), and further expounded upon in Bernardo (2011). We use a conjugate prior, and investigate the sensitivity of the calculated sample sizes to the specification of the hyper-parameters. For the second problem of blinded sample size re-estimation, we use prior predictive distributions to facilitate calculation of the interim test statistic in a blinded manner while controlling the Type I error. The determination of the test statistic in a blinded manner continues to be a nettling problem for researchers. The first problem is typical of traditional experimental designs, while the second problem extends into the realm of adaptive designs. To the best of our knowledge, the approaches we suggest for both problems have not been proposed hitherto, and they extend the current research on both topics. The advantages of our approach, as far as we see it, are unity and coherence of statistical procedures, systematic and methodical incorporation of prior knowledge, and ease of calculation and interpretation.
Temple University--Theses
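The intrinsic discrepancy loss named in this abstract has a simple closed form for the exponential survival model the thesis assumes. The short sketch below is an illustration under those assumptions, not the thesis code: it evaluates the loss as the smaller of the two directed Kullback-Leibler divergences between exponential models, scaled by the number of observations, with made-up hazard rates.

```python
import numpy as np

def kl_exponential(lam_p, lam_q):
    """Kullback-Leibler divergence KL(Exp(lam_p) || Exp(lam_q)) per observation."""
    return np.log(lam_p / lam_q) + lam_q / lam_p - 1.0

def intrinsic_discrepancy(lam1, lam2, n=1):
    """Intrinsic discrepancy loss of Bernardo and Rueda (2002) between two
    exponential models: the smaller of the two directed KL divergences,
    scaled by the number of independent observations n."""
    return n * min(kl_exponential(lam1, lam2), kl_exponential(lam2, lam1))

if __name__ == "__main__":
    # illustrative hazard rates only (e.g. control vs. treatment arms)
    print(intrinsic_discrepancy(0.10, 0.15, n=20))
```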
6

Tan, Say Beng. « Bayesian decision theoretic methods for clinical trials ». Thesis, Imperial College London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.312988.

Full text
7

Safaie, Nasser. « A fully Bayesian approach to sample size determination for verifying process improvement ». Diss., Wichita State University, 2010. http://hdl.handle.net/10057/3656.

Full text
Abstract:
There has been significant growth in the development and application of Bayesian methods in industry. Bayes' theorem describes the process of learning from experience and shows how knowledge about the state of nature is continually modified as new data become available. This research is an effort to introduce the Bayesian approach as an effective tool for evaluating process adjustments aimed at causing a change in a process parameter. This is usually encountered in scenarios where the process is found to be stable but operating away from the desired level. In these scenarios, a number of changes are proposed and tested as part of the improvement efforts. Typically, it is desired to evaluate the effect of these changes as soon as possible and take appropriate actions. Despite considerable research efforts to utilize the Bayesian approach, there are few guidelines for loss computation and sample size determination. This research proposed a fully Bayesian approach for determining the maximum economic number of measurements required to evaluate and verify such efforts. Mathematical models were derived and used to establish implementation boundaries from economic and technical viewpoints. In addition, numerical examples were used to illustrate the steps involved and highlight the economic advantages of the proposed procedures.
Thesis (Ph.D.)--Wichita State University, College of Engineering, Dept. of Industrial and Manufacturing Engineering
8

Kaouache, Mohammed. « Bayesian modeling of continuous diagnostic test data : sample size and Polya trees ». Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=107833.

Full text
Abstract:
Parametric models such as the bi-normal have been widely used to analyse data from imperfect continuous diagnostic tests. Such models rely on assumptions that may often be unrealistic and/or unverifiable, and in such cases nonparametric models present an attractive alternative. Further, even when normality holds, researchers tend to underestimate the sample size required to accurately estimate disease prevalence from bi-normal models when densities from diseased and non-diseased subjects overlap. In this thesis we investigate both of these problems. First, we study the use of nonparametric Polya tree models to analyze continuous diagnostic test data. Since we do not assume a gold standard test is available, our model includes a latent class component, the latent data being the unknown true disease status for each subject. Second, we develop methods for sample size determination when designing studies with continuous diagnostic tests. Finally, we show how Bayes factors can be used to compare the fit of Polya tree models to parametric bi-normal models. Both simulations and a real data illustration are included.
9

Ma, Junheng. « Contributions to Numerical Formal Concept Analysis, Bayesian Predictive Inference and Sample Size Determination ». Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1285341426.

Full text
10

Kikuchi, Takashi. « A Bayesian cost-benefit approach to sample size determination and evaluation in clinical trials ». Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:f5cb4e27-8d4c-4a80-b792-469e50efeea2.

Full text
Abstract:
Current practice for sample size computations in clinical trials is largely based on frequentist or classical methods. These methods have the drawback of requiring a point estimate of the variance of the treatment effect and are based on arbitrary settings of type I and II errors. They also do not directly address the question of achieving the best balance between the costs of the trial and the possible benefits of using a new medical treatment, and fail to consider the important fact that the number of users depends on evidence for improvement compared with the current treatment. A novel Bayesian approach, Behavioral Bayes (or BeBay for short) (Gittins and Pezeshk, 2000a,b, 2002a,b; Pezeshk, 2003), assumes that the number of patients switching to the new treatment depends on the strength of the evidence which is provided by clinical trials, and takes a value between zero and the number of potential patients in the country. The better a new treatment, the more patients switch to it and the more the resulting benefit. The model defines the optimal sample size to be the sample size that maximises the expected net benefit resulting from a clinical trial. Gittins and Pezeshk use a simple form of benefit function for paired comparisons between two medical treatments and assume that the variance of the efficacy is known. The research in this thesis generalises these original conditions by introducing a logistic benefit function to take account of differences in efficacy and safety between two drugs. The model is also extended to the more general cases of unpaired comparisons and unknown variance. The expected net benefit defined by Gittins and Pezeshk is based on the efficacy of the new drug only. It does not consider the incidence of adverse reactions and their effect on patients' preferences. Here we include the costs of treating adverse reactions and calculate the total benefit in terms of how much the new drug can reduce societal expenditure. We describe how our model may be used for the design of phase III clinical trials, cluster randomised clinical trials and bridging studies. This is done in some detail and using illustrative examples based on published studies. For phase III trials we allow the possibility of unequal treatment group sizes, which often occur in practice. Bridging studies are those carried out to extend the range of applicability of an established drug, for example to new ethnic groups. Throughout, the objective of our procedures is to optimise the cost-benefit in terms of national health care. BeBay is the leading methodology for determining sample sizes on this basis. It explicitly takes account of the roles of three decision makers, namely patients and doctors, pharmaceutical companies and the health authority.
11

Wood, Scott William. « Differential item functioning procedures for polytomous items when examinee sample sizes are small ». Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/1110.

Full text
Abstract:
As part of test score validity, differential item functioning (DIF) is a quantitative characteristic used to evaluate potential item bias. In applications where a small number of examinees take a test, statistical power of DIF detection methods may be affected. Researchers have proposed modifications to DIF detection methods to account for small focal group examinee sizes for the case when items are dichotomously scored. These methods, however, have not been applied to polytomously scored items. Simulated polytomous item response strings were used to study the Type I error rates and statistical power of three popular DIF detection methods (Mantel test/Cox's β, Liu-Agresti statistic, HW3) and three modifications proposed for contingency tables (empirical Bayesian, randomization, log-linear smoothing). The simulation considered two small sample size conditions, the case with 40 reference group and 40 focal group examinees and the case with 400 reference group and 40 focal group examinees. In order to compare statistical power rates, it was necessary to calculate the Type I error rates for the DIF detection methods and their modifications. Under most simulation conditions, the unmodified, randomization-based, and log-linear smoothing-based Mantel and Liu-Agresti tests yielded Type I error rates around 5%. The HW3 statistic was found to yield higher Type I error rates than expected for the 40 reference group examinees case, rendering power calculations for these cases meaningless. Results from the simulation suggested that the unmodified Mantel and Liu-Agresti tests yielded the highest statistical power rates for the pervasive-constant and pervasive-convergent patterns of DIF, as compared to other DIF method alternatives. Power rates improved by several percentage points if log-linear smoothing methods were applied to the contingency tables prior to using the Mantel or Liu-Agresti tests. Power rates did not improve if Bayesian methods or randomization tests were applied to the contingency tables prior to using the Mantel or Liu-Agresti tests. ANOVA tests showed that statistical power was higher when 400 reference examinees were used versus 40 reference examinees, when impact was present among examinees versus when impact was not present, and when the studied item was excluded from the anchor test versus when the studied item was included in the anchor test. Statistical power rates were generally too low to merit practical use of these methods in isolation, at least under the conditions of this study.
12

Kothawade, Manish. « A Bayesian Method for Planning Reliability Demonstration Tests for Multi-Component Systems ». Ohio University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1416154538.

Full text
13

Domrow, Nathan Craig. « Design, maintenance and methodology for analysing longitudinal social surveys, including applications ». Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16518/1/Nathan_Domrow_Thesis.pdf.

Full text
Abstract:
This thesis describes the design, maintenance and statistical analysis involved in undertaking a longitudinal survey. A longitudinal survey (or study) obtains observations or responses from individuals at several times over a defined period. This enables the direct study of changes in an individual's response over time. In particular, it distinguishes an individual's change over time from the baseline differences among individuals within the initial panel (or cohort). This is not possible in a cross-sectional study. As such, longitudinal surveys give correlated responses within individuals. Longitudinal studies therefore require different considerations for sample design, selection and analysis from standard cross-sectional studies. This thesis looks at the methodology for analysing social surveys. Most social surveys comprise variables described as categorical variables. This thesis outlines the process of sample design and selection, interviewing and analysis for a longitudinal study. Emphasis is given to categorical response data typical of a survey. Included in this thesis are examples relating to the Goodna Longitudinal Survey and the Longitudinal Survey of Immigrants to Australia (LSIA). Analysis in this thesis also utilises data collected from these surveys. The Goodna Longitudinal Survey was conducted by the Queensland Office of Economic and Statistical Research (a portfolio office within Queensland Treasury) and began in 2002. It ran for two years, during which two waves of responses were collected.
14

Domrow, Nathan Craig. « Design, maintenance and methodology for analysing longitudinal social surveys, including applications ». Queensland University of Technology, 2007. http://eprints.qut.edu.au/16518/.

Full text
Abstract:
This thesis describes the design, maintenance and statistical analysis involved in undertaking a longitudinal survey. A longitudinal survey (or study) obtains observations or responses from individuals at several times over a defined period. This enables the direct study of changes in an individual's response over time. In particular, it distinguishes an individual's change over time from the baseline differences among individuals within the initial panel (or cohort). This is not possible in a cross-sectional study. As such, longitudinal surveys give correlated responses within individuals. Longitudinal studies therefore require different considerations for sample design, selection and analysis from standard cross-sectional studies. This thesis looks at the methodology for analysing social surveys. Most social surveys comprise variables described as categorical variables. This thesis outlines the process of sample design and selection, interviewing and analysis for a longitudinal study. Emphasis is given to categorical response data typical of a survey. Included in this thesis are examples relating to the Goodna Longitudinal Survey and the Longitudinal Survey of Immigrants to Australia (LSIA). Analysis in this thesis also utilises data collected from these surveys. The Goodna Longitudinal Survey was conducted by the Queensland Office of Economic and Statistical Research (a portfolio office within Queensland Treasury) and began in 2002. It ran for two years, during which two waves of responses were collected.
15

Assareh, Hassan. « Bayesian hierarchical models in statistical quality control methods to improve healthcare in hospitals ». Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/53342/1/Hassan_Assareh_Thesis.pdf.

Full text
Abstract:
Quality oriented management systems and methods have become the dominant business and governance paradigm. From this perspective, satisfying customers' expectations by supplying reliable, good quality products and services is the key factor for an organization and even a government. During recent decades, Statistical Quality Control (SQC) methods have been developed as the technical core of quality management and continuous improvement philosophy, and are now being applied widely to improve the quality of products and services in industrial and business sectors. Recently SQC tools, in particular quality control charts, have been used in healthcare surveillance. In some cases, these tools have been modified and developed to better suit the health sector's characteristics and needs. It seems that some of the work in the healthcare area has evolved independently of the development of industrial statistical process control methods. Therefore, analysing and comparing the paradigms and characteristics of quality control charts and techniques across the different sectors presents opportunities for transferring knowledge and for future development in each sector. Meanwhile, the capabilities of the Bayesian approach, particularly Bayesian hierarchical models and computational techniques in which all uncertainty is expressed as a structure of probability, facilitate decision making and cost-effectiveness analyses. Therefore, this research investigates the use of the quality improvement cycle in a health setting using clinical data from a hospital. The need for clinical data for monitoring purposes is investigated in two respects. A framework and appropriate tools from the industrial context are proposed and applied to evaluate and improve data quality in available datasets and data flows; then a data capturing algorithm using Bayesian decision making methods is developed to determine an economical sample size for statistical analyses within the quality improvement cycle. Having ensured clinical data quality, some characteristics of control charts in the health context, including the necessity of monitoring attribute data and correlated quality characteristics, are considered. To this end, multivariate control charts from an industrial context are adapted to monitor the radiation delivered to patients undergoing diagnostic coronary angiograms, and various risk-adjusted control charts are constructed and investigated for monitoring binary outcomes of clinical interventions as well as post-intervention survival time. Meanwhile, adoption of a Bayesian approach is proposed as a new framework for estimating the change point following a control chart's signal. This estimate aims to facilitate root cause efforts in the quality improvement cycle since it cuts the search for the potential causes of detected changes to a tighter time-frame prior to the signal. This approach enables us to obtain highly informative estimates of change point parameters since probability distribution based results are obtained. Using Bayesian hierarchical models and Markov chain Monte Carlo computational methods, Bayesian estimators of the time and the magnitude of various change scenarios, including step changes, linear trends and multiple changes in a Poisson process, are developed and investigated.
The benefits of change point investigation are revisited and promoted in monitoring hospital outcomes, where the developed Bayesian estimator reports the true time of the shifts, compared to a priori known causes, detected by control charts monitoring the rate of excess usage of blood products and major adverse events during and after cardiac surgery in a local hospital. The development of the Bayesian change point estimators is then pursued in healthcare surveillance for processes in which pre-intervention characteristics of patients affect the outcomes. In this setting, the Bayesian estimator is first extended to capture the patient mix (covariates) through the risk models underlying risk-adjusted control charts. Variations of the estimator are developed to estimate the true time of step changes and linear trends in the odds ratio of intensive care unit outcomes in a local hospital. Secondly, the Bayesian estimator is extended to identify the time of a shift in mean survival time after a clinical intervention which is being monitored by risk-adjusted survival time control charts. In this context, the survival time after a clinical intervention is also affected by the patient mix, and the survival function is constructed using a survival prediction model. The simulation studies undertaken in each research component and the results obtained highly recommend the developed Bayesian estimators as a strong alternative for change point estimation within the quality improvement cycle in healthcare surveillance as well as in industrial and business contexts. The superiority of the proposed Bayesian framework and estimators is enhanced when the probability quantification, flexibility and generalizability of the developed models are also considered. The advantages of the Bayesian approach seen in the general context of quality control may also be extended to the industrial and business domains where quality monitoring was initially developed.
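As a minimal illustration of Bayesian change point estimation for a Poisson process of the kind described above (the thesis itself uses Bayesian hierarchical models fitted by MCMC and covers trends, multiple changes and risk adjustment), the sketch below computes the posterior over a single step change point with conjugate Gamma priors on the two rates; the priors and the simulated counts are assumptions for the example.

```python
import numpy as np
from scipy.special import gammaln

def log_marginal_poisson(y, a, b):
    """Log marginal likelihood of Poisson counts y whose common rate has a
    Gamma(a, b) prior (shape a, rate b), obtained by conjugacy."""
    y = np.asarray(y)
    k, s = y.size, y.sum()
    return (a * np.log(b) - gammaln(a) + gammaln(a + s)
            - (a + s) * np.log(b + k) - gammaln(y + 1.0).sum())

def change_point_posterior(y, a=0.5, b=0.1):
    """Posterior over the length tau of the first segment in a Poisson count
    series with one step change, independent Gamma(a, b) priors on the rate
    before and after the change, and a uniform prior on tau."""
    y = np.asarray(y)
    log_post = np.array([log_marginal_poisson(y[:tau], a, b)
                         + log_marginal_poisson(y[tau:], a, b)
                         for tau in range(1, y.size)])
    log_post -= log_post.max()          # stabilise before exponentiating
    post = np.exp(log_post)
    return post / post.sum()            # post[i] is P(tau = i + 1 | y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = np.concatenate([rng.poisson(2.0, 30), rng.poisson(5.0, 20)])  # shift after t = 30
    post = change_point_posterior(y)
    print("most probable change point:", int(np.argmax(post)) + 1)
```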
16

Xu, Zhiqing. « Bayesian Inference of a Finite Population under Selection Bias ». Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/621.

Full text
Abstract:
Length-biased sampling gives samples from a weighted distribution. With the underlying distribution of the population, one can estimate the attributes of the population by converting the weighted samples. In this thesis, the generalized gamma distribution is considered as the underlying distribution of the population and inference for the weighted distribution is made. Both models with known and unknown finite population size are considered. In the models with known finite population size, maximum likelihood estimation and bootstrapping methods are attempted in order to derive the distributions of the parameters and the population mean. For the sake of comparison, models both with and without the selection bias are built. The computer simulation results show that the model with selection bias gives better predictions of the population mean. In the model with unknown finite population size, the distributions of the population size as well as of the sample complements are derived. Bayesian analysis is performed using numerical methods. Both the Gibbs sampler and a random sampling method are employed to generate the parameters from their joint posterior distribution. The fit of the size-biased samples is checked by utilizing the conditional predictive ordinate.
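The core idea of converting a weighted (length-biased) sample back to the population scale can be illustrated with a simple gamma example. The sketch below is an assumption-laden illustration, not the thesis's model: it uses a plain gamma instead of the generalized gamma and a simple inverse-size weight correction rather than the thesis's Bayesian analysis.

```python
import numpy as np

def harmonic_mean_estimate(x):
    """Estimate the population mean from a length-biased sample: under a
    sampling weight proportional to x, E[1/x] = 1/mu, so the harmonic mean
    n / sum(1/x) is a consistent estimator of the population mean mu."""
    x = np.asarray(x, dtype=float)
    return len(x) / np.sum(1.0 / x)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    shape, scale = 2.0, 3.0                 # population: Gamma(2, 3), true mean 6
    # a length-biased draw from Gamma(k, theta) is distributed as Gamma(k + 1, theta)
    biased = rng.gamma(shape + 1.0, scale, size=5000)
    print("naive mean of biased sample :", biased.mean())              # close to 9, overshoots
    print("weight-corrected estimate   :", harmonic_mean_estimate(biased))  # close to 6
```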
17

Vong, Camille. « Model-Based Optimization of Clinical Trial Designs ». Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-233445.

Full text
Abstract:
General attrition rates in the drug development pipeline have made it necessary to shift gears towards new methodologies that allow earlier and correct decisions, and the optimal use of all information accrued throughout the process. The quantitative science of pharmacometrics, using pharmacokinetic-pharmacodynamic models, was identified as one of the strategies core to this renaissance. Coupled with Optimal Design (OD), they together constitute an attractive toolkit for ushering new agents more rapidly and successfully to marketing approval. The general aim of this thesis was to investigate how the use of novel pharmacometric methodologies can improve the design and analysis of clinical trials within drug development. The implementation of a Monte-Carlo Mapped power method made it possible to rapidly generate multiple hypotheses and to adequately compute the corresponding sample size within 1% of the time usually necessary in more traditional model-based power assessment. Allowing statistical inference across all available data and the integration of mechanistic interpretation of the models, the performance of this new methodology in proof-of-concept and dose-finding trials highlighted the possibility of drastically reducing the number of healthy volunteers and patients exposed to experimental drugs. This thesis furthermore addressed the benefits of OD in planning trials with bioanalytical limits and toxicity constraints, through the development of novel optimality criteria that foremost pinpoint information and safety aspects. The use of these methodologies showed better estimation properties and robustness for the ensuing data analysis and reduced the number of patients exposed to severe toxicity by 7-fold. Finally, predictive tools for maximum tolerated dose selection in Phase I oncology trials were explored for a combination therapy characterized by a main dose-limiting hematological toxicity. In this example, Bayesian and model-based approaches provided the incentive for a paradigm change away from the traditional rule-based "3+3" design algorithm. Throughout this thesis several examples have shown the possibility of streamlining clinical trials with more model-based design and analysis support. Ultimately, efficient use of the data can elevate the probability of a successful trial and reinforce ethical conduct, which is paramount.
18

Azzolina, Danila. « Bayesian HPD-based sample size determination using semi-parametric prior elicitation ». Doctoral thesis, 2019. http://hdl.handle.net/2158/1152426.

Full text
Abstract:
In several clinical trial settings, it is challenging to recruit the overall sample provided for at the design stage. Several factors (e.g., high costs, regulatory barriers, narrow eligibility criteria and cultural attitudes towards research) can impact the recruitment process. The poor accrual problem is evident in clinical research involving adults, but also in pediatric research, where 37% of clinical trials terminate early due to inadequate accrual. From a methodological-statistical standpoint, reduced sample size and the rarity of some diseases under consideration reduce a study's statistical power, compromising the ability to accurately answer the primary research question due to a reduction in the likelihood of detecting a treatment effect. This statistical point of view favors the use of a Bayesian approach to the analysis of clinical trial data. In recent years, Bayesian methods have increasingly been used in the design, monitoring, and analysis of clinical trials due to their flexibility. In clinical trials that are candidates for early termination due to poor accrual, a Bayesian approach can incorporate the available knowledge provided by the literature (objective prior) or by elicitation of experts' opinions (subjective prior) on the treatment effect under investigation, in order to reduce uncertainty in treatment effect estimation. The first article (Chapter 1) shows the potential of the Bayesian method for use in pediatric research, demonstrating the possibility of including, in the final inference, prior information and trial data, especially when only a small sample is available to estimate the treatment effect. Moreover, this study aims to underline the importance of a sensitivity analysis conducted on prior definitions in order to investigate the stability of inferential conclusions with respect to the different prior choices. In a research setting where objective data from which to derive a prior distribution are not available, an informative inference complemented with an expert elicitation procedure can be used to translate the available expert knowledge about the treatment effect into a prior probability distribution (elicitation). The elicitation process in Bayesian inference can quantify the uncertainty in beliefs about the treatment effect. Additionally, this information can be used to plan the study design, e.g., the sample size calculations and interim analyses. Elicitation may be conducted in a parametric setting, assuming that expert opinion may be represented by a known family of probability distributions identified by hyper-parameters, or in a nonparametric or hybrid semiparametric setting. It is widely acknowledged that the primary limitation of a parametric approach is that it constrains expert belief to a pre-specified distribution family. The second article (Chapter 2) aims to investigate the state of the art of Bayesian prior elicitation methods in clinical trial research, performing an in-depth analysis of the discrepancy between the approaches available in the statistical literature and the elicitation procedures currently applied within clinical trial research. A Bayesian approach to clinical trial data may be specified before the start of the study, in the protocol, by defining a sample size that takes expert opinion into account and allows nonparametric approaches to be used as well.
A more flexible sample size method may be suitable, for example, for designing a study conducted on small sample sizes such as a Phase II clinical trial, which is generally a one-sample, single-stage design in which accrued patients are treated and then observed for a possible response. Generally, the Bayesian methods available in the literature for sample size estimation with binary data are based on parametric Beta-binomial solutions, considering inference performed in terms of the Highest Posterior Density (HPD) interval. The aim of the third article (Chapter 3) is to extend the main criteria adopted for Bayesian sample size estimation, the Average Coverage Criterion (ACC), the Average Length Criterion (ALC) and the Worst Outcome Criterion (WOC), proposing a sample size estimation method which also accommodates priors defined through a semiparametric approach to the elicitation of the expert's opinion. The article also reports a practical application of the method to a Phase II clinical trial design. The semiparametric solution adopted is very flexible, considering a prior distribution obtained as a balanced optimization of a weighted sum of two components: one is a linear combination of B-splines fitted to the expert's quantiles, and the other is an uninformative prior distribution.
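Since the ACC, ALC and WOC criteria discussed above are built on HPD intervals for the Beta-binomial model, the following sketch (an illustration, not the thesis implementation) computes an HPD interval for a Beta posterior numerically as the shortest interval with the required coverage; the posterior counts shown are made up.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

def beta_hpd(a, b, level=0.95):
    """Highest posterior density interval of a Beta(a, b) distribution,
    found as the shortest interval carrying the required posterior mass."""
    def length(p_lo):
        # interval from the p_lo quantile to the (p_lo + level) quantile
        return stats.beta.ppf(p_lo + level, a, b) - stats.beta.ppf(p_lo, a, b)
    res = minimize_scalar(length, bounds=(0.0, 1.0 - level), method="bounded")
    lo = stats.beta.ppf(res.x, a, b)
    return lo, lo + res.fun

if __name__ == "__main__":
    # posterior after observing 12 responses in n = 40 patients with a Beta(1, 1) prior
    print(beta_hpd(1 + 12, 1 + 28))
```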
19

GUBBIOTTI, STEFANIA. « Bayesian Methods for Sample Size Determination and their use in Clinical Trials ». Doctoral thesis, 2009. http://hdl.handle.net/11573/918542.

Full text
20

Huang, Chung-Chi, and 黃政基. « Development of Genetic Network Reconstruction Algorithm for Small Sample Size Based on Bayesian Networks ». Thesis, 2006. http://ndltd.ncl.edu.tw/handle/84926669256581677006.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Biomedical Engineering
94
With the continual progress of human genome research, more and more genes have been found to be closely related to human diseases. Accordingly, exploration of genetic functions has become one of the major foci of biotechnology research. It is well known that each gene does not work alone. Instead, it may be involved in enormously complicated interactions among genes in a biological process. Because of the complexity of physiological and biochemical processes in the human body, the relations between genes and most diseases are currently unclear. Therefore, the ultimate goal of gene network reconstruction is to analyze the regulatory mechanisms among genes and understand how genes are involved in biological processes. Limited by the high cost of microarrays, most biological experiments cannot offer a large number of observations for gene network reconstruction. To overcome this limitation, a new gene network reconstruction algorithm, called the Divide-and-conquer Variational Bayesian (DCVB) algorithm, is proposed in this study. Although the VB algorithm, which is the basic construct of DCVB, has been shown to be effective for long time-course data, its performance for short time-course data is far from satisfactory. The DCVB algorithm decomposes large gene networks into multiple small subnets. By considering those genes not included in a subnet as latent factors, the DCVB algorithm is capable of estimating gene-gene interactions for each subnet independently, thanks to the ability of the VB algorithm to incorporate latent factors. Two classes of DCVB algorithms are evaluated, namely single-level and hierarchical DCVB. While the former decomposes the entire network into small subnets of fixed sizes for reconstruction, the latter integrates the results of multiple levels, each with a different network size, to form the final reconstructed network. Because DCVB does not estimate all gene-gene interactions for the entire network at a time, the number of parameters to be estimated is greatly reduced compared to the conventional VB algorithm. It thus promises better performance than the VB algorithm for reconstructing a large network with short time-course data. Performance comparison between DCVB and VB is carried out using simulated time-course data and p53R2 experimental data. For the simulated data, three gene networks with various lengths of time-course data are simulated. According to the simulation results, the proposed DCVB outperforms VB for both short and long time-course data; in particular, DCVB is substantially superior to VB for large networks and long time-course data. For the data of the p53R2 study, further experiments are required to validate the networks reconstructed by DCVB and VB, respectively. In summary, DCVB is shown to be better than VB only for the simulation data, and further validations are required to compare the performance of both algorithms on real data.
21

Wen, Yu-Ju, and 溫鈺如. « A Study on Optimal Sample Size for Destructive Inspection under Bayesian Sequential Sampling Plan ». Thesis, 2006. http://ndltd.ncl.edu.tw/handle/09774408609859024071.

Full text
Abstract:
Master's thesis
National Pingtung University of Science and Technology
Department of Industrial Management
94
In this thesis, we focus our attention on the sample size for destructive inspection. Considering the inspection cost and the cost of sampling error, we apply a Bayesian estimation model to derive the posterior pdf of P, and we formulate a mathematical model for the expected total losses. Applying computerized numerical analysis methods, we can find the optimal sample size that minimizes the total losses. Furthermore, using the concept of sequential sampling, the decision-maker can draw and inspect a sample at each sequential observation and determine whether to stop sampling and make a decision, thereby constructing the decision chart of the sequential sampling plan. We develop a numerical example to illustrate the meaning of this research, and we analyze and compare it with the sampling plan of ABC-STD 105 in order to verify whether this study provides an applicable decision-making plan. Finally, thirteen conclusions are drawn for future studies and applications.
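A minimal sketch of the kind of trade-off described above, balancing the cost of destroying inspected items against the expected loss of a wrong accept/reject decision under a Beta prior on the defect rate, is given below. The prior, the quality limit p0, the cost figures and the decision rule are all illustrative assumptions rather than the thesis's model.

```python
import numpy as np
from scipy import stats

def expected_total_cost(n, a=1.0, b=9.0, p0=0.10, cost_inspect=0.2,
                        cost_accept_bad=200.0, cost_reject_good=50.0,
                        sims=4000, seed=0):
    """Preposterior Monte Carlo estimate of the expected total cost of
    destructively inspecting n items: inspection cost plus the expected
    loss of the Bayes accept/reject decision against the quality limit p0,
    under a Beta(a, b) prior on the defect rate p."""
    rng = np.random.default_rng(seed)
    p = rng.beta(a, b, size=sims)                 # true defect rates drawn from the prior
    x = rng.binomial(n, p)                        # defects found among the n destroyed items
    prob_bad = 1.0 - stats.beta.cdf(p0, a + x, b + n - x)     # posterior P(p > p0)
    accept = prob_bad * cost_accept_bad < (1.0 - prob_bad) * cost_reject_good
    # loss is incurred when a bad lot is accepted or a good lot is rejected
    loss = np.where(accept, (p > p0) * cost_accept_bad, (p <= p0) * cost_reject_good)
    return cost_inspect * n + float(loss.mean())

if __name__ == "__main__":
    costs = {n: expected_total_cost(n) for n in range(0, 81, 5)}
    print("sample size with lowest expected cost:", min(costs, key=costs.get))
```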
22

Huang, Qindan. « Adaptive Reliability Analysis of Reinforced Concrete Bridges Using Nondestructive Testing ». Thesis, 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-05-7920.

Full text
Abstract:
There has been increasing interest in evaluating the performance of existing reinforced concrete (RC) bridges just after natural disasters or man-made events especially when the defects are invisible, or in quantifying the improvement after rehabilitations. In order to obtain an accurate assessment of the reliability of a RC bridge, it is critical to incorporate information about its current structural properties, which reflects the possible aging and deterioration. This dissertation proposes to develop an adaptive reliability analysis of RC bridges incorporating the damage detection information obtained from nondestructive testing (NDT). In this study, seismic fragility is used to describe the reliability of a structure withstanding future seismic demand. It is defined as the conditional probability that a seismic demand quantity attains or exceeds a specified capacity level for given values of earthquake intensity. The dissertation first develops a probabilistic capacity model for RC columns and the capacity model can be used when the flexural stiffness decays nonuniformly over a column height. Then, a general methodology to construct probabilistic seismic demand models for RC highway bridges with one single-column bent is presented. Next, a combination of global and local NDT methods is proposed to identify in-place structural properties. The global NDT uses the dynamic responses of a structure to assess its global/equivalent structural properties and detect potential damage locations. The local NDT uses local measurements to identify the local characteristics of the structure. Measurement and modeling errors are considered in the application of the NDT methods and the analysis of the NDT data. Then, the information obtained from NDT is used in the probabilistic capacity and demand models to estimate the seismic fragility of the bridge. As an illustration, the proposed probabilistic framework is applied to a reinforced concrete bridge with a one-column bent. The result of the illustration shows that the proposed framework can successfully provide the up-to-date structural properties and accurate fragility estimates.
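The definition of seismic fragility quoted above, the conditional probability that demand attains or exceeds capacity at a given intensity, is often evaluated with a lognormal demand and capacity model. The sketch below uses that textbook formulation with made-up parameters; it is not the probabilistic model developed in the dissertation.

```python
import numpy as np
from scipy.stats import norm

def fragility(im, a, b, median_capacity, beta_demand, beta_capacity):
    """Lognormal fragility curve: probability that seismic demand meets or
    exceeds capacity at intensity measure `im`, with a power-law median
    demand model S_D = a * im**b and lognormal dispersions for demand and
    capacity (a textbook formulation, not the one fitted in the thesis)."""
    median_demand = a * np.power(im, b)
    beta = np.sqrt(beta_demand ** 2 + beta_capacity ** 2)
    return norm.cdf(np.log(median_demand / median_capacity) / beta)

if __name__ == "__main__":
    sa = np.linspace(0.1, 2.0, 5)     # spectral acceleration (g), illustrative grid
    print(fragility(sa, a=0.05, b=1.0, median_capacity=0.04,
                    beta_demand=0.4, beta_capacity=0.3))
```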