
Dissertations on the topic "Bioavailability – Research – Statistical methods"

Consult the top 50 dissertations for research on the topic "Bioavailability – Research – Statistical methods".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and a bibliographic citation of the selected work will be generated automatically in the required style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are available in its metadata.

Browse dissertations from many different disciplines and compile a correctly formatted bibliography.

1

Corrado, Charles J. „Nonparametric statistical methods in financial market research“. Diss., The University of Arizona, 1988. http://hdl.handle.net/10150/184608.

Annotation:
This dissertation explores the use of rank-based nonparametric statistical methods in financial market research. Applications to event study methodology and the estimation of security systematic risk are analyzed using a simulation methodology with actual daily security return data. The results indicate that procedures based on ranks are more efficient than the normal-theory procedures currently in common use.
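As a rough illustration of the rank-based approach, here is a small sketch of a Corrado-style event-study rank statistic (a simplification on hypothetical data, not the dissertation's exact procedure): rank each security's returns over the whole window, then compare the cross-sectional mean rank on the event day against its time-series variability.

```python
import numpy as np

def rank_event_statistic(abnormal_returns, event_index):
    """Corrado-style rank statistic for a single-day event study.

    abnormal_returns: (n_firms, n_days) array of abnormal returns.
    event_index: column index of the event day.
    Returns a standardized statistic, approximately N(0, 1) under the null.
    """
    n_firms, n_days = abnormal_returns.shape
    # Rank each firm's returns (1..n_days) across the full window.
    ranks = np.argsort(np.argsort(abnormal_returns, axis=1), axis=1) + 1
    expected = (n_days + 1) / 2.0
    # Cross-sectional mean rank deviation on the event day ...
    event_dev = np.mean(ranks[:, event_index] - expected)
    # ... scaled by the time-series spread of daily mean rank deviations.
    daily_dev = np.mean(ranks - expected, axis=0)
    s = np.sqrt(np.mean(daily_dev ** 2))
    return event_dev / s
```

A large positive value on the event day indicates abnormal returns well above their typical ranks, without assuming normality of returns.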
2

Richaud, de Minzi María Cristina. „New statistical methods for research in personality assessment“. Pontificia Universidad Católica del Perú, 2013. http://repositorio.pucp.edu.pe/index/handle/123456789/99784.

Annotation:
In the present work, a review of the new multivariate techniques, and why they appear especially suited to personality research, is presented. Emerging models of personality and advances in the measurement of personality and psychopathology suggest that research in this field has entered a stage of advanced development. The past two decades have shown important developments in statistics and measurement. Refinement of multivariate statistics has been especially important in personality assessment because of the complexity of relations among personality variables. Multivariate procedures provide the opportunity to examine the complexity of these interactions by providing methods of analysis for multiple variables. In addition, structural equation modeling and multivariate techniques for analyzing categorical variables have been developed. Multidimensional scaling and item response theory are the latest developments.
3

Grigas, Paul (Paul Edward). „Methods for convex optimization and statistical learning“. Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106683.

Annotation:
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 219-225).
We present several contributions at the interface of first-order methods for convex optimization and problems in statistical machine learning. In the first part of this thesis, we present new results for the Frank-Wolfe method, with a particular focus on: (i) novel computational guarantees that apply for any step-size sequence, (ii) a novel adjustment to the basic algorithm to better account for warm-start information, and (iii) extensions of the computational guarantees that hold in the presence of approximate subproblem and/or gradient computations. In the second part of the thesis, we present a unifying framework for interpreting "greedy" first-order methods -- namely Frank-Wolfe and greedy coordinate descent -- as instantiations of the dual averaging method of Nesterov, and we discuss the implications thereof. In the third part of the thesis, we present an extension of the Frank-Wolfe method that is designed to induce near-optimal low-rank solutions for nuclear norm regularized matrix completion and, for more general problems, induces near-optimal "well-structured" solutions. We establish computational guarantees that trade off efficiency in computing near-optimal solutions with upper bounds on the rank of iterates. We then present extensive computational results that show significant computational advantages over existing related approaches, in terms of delivering low rank and low run-time to compute a target optimality gap. In the fourth part of the thesis, we analyze boosting algorithms in linear regression from the perspective of modern first-order methods in convex optimization. We show that classic boosting algorithms in linear regression can be viewed as subgradient descent to minimize the maximum absolute correlation between features and residuals. We also propose a slightly modified boosting algorithm that yields an algorithm for the Lasso, and that computes the Lasso path.
Our perspective leads to first-ever comprehensive computational guarantees for all of these boosting algorithms, which provide a precise theoretical description of the amount of data-fidelity and regularization imparted by running a boosting algorithm, for any dataset. In the fifth and final part of the thesis, we present several related results in the contexts of boosting algorithms for logistic regression and the AdaBoost algorithm.
by Paul Grigas.
Ph. D.
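The basic Frank-Wolfe step that the abstract builds on can be sketched briefly. This is a minimal, hypothetical toy example (a quadratic minimized over the probability simplex with the classic 2/(k+2) step-size sequence), not the thesis's enhanced variants:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=200):
    """Frank-Wolfe over the probability simplex with step size 2/(k+2).

    The linear subproblem min_s <grad, s> over the simplex is solved by
    selecting the vertex (coordinate) with the smallest gradient entry.
    """
    x = x0.copy()
    for k in range(n_iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0        # vertex minimizing the linearized objective
        gamma = 2.0 / (k + 2.0)      # classic step-size sequence
        x = (1 - gamma) * x + gamma * s
    return x

# Toy problem: minimize ||x - b||^2 over the simplex (b lies inside it).
b = np.array([0.1, 0.5, 0.4])
x_star = frank_wolfe_simplex(lambda x: 2 * (x - b), np.array([1.0, 0.0, 0.0]))
```

Iterates stay feasible by construction, since each is a convex combination of simplex vertices; that sparsity of the iterates is the structural property the thesis's low-rank extensions exploit.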
4

Snell, Kym Iris Erika. „Development and application of statistical methods for prognosis research“. Thesis, University of Birmingham, 2015. http://etheses.bham.ac.uk//id/eprint/6259/.

Annotation:
A pivotal component of prognosis research is the prediction of future outcome risk. This thesis applies, develops and evaluates novel statistical methods for development and validation of risk prediction (prognostic) models. In the first part, a literature review of published prediction models shows that the Cox model remains the most common approach for developing a model using survival data; however, this avoids modelling the baseline hazard and therefore restricts individualised predictions. Flexible parametric survival models are shown to address this by flexibly modelling the baseline hazard, thereby enabling individualised risk predictions over time. Clinical application reveals discrepant mortality rates for different hip replacement procedures, and identifies common issues when developing models using clinical trial data. In the second part, univariate and multivariate random-effects meta-analyses are proposed to summarise a model’s performance across multiple validation studies. The multivariate approach accounts for correlation in multiple statistics (e.g. C-statistic and calibration slope), and allows joint predictions about expected model performance in applied settings. This allows competing implementation strategies (e.g. regarding baseline hazard choice) to be compared and ranked. A simulation study also provides recommendations for the scales on which to combine performance statistics to best satisfy the between-study normality assumption in random-effects meta-analysis.
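The meta-analytic summary described above can be illustrated with a minimal univariate sketch using the DerSimonian-Laird moment estimator (one common choice; the thesis proposes univariate and multivariate approaches). Performance statistics such as the C-statistic are typically pooled on a transformed scale, e.g. the logit, which is exactly the scale-choice question the simulation study addresses.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Univariate random-effects meta-analysis (DerSimonian-Laird).

    effects: per-study effect estimates (e.g., logit C-statistics).
    variances: their within-study variances.
    Returns (pooled_effect, tau2), where tau2 is the between-study variance.
    """
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                  # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)              # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # moment estimator, truncated at 0
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    return np.sum(w_re * y) / np.sum(w_re), tau2
```

When the study effects are homogeneous, tau2 truncates to zero and the summary reduces to the fixed-effect estimate.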
5

Ho, Lai Ping. „Application of statistical methods to problems in epidemiological research“. HKBU Institutional Repository, 2003. http://repository.hkbu.edu.hk/etd_ra/454.

6

Kunz, Lauren Margaret. „Statistical Methods for Comparative Effectiveness Research of Medical Devices“. Thesis, Harvard University, 2014. http://nrs.harvard.edu/urn-3:HUL.InstRepos:14226082.

Annotation:
A recent focus in health care policy is on comparative effectiveness of treatments--from drugs to behavioral interventions to medical devices. Medical devices bring a unique set of challenges for comparative effectiveness research. In this dissertation, I develop statistical methods for comparative effectiveness estimation and illustrate the methodology in the context of three different medical devices. In chapter 2, I review approaches for causal inference in the context of observational cohort studies, utilizing a potential outcomes framework demonstrated using data for patients undergoing revascularization surgery with radial versus femoral artery access. Propensity score methods; G-computation; augmented inverse probability of treatment weighting; and targeted maximum likelihood estimation are implemented and their causal and statistical assumptions evaluated. In chapter 3, I undertake a theoretical and simulation-based assessment of differential follow-up information per treatment arm on inference in meta-analysis where applied researchers commonly assume similar follow-up duration across treatment groups. When applied to the implantation of cardiovascular resynchronization therapies to examine comparative survival, only 3 of 8 studies report arm-specific follow-up. I derive the bias of the rate ratio for an individual study using the number of deaths and total patients per arm and show that the bias can be large, even for modest violations of the assumption that follow-up is the same in the two arms. Furthermore, when pooling multiple studies with Bayesian methods for random effects meta-analysis, the direction and magnitude of the bias is unpredictable. In chapter 4, I examine the statistical power for designing a study of devices when it is difficult to blind patients and providers, everyone wants the device, and clustering by hospitals where the devices are implanted needs to be taken into account. 
In these situations, a stepped wedge cluster randomized design (SWD) may be used to rigorously assess the roll-out of novel devices. I determine the exact asymptotic theoretical power, using Romberg integration over the cluster random effects, to calculate power in a two-treatment, binary-outcome SWD. Over a range of design parameters, the exact method is from 9% to 2.4 times more efficient than designs based on the existing method.
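The bias mechanism analyzed in chapter 3 can be seen in a tiny numerical sketch (all numbers hypothetical): with identical death counts and patient counts per arm, ignoring a twofold difference in follow-up time doubles the apparent rate ratio.

```python
def rate_ratio(events_a, person_time_a, events_b, person_time_b):
    """Incidence rate ratio: events per unit person-time in arm A,
    divided by the same quantity in arm B."""
    return (events_a / person_time_a) / (events_b / person_time_b)

# Hypothetical study: 10 deaths and 100 patients in each arm, but the
# device arm was followed for 2 years versus 1 year in the control arm.
true_rr = rate_ratio(10, 100 * 2.0, 10, 100 * 1.0)  # uses real person-time
naive_rr = rate_ratio(10, 100, 10, 100)             # assumes equal follow-up
```

Here the naive analysis reports no difference (rate ratio 1.0) while the person-time-correct analysis gives 0.5, the kind of distortion that becomes unpredictable once biased studies are pooled.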
7

Vähänikkilä, H. (Hannu). „Statistical methods in dental research, with special reference to time-to-event methods“. Doctoral thesis, Oulun yliopisto, 2015. http://urn.fi/urn:isbn:9789526207933.

Annotation:
Abstract: Statistical methods are an essential part of published dental research. It is important to evaluate the use of these methods to improve the quality of dental research. In the first part, the aim of this interdisciplinary study is to investigate the development of the use of statistical methods in dental journals, the quality of statistical reporting, and the reporting of statistical techniques and results in dental research papers, with special reference to time-to-event methods. In the second part, the focus is specifically on time-to-event methods, and the aim is to demonstrate the strength of time-to-event methods in collecting detailed data about the development of oral health. The first part of this study is based on an evaluation of dental articles from five dental journals. The second part of the study is based on empirical data from 28 municipal health centres, used to study variations in tooth survival. There were different profiles in the statistical content among the journals. The quality of statistical reporting was quite low in the journals. The use of time-to-event methods increased from 1996 to 2007 in the evaluated dental journals. However, the benefits of these methods have not been fully adopted in dental research. The current study added new information regarding the status of statistical methods in dental research. Our study also showed that complex time-to-event analysis methods can be utilized even with detailed information on each tooth in large groups of study subjects. Authors of dental articles might apply the results of this study to improve study planning as well as the statistical section of their research articles.
8

Elia, Eleni. „Statistical methods in prognostic factor research : application, development and evaluation“. Thesis, University of Birmingham, 2017. http://etheses.bham.ac.uk//id/eprint/7259/.

Annotation:
In patients with a particular disease or health condition, prognostic factors are characteristics (such as age, biomarkers) that are associated with different risks of a future clinical outcome. Research is needed to identify prognostic factors, but current evidence suggests that primary research is of low quality and poorly/selectively reported, which limits subsequent systematic reviews and meta-analysis. This thesis aims to improve prognostic factor research, through the application, development and evaluation of statistical methods to quantify the effect of potential prognostic factors. Firstly, I conduct a new prognostic factor study in pregnant women. The findings suggest that the albumin/creatinine ratio (ACR) is an independent prognostic factor for neonatal and, in particular, maternal composite adverse outcomes; thus ACR may enhance individualised risk prediction and clinical decision-making. Then, a literature review is performed to flag challenges in conducting meta-analysis of prognostic factor studies in the same clinical area. Many issues are identified, especially between-study heterogeneity and potential bias in the thresholds (cut-off points) used to dichotomise continuous factors, and the set of adjustment factors. Subsequent chapters aim to tackle these issues by proposing novel multivariate meta-analysis methods to ‘borrow strength’ across correlated thresholds and/or adjustment factors. These are applied to a variety of examples, and evaluated through simulation, which show how the approach can reduce bias and improve precision of meta-analysis results, compared to traditional univariate methods. In particular, the percentage reduction in the variance is of a similar magnitude to the percentage of data missing at random.
9

Roloff, Verena Sandra. „Statistical methods for using meta-analysis to plan future research“. Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610859.

10

Chiu, Jing-Er. „Applications of bayesian methods to arthritis research /“. free to MU campus, to others for purchase, 2001. http://wwwlib.umi.com/cr/mo/fullcit?p3036813.

11

Cefalu, Matthew Steven. „Statistical Methods for Effect Estimation in Biomedical Research: Robustness and Efficiency“. Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:10850.

Annotation:
Practical application of statistics in biomedical research is predicated on the notion that one can readily return valid effect estimates of the health consequences of treatments (exposures) that are being studied. The goal, as statisticians, should be to provide results that are scientifically useful, to use the available data as efficiently as possible, to avoid unnecessary assumptions, and, if necessary, to develop methods that are robust to incorrect assumptions. In this dissertation, I provide methods for effect estimation that meet these goals. I consider three scenarios: (1) clustered binary outcomes; (2) continuous outcomes with a binary treatment; and (3) continuous outcomes with a potentially missing continuous exposure. In each of these settings, I discuss the shortfalls of current statistical methods for effect estimation available in the literature and propose new and innovative methods that meet the previously stated goals. The validity of each proposed estimator is theoretically verified using asymptotic arguments, and the finite sample behavior is studied through simulation.
12

Miettunen, J. (Jouko). „Statistical methods in psychiatric research, with special reference on factor analysis“. Doctoral thesis, University of Oulu, 2004. http://urn.fi/urn:isbn:9514273672.

Annotation:
Abstract: This interdisciplinary study describes, in the first part, the frequency with which various statistical research designs and methods are reported in psychiatric journals, and investigates how the use of these methods affects the visibility of an article in the form of received citations. In the second part the focus is specifically on factor analysis, and the study presents two applications of this method. Original research articles (N = 448) from four general psychiatric journals in 1996 were reviewed. The journals were the American Journal of Psychiatry, the Archives of General Psychiatry, the British Journal of Psychiatry and the Nordic Journal of Psychiatry. There were differences in the utilisation of statistical procedures among the journals. The use of statistical methods was not strongly associated with the further utilisation of an article. However, an extended description of statistical procedures had a positive effect on received citations. Factor analysis is a statistical method based on correlations of the variables, which is often used when the validity and structure of psychiatric instruments are studied. Exploratory factor analysis is designed to explore underlying latent factors, and in confirmatory factor analysis the aim is to verify a factor structure based on earlier findings in other data sets. Using data from the 31-year follow-up of the Northern Finland 1966 Birth Cohort Study, this study aimed to demonstrate the validity and factor structure of scales measuring temperament (Tridimensional Personality Questionnaire, TPQ, and Temperament and Character Inventory, TCI) and alexithymia (20-item Toronto Alexithymia Scale, TAS-20). The results of exploratory factor analysis indicated good performance of the TCI and TPQ, though the results suggested that some developmental work is still needed. Of the two scales, the TCI worked psychometrically better than the TPQ.
A confirmatory factor analysis showed that the three-factor model of the TAS-20 was in agreement with the Finnish version of the scale. To conclude, future authors of psychiatric journal articles might apply these results when designing their research, so as to present an intelligible and compact analysis combined with a high-quality presentation technique. The results of the factor analyses showed that the TPQ, TCI and TAS-20 can also be used in their Finnish versions.
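The exploratory step can be sketched in a few lines of code. This is a deliberately simplified eigendecomposition-based extraction (principal-component flavor), not the maximum-likelihood factor analysis usually reported for instruments such as the TCI or TAS-20, and the synthetic two-factor data below are hypothetical:

```python
import numpy as np

def extract_factors(data, n_factors):
    """Simplified exploratory factor extraction: eigendecompose the
    correlation matrix and scale the leading eigenvectors into loadings."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]            # largest eigenvalues first
    lam = eigvals[order][:n_factors]
    vec = eigvecs[:, order][:, :n_factors]
    loadings = vec * np.sqrt(lam)                # sqrt(eigenvalue)-scaled vectors
    communalities = np.sum(loadings ** 2, axis=1)  # variance explained per item
    return loadings, communalities
```

On data with two clusters of correlated items, the two extracted factors explain most of each item's variance (high communalities), which is the pattern scale-validation studies look for.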
13

Man, Peter Lau Weilen. „Statistical methods for computing sensitivities and parameter estimates of population balance models“. Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608291.

14

Wu, Chi-Hung Evelyn. „Causal analysis of highway crashes : a systematic analysis approach with subjective and statistical methods“. Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/20030.

15

Boyer, Christopher A. (Christopher Andrew). „Statistical methods for forecasting and estimating passenger willingness-to-pay in airline revenue management“. Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61191.

Annotation:
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010.
Page 170 blank. Cataloged from PDF version of thesis.
Includes bibliographical references (p. 167-169).
The emergence of less restricted fare structures in the airline industry reduced the capability of airlines to segment demand through restrictions such as Saturday night minimum stay, advance purchase, non-refundability, and cancellation fees. As a result, new forecasting techniques such as Hybrid Forecasting and optimization methods such as Fare Adjustment were developed to account for passenger willingness-to-pay. This thesis explores statistical methods for estimating sell-up, the likelihood that a passenger will purchase a higher fare class than originally intended, based solely on historical booking data available in revenue management databases. Due to the inherent sparseness of sell-up data over the booking period, sell-up estimation is often difficult to perform on a per-market basis. On the other hand, estimating sell-up over an entire airline network creates estimates that are too broad and over-generalized. We apply the K-Means clustering algorithm to cluster markets with similar sell-up estimates in an attempt to address this problem, creating a middle ground between system-wide and per-market sell-up estimation. This thesis also formally introduces a new regression-based forecasting method known as Rational Choice. Rational Choice Forecasting creates passenger type categories based on potential willingness-to-pay levels and the lowest open fare class. Using this information, sell-up is accounted for within the passenger type categories, making Rational Choice Forecasting less complex than Hybrid Forecasting. This thesis uses the Passenger Origin-Destination Simulator to analyze the impact of these forecasting and sell-up methods in a controlled, competitive airline environment. The simulation results indicate that determining an appropriate level of market sell-up aggregation through clustering both increases revenue and generates sell-up estimates with a sufficient number of observations.
In addition, the findings show that Hybrid Forecasting creates aggressive forecasts that result in more low fare class closures, leaving room for not only sell-up, but for recapture and spill-in passengers in higher fare classes. On the contrary, Rational Choice Forecasting, while simpler than Hybrid Forecasting with sell-up estimation, consistently generates lower revenues than Hybrid Forecasting (but still better than standard pick-up forecasting). To gain a better understanding of why different markets are grouped into different clusters, this thesis uses regression analysis to determine the relationship between a market's characteristics and its estimated sell-up rate. These results indicate that several market factors, in addition to the actual historical bookings, may predict to some degree passenger willingness-to-pay within a market. Consequently, this research illustrates the importance of passenger willingness-to-pay estimation and its relationship to forecasting in airline revenue management.
by Christopher A. Boyer.
S.M.
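The clustering step the abstract describes can be sketched with a plain Lloyd's-algorithm K-Means (a generic stand-in with deterministic initialization; the per-market feature vectors here are hypothetical):

```python
import numpy as np

def kmeans(points, k, n_iters=50):
    """Plain Lloyd's algorithm with deterministic farthest-point init.

    points: (n_markets, n_features) array, e.g. per-market sell-up
    estimates (the feature choice here is an assumption).
    """
    # Farthest-point initialization: start from the first point, then
    # repeatedly add the point farthest from all centers chosen so far.
    centers = [points[0]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(points - c, axis=1) for c in centers], axis=0)
        centers.append(points[np.argmax(dists)])
    centers = np.array(centers)
    for _ in range(n_iters):
        # Assign each point to its nearest center, then recompute centers.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(d, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers
```

Markets within one cluster would then share a pooled sell-up estimate, trading per-market noise against system-wide over-generalization.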
16

Wong, Chun-mei May, und 王春美. „Multilevel models for survival analysis in dental research“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B3637216X.

17

Yang, Kit-ling, und 楊潔玲. „Statistical analysis of temporal and spatial variations in suicide data“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B42841811.

18

Surujon, Defne. „Computational approaches in infectious disease research: Towards improved diagnostic methods“. Thesis, Boston College, 2020. http://hdl.handle.net/2345/bc-ir:109089.

Annotation:
Thesis advisor: Kenneth Williams
Due to overuse and misuse of antibiotics, the global threat of antibiotic resistance is a growing crisis. Three critical issues surrounding antibiotic resistance are the lack of rapid testing, treatment failure, and evolution of resistance. However, with new technology facilitating data collection and powerful statistical learning advances, our understanding of the bacterial stress response to antibiotics is rapidly expanding. With a recent influx of omics data, it has become possible to develop powerful computational methods that make the best use of growing systems-level datasets. In this work, I present several such approaches that address the three challenges around resistance. While this body of work was motivated by the antibiotic resistance crisis, the approaches presented here favor generalization, that is, applicability beyond just one context. First, I present ShinyOmics, a web-based application that allows visualization, sharing, exploration, and comparison of systems-level data. An overview of transcriptomics data in the bacterial pathogen Streptococcus pneumoniae led to the hypothesis that stress-susceptible strains have more chaotic gene expression patterns than stress-resistant ones. This hypothesis was supported by data from multiple strains, species, antibiotics and non-antibiotic stress factors, leading to the development of a transcriptomic-entropy-based general predictor for bacterial fitness. I show the potential utility of this predictor in predicting antibiotic susceptibility phenotypes and drug minimum inhibitory concentrations, which can be applied to bacterial isolates from patients in the near future. Predictors for antibiotic susceptibility are of great value when there is large phenotypic variability across isolates from the same species. Phenotypic variability is accompanied by genomic diversity harbored within a species.
I address the genomic diversity by developing BFClust, a software package that for the first time enables pan-genome analysis with confidence scores. Using pan-genome level information, I then develop predictors of essential genes unique to certain strains and predictors for genes that acquire adaptive mutations under prolonged stress exposure. Genes that are essential offer attractive drug targets, and those that are essential only in certain strains would make great targets for very narrow-spectrum antibiotics, potentially leading the way to personalized therapies in infectious disease. Finally, the prediction of adaptive outcome can lead to predictions of future cross-resistance or collateral sensitivities. Overall, this body of work exemplifies how computational methods can complement the increasingly rapid data generation in the lab, and pave the way to the development of more effective antibiotic stewardship practices.
Thesis (PhD) — Boston College, 2020
Submitted to: Boston College. Graduate School of Arts and Sciences
Discipline: Biology
APA, Harvard, Vancouver, ISO und andere Zitierweisen
19

Liang, Yiheng. „Computational Methods for Discovering and Analyzing Causal Relationships in Health Data“. Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc804966/.

Der volle Inhalt der Quelle
Annotation:
Publicly available datasets in health science are often large and observational, in contrast to experimental datasets, where a small number of data points are collected in controlled experiments. The causal relationships among variables in an observational dataset are yet to be determined. However, there is significant interest in health science in discovering and analyzing causal relationships from health data, since identified causal relationships can greatly help medical professionals prevent diseases or mitigate their negative effects. Recent advances in computer science, particularly in Bayesian networks, have initiated a renewed interest in causality research. Causal relationships can possibly be discovered by learning network structures from data. However, the number of candidate graphs grows at a more than exponential rate with the number of variables. Exact learning to obtain the optimal structure is thus computationally infeasible in practice. As a result, heuristic approaches are imperative to alleviate the difficulty of the computations. This research provides effective and efficient learning tools for local causal discovery and novel methods of learning causal structures in combination with background knowledge. Specifically, in the direction of constraint-based structural learning, polynomial-time algorithms for constructing causal structures are designed with first-order conditional independence. Algorithms for efficiently discovering non-causal factors are developed and proved. In addition, when the background knowledge is partially known, methods of graph decomposition are provided so as to reduce the number of conditioned variables. Experiments on both synthetic data and real epidemiological data indicate that the provided methods are applicable to large-scale datasets and scalable for causal analysis in health data.
Following the research methods and experiments, this dissertation offers thoughtful discussions on the reliability of causal discoveries in computational health science research, on computational complexity, and on the implications for health science research.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
20

Thomas, Clifford S. „From 'tree' based Bayesian networks to mutual information classifiers : deriving a singly connected network classifier using an information theory based technique“. Thesis, University of Stirling, 2005. http://hdl.handle.net/1893/2623.

Der volle Inhalt der Quelle
Annotation:
For reasoning under uncertainty the Bayesian network has become the representation of choice. However, except where models are considered 'simple', the tasks of construction and inference are provably NP-hard. For modelling larger 'real' world problems this computational complexity has been addressed by methods that approximate the model. The Naive Bayes classifier, which has strong assumptions of independence among features, is a common approach, whilst the class of trees is another less extreme example. In this thesis we propose the use of an information theory based technique as a mechanism for inference in Singly Connected Networks. We call this a Mutual Information Measure classifier, as it corresponds to the restricted class of trees built from mutual information. We show that the new approach provides both an efficient and localised method of classification, with performance accuracies comparable with the less restricted general Bayesian networks. To improve the performance of the classifier, we additionally investigate the possibility of expanding the class Markov blanket by use of a Wrapper approach, and further show that the performance can be improved by focusing on the class Markov blanket and that the improvement is not at the expense of increased complexity. Finally, the two methods are applied to the task of diagnosing the 'real' world medical domain Acute Abdominal Pain. Known to be both a difficult and challenging domain to classify, the objective was to investigate the optimality claims, in respect of the Naive Bayes classifier, that some researchers have argued for classifying in this domain. Despite some loss of representation capabilities we show that the Mutual Information Measure classifier can be effectively applied to the domain and also provides a recognisable qualitative structure without violating 'real' world assertions. 
In respect of its 'selective' variant we further show that the improvement achieves a comparable predictive accuracy to the Naive Bayes classifier, and that the Naive Bayes classifier's 'overall' performance is largely due to the contribution of the majority group, Non-Specific Abdominal Pain, a group of exclusion.
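The tree-building step behind a mutual-information-based classifier weights candidate edges by the mutual information between variables. A minimal sketch of estimating that quantity from data (the binary samples below are hypothetical):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y) in bits between two
    discrete variables, estimated from paired samples."""
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (a, b), c in pxy.items():
        p_joint = c / n
        p_indep = (px[a] / n) * (py[b] / n)
        mi += p_joint * math.log2(p_joint / p_indep)
    return mi

# Hypothetical binary features: y copies x exactly, z is independent of x.
x = [0, 0, 1, 1, 0, 0, 1, 1]
y = x[:]                      # perfectly dependent -> 1 bit
z = [0, 1, 0, 1, 0, 1, 0, 1]  # independent of x    -> 0 bits
print(mutual_information(x, y), mutual_information(x, z))  # 1.0 0.0
```

A tree classifier in this family keeps the edges with the largest such weights while preserving singly connected structure.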
APA, Harvard, Vancouver, ISO und andere Zitierweisen
21

Jimenez, Castro Jorge Alfonso. „Analysis of data from field plot experiments using models for spatial covariance and yield response“. Thesis, University of Reading, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306463.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
22

Keller, Rachel Elizabeth. „Failure to Reject the p-value is Not the Same as Accepting it: The Development, Validation, and Administration of the KPVMI Instrument“. Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/100743.

Der volle Inhalt der Quelle
Annotation:
The purpose of this study was to investigate on a national scale the baseline level of p-value fluency of future researchers (i.e., doctoral students). To that end, two research questions were investigated. The first research question, Can a sufficiently reliable and valid measure of p-value misinterpretations (in a research context) be constructed?, was addressed via the development and validation of the Keller P-value Misinterpretation Inventory instrument (KPVMI). An iterative process of expert review, pilot testing, and field testing resulted in an adequately reliable measure (Alpha = .8030) of p-value fluency as assessed across 18 misinterpretations and 2 process levels, as well as an independently validated sub-measure of p-value fluency in context as assessed across 18 misinterpretations (Alpha = .8298). The second research question, What do the results of the KPVMI administration tell us about the current level of p-value fluency among doctoral students nationally?, was addressed via analysis of a subset of the field test data (n = 147) with respect to performance on the subset of items considered sufficiently validated as developed in Phases I-III (KPVMI-1). The median score was 10 of 18 items answered correctly, indicating that future researchers in the aggregate struggle to properly interpret and report p-values in context; furthermore, there was insufficient evidence to indicate that training and experience are positively correlated with performance. These results aligned with the extant literature regarding the p-value misinterpretations of practicing researchers.
Doctor of Philosophy
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Goodpaster, Aaron M. „Statistical Analysis Methods Development for Nuclear Magnetic Resonance and Liquid Chromatography/Mass Spectroscopy Based Metabonomics Research“. Miami University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=miami1312317652.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Miller, Michael Chad. „Global Resource Management of Response Surface Methodology“. PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1621.

Der volle Inhalt der Quelle
Annotation:
Statistical research can be more difficult to plan than other kinds of projects, since the research must adapt as knowledge is gained. This dissertation establishes a formal language and methodology for designing experimental research strategies with limited resources. It is a mathematically rigorous extension of a sequential and adaptive form of statistical research called response surface methodology. It uses sponsor-given information, conditions, and resource constraints to decompose an overall project into individual stages. At each stage, a "parent" decision-maker determines what design of experimentation to do for its stage of research, and adapts to the feedback from that research's potential "children", each of whom deal with a different possible state of knowledge resulting from the experimentation of the "parent". The research of this dissertation extends the real-world rigor of the statistical field of design of experiments to develop a deterministic, adaptive algorithm that produces reproducible, testable, defendable, adaptive, resource-constrained multi-stage experimental schedules without having to spend physical resources.
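Each stage of a response surface study fits a low-order model to the stage's experimental results and uses its stationary point to decide where to experiment next. A single-factor sketch (the yield data are hypothetical):

```python
import numpy as np

def fit_quadratic_response(x, y):
    """Fit the second-order response surface y = b0 + b1*x + b2*x^2 by
    least squares; return the coefficients and the stationary point
    -b1 / (2*b2), which suggests where to center the next stage."""
    X = np.column_stack([np.ones_like(x), x, x**2])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b, -b[1] / (2 * b[2])

# Hypothetical yields from one experimental stage, peaking near x = 3.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([4.0, 7.0, 8.0, 7.0, 4.0])
coef, x_star = fit_quadratic_response(x, y)
print(round(x_star, 3))  # 3.0 -- center of the region to explore next
```
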
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

So, Moon-tong, und 蘇滿堂. „Applications of Bayesian statistical model selection in social science research". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B39312951.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Joshi, Shirish. „Statistical analysis and validation procedures under the common random number correlation induction strategy for multipopulation simulation experiments“. Thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-02132009-170935/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Rendina-Gobioff, Gianna. „Detecting publication bias in random effects meta-analysis : an empirical comparison of statistical methods“. [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001494.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Chen, Runqiu, und 陳潤球. „Statistical validation of kidney deficiency syndromes (KDS) and the development of a KDS questionnaire in Hong Kong Chinese women aged 40-60 years“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43223813.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Chau, Ka-ki, und 周嘉琪. „Informative drop-out models for longitudinal binary data“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B2962714X.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Coad, D. Stephen. „Outcome-dependent randomisation schemes for clinical trials with fluctuations in patient characteristics“. Thesis, University of Oxford, 1989. http://ora.ox.ac.uk/objects/uuid:970a8103-24fc-496e-82c0-0645f2b4e9c4.

Der volle Inhalt der Quelle
Annotation:
A clinical trial is considered in which two treatments are to be compared. Treatment allocation schemes are usually designed to assign approximately equal numbers of patients to each treatment. The purpose of this thesis is to investigate the efficiency of estimation and the effect of instability in the response variable for allocation schemes which are aimed at reducing the number of patients who receive the inferior treatment. The general background to outcome-dependent allocation schemes is described in Chapter 1. A discussion of ethical and practical problems associated with these methods is presented together with brief details of actual trials conducted. In Chapter 2, the response to treatment is Bernoulli and the trial size is fixed. A simple method for estimating the treatment difference is proposed. Simulation results for a selection of allocation schemes indicate that the effect of instability upon the performance of the schemes can sometimes be substantial. A decision-theory approach is taken in Chapter 3. The trial is conducted in a number of stages and the interests of both the patients in the trial and those who will be treated after the end of the trial are taken into account. Using results for conditional normal distributions, analytical results are derived for estimation of the treatment difference for both a stable and an unstable normal response variable for three allocation schemes. Some results for estimation are also given for other responses. The problem of sequential testing is addressed in Chapter 4. With instability in the response variable, it is shown that the error probabilities for the test for a stable response variable can be approximately preserved by using a modified test statistic with appropriately-widened stopping boundaries. In addition, some recent results for estimation following sequential tests are outlined. Finally, the main conclusions of the thesis are highlighted in Chapter 5.
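As a concrete illustration of an outcome-dependent allocation scheme with a Bernoulli response, here is a simulation of the classic randomized play-the-winner urn rule; the success probabilities are hypothetical, and this is not necessarily one of the schemes analysed in the thesis:

```python
import random

def randomized_play_the_winner(p_a, p_b, n_patients, seed=0):
    """Simulate the randomized play-the-winner urn rule: start with one
    ball per treatment; a success on a treatment adds a ball for that
    treatment, a failure adds a ball for the rival treatment. Returns
    the number of patients allocated to each arm."""
    rng = random.Random(seed)
    urn = ["A", "B"]
    counts = {"A": 0, "B": 0}
    for _ in range(n_patients):
        arm = rng.choice(urn)
        counts[arm] += 1
        success = rng.random() < (p_a if arm == "A" else p_b)
        # reinforce the arm that succeeded, or its rival on failure
        urn.append(arm if success else ("B" if arm == "A" else "A"))
    return counts

# With A clearly superior (hypothetical rates), the rule skews allocation
# toward A, reducing exposure to the inferior treatment.
result = randomized_play_the_winner(p_a=0.9, p_b=0.1, n_patients=500)
print(result)
```
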
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

李友榮 und Yau-wing Lee. „Modelling multivariate survival data using semiparametric models“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B4257528X.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Long, Yongxian, und 龙泳先. „Semiparametric analysis of interval censored survival data“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B45541152.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Fok, Carlotta Ching Ting 1973. „Approximating periodic and non-periodic trends in time-series data“. Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=79765.

Der volle Inhalt der Quelle
Annotation:
Time-series data that reflect a periodic pattern are often used in psychology. In personality psychology, Brown and Moskowitz (1998) used spectral analysis to study whether fluctuations in the expression of four interpersonal behaviors show a cyclical pattern. Spline smoothing had also been used in the past to track the non-periodic trend, but no research has yet been done that combines spectral analysis and spline smoothing. The present thesis describes a new model which combines these two techniques to capture both periodic and non-periodic trends in the data.
The new model is then applied to Brown and Moskowitz's time-series data to investigate the long-term evolution of the four interpersonal behaviors, and to GDP data to examine the periodic and non-periodic patterns in the GDP values of 16 countries. Finally, the extent to which the model is accurate is tested using simulated data.
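The combination of a non-periodic trend with periodic structure can be sketched with ordinary least squares; here a low-order polynomial stands in for the spline smoother (an assumption for brevity), with one Fourier pair carrying the cycle:

```python
import numpy as np

def fit_trend_plus_cycle(t, y, period, poly_degree=2):
    """Least-squares fit of y(t) = polynomial trend + one Fourier pair.
    The polynomial plays the role of the non-periodic smoother; the
    sin/cos pair captures the cycle at the given period."""
    cols = [t**d for d in range(poly_degree + 1)]
    cols += [np.sin(2 * np.pi * t / period), np.cos(2 * np.pi * t / period)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ coef

# Synthetic series: slow quadratic drift plus a cycle of period 12.
t = np.arange(120.0)
y = 0.001 * t**2 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12)
fitted = fit_trend_plus_cycle(t, y, period=12)
print(np.allclose(fitted, y, atol=1e-6))  # the noiseless series is recovered
```
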
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Chan, Pui-shan, und 陳佩珊. „On the use of multiple imputation in handling missing values in longitudinal studies“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B45009879.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Michell, Justin Walter. „A review of generalized linear models for count data with emphasis on current geospatial procedures“. Thesis, Rhodes University, 2016. http://hdl.handle.net/10962/d1019989.

Der volle Inhalt der Quelle
Annotation:
Analytical problems caused by over-fitting, confounding and non-independence in the data are a major challenge for variable selection. As more variables are tested against a certain data set, there is a greater risk that some will explain the data merely by chance, but will fail to explain new data. The main aim of this study is to employ a systematic and practicable variable selection process for the spatial analysis and mapping of historical malaria risk in Botswana using data collected from the MARA (Mapping Malaria Risk in Africa) project and environmental and climatic datasets from various sources. Details are provided of how a spatial database is compiled so that a statistical analysis can proceed. The automation of the entire process is also explored. The final Bayesian spatial model, derived from the non-spatial variable selection procedure using Markov Chain Monte Carlo simulation, was fitted to the data. Winter temperature had the greatest effect on malaria prevalence in Botswana. Summer rainfall, maximum temperature of the warmest month, annual range of temperature, altitude and distance to the closest water source were also significantly associated with malaria prevalence in the final spatial model after accounting for spatial correlation. Using this spatial model, malaria prevalence at unobserved locations was predicted, producing a smooth risk map covering Botswana. The automation of both compiling the spatial database and the variable selection procedure proved challenging and could only be achieved in parts of the process. The non-spatial selection procedure proved practical and was able to identify stable explanatory variables and provide an objective means for selecting one variable over another; however, ultimately it was not entirely successful, because a unique set of spatial variables could not be selected.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Hagen, Clinton Ernest. „Comparing the performance of four calculation methods for estimating the sample size in repeated measures clinical trials where difference in treatment groups means is of interest“. Oklahoma City : [s.n.], 2008.

Den vollen Inhalt der Quelle finden
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Nimon, Kim F. „Comparing outcome measures derived from four research designs incorporating the retrospective pretest“. Thesis, University of North Texas, 2007. https://digital.library.unt.edu/ark:/67531/metadc3931/.

Der volle Inhalt der Quelle
Annotation:
Over the last 5 decades, the retrospective pretest has been used in behavioral science research to battle key threats to the internal validity of posttest-only control-group and pretest-posttest only designs. The purpose of this study was to compare outcome measures resulting from four research design implementations incorporating the retrospective pretest: (a) pre-post-then, (b) pre-post/then, (c) post-then, and (d) post/then. The study analyzed the interaction effect of pretest sensitization and post-intervention survey order on two subjective measures: (a) a control measure not related to the intervention and (b) an experimental measure consistent with the intervention. The validity of the subjective measurement outcomes was assessed by correlating the resulting outcomes with objective performance measurement outcomes. A Situational Leadership® II (SLII) training workshop served as the intervention. The Work Involvement Scale of the self version of the Survey of Management Practices Survey served as the subjective control measure. The Clarification of Goals and Objectives Scale of the self version of the Survey of Management Practices Survey served as the subjective experimental measure. The Effectiveness Scale of the self version of the Leader Behavior Analysis II® served as the objective performance measure. This study detected differences in measurement outcomes from SLII participant responses to an experimental and a control measure. In the case of the experimental measure, differences were found in the magnitude and direction of the validity coefficients. In the case of the control measure, differences were found in the magnitude of the treatment effect between groups. These differences indicate that, for this study, the pre-post-then design produced the most valid results for the experimental measure. For the control measure in this study, the pre-post/then design produced the most valid results. Across both measures, the post/then design produced the least valid results.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Beedell, David C. (David Charles). „The effect of sampling error on the interpretation of a least squares regression relating phosporus and chlorophyll“. Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22720.

Der volle Inhalt der Quelle
Annotation:
Least squares linear regression is a common tool in ecological research. One of the central assumptions of least squares linear regression is that the independent variable is measured without error. But this variable is measured with error whenever it is a sample mean. The significance of such contraventions is not regularly assessed in ecological studies. A simulation program was made to provide such an assessment. The program requires a hypothetical data set, and using estimates of s² it scatters the hypothetical data to simulate the effect of sampling error. A regression line is drawn through the scattered data, and SSE and r² are measured. This is repeated numerous times (e.g. 1000) to generate probability distributions for r² and SSE. From these distributions it is possible to assess the likelihood of the hypothetical data resulting in a given SSE or r². The method was applied to survey data used in a published TP-CHLa regression (Pace 1984). Beginning with a hypothetical, linear data set (r² = 1), simulated scatter due to sampling exceeded the SSE from the regression through the survey data about 30% of the time. Thus chances are 3 out of 10 that the level of uncertainty found in the surveyed TP-CHLa relationship would be observed if the true relationship were perfectly linear. If this is so, more precise and more comprehensive models will only be possible when better estimates of the means are available. This simulation approach should apply to all least squares regression studies that use sampled means, and should be especially relevant to studies that use log-transformed values.
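The simulation procedure described in this annotation is simple enough to sketch directly (all data values below are hypothetical, not the TP-CHLa survey data):

```python
import random
import statistics

def r_squared(xs, ys):
    """Coefficient of determination of the least-squares line of y on x."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return (sxy * sxy) / (sxx * syy)

def scatter_r2_distribution(x_true, y_true, sd_x, n_trials=1000, seed=1):
    """Scatter a hypothetical (here perfectly linear) data set with
    sampling error in x, refit the regression each time, and collect
    the resulting r-squared values -- the simulation in miniature."""
    rng = random.Random(seed)
    r2s = []
    for _ in range(n_trials):
        x_noisy = [a + rng.gauss(0, sd_x) for a in x_true]
        r2s.append(r_squared(x_noisy, y_true))
    return r2s

# Hypothetical perfectly linear data (r-squared = 1 before noise).
x = [float(i) for i in range(1, 21)]
y = [2 * xi + 1 for xi in x]
r2s = scatter_r2_distribution(x, y, sd_x=2.0)
# Sampling error alone drags r-squared below 1 for a perfect relationship.
print(min(r2s) < 1.0, statistics.fmean(r2s) < 1.0)  # True True
```
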
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Novák, Marek. „Zadání a statistické řešení výzkumné úlohy“. Master's thesis, Vysoká škola ekonomická v Praze, 2008. http://www.nusl.cz/ntk/nusl-10407.

Der volle Inhalt der Quelle
Annotation:
This thesis provides an introduction to the statistical approach to research tasks. It focuses on research assignments; the roles of the researcher and the statistician during analysis; ways of gathering data files and the problems connected with them; and the main types of multivariate statistical methods and possible views of their classification. Moreover, this work includes an overview of example research assignments, possibilities for their solutions, and the related data files. The first chapter describes the statistical approach to research assignments, and the second shows concrete examples of these assignments. The enclosed CD includes data files for most of the statistical examples.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Hale, Richard Elliot. „Quantifying accuracy of measurements in the earth sciences by examination of residuals in statistically redundant observations“. Thesis, Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B37687438.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Gilbride, Timothy J. „Models for heterogeneous variable selection“. Columbus, Ohio : Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1083591017.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xii, 138 p.; also includes graphics. Includes abstract and vita. Advisor: Greg M. Allenby, Dept. of Business Administration. Includes bibliographical references (p. 134-138).
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Majeke, Lunga. „Preliminary investigation into estimating eye disease incidence rate from age specific prevalence data“. Thesis, University of Fort Hare, 2011. http://hdl.handle.net/10353/464.

Der volle Inhalt der Quelle
Annotation:
This study presents a methodology for estimating the incidence rate from age-specific prevalence data for three different eye diseases. We consider both the situation where mortality may differ between persons with and without the disease, and the situation where it does not. The method used was developed by Marvin J. Podgor for estimating incidence rates from prevalence data. It applies logistic regression to obtain the smoothed prevalence rates that help in estimating the incidence rate. The study concluded that the use of logistic regression can produce a meaningful model, and that the incidence rates of these diseases were not affected by the assumption of differential mortality.
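Podgor's link between smoothed prevalence and incidence can be sketched as follows: for an irreversible disease with no differential mortality, incidence at age a satisfies λ(a) = p′(a)/(1 − p(a)), and with a logistic fit for p(a) this reduces to β₁·p(a). The coefficients below are hypothetical:

```python
import math

def logistic_prevalence(age, b0, b1):
    """Smoothed age-specific prevalence from a fitted logistic model."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * age)))

def incidence_rate(age, b0, b1):
    """Incidence at a given age under no differential mortality:
    lambda(a) = p'(a) / (1 - p(a)), which for a logistic p(a)
    simplifies to b1 * p(a)."""
    return b1 * logistic_prevalence(age, b0, b1)

# Hypothetical coefficients from a logistic fit to prevalence-by-age data.
b0, b1 = -6.0, 0.08
for age in (40, 60, 80):
    print(age, round(incidence_rate(age, b0, b1), 4))
```
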
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Santos, Herivelto Tiago Marcondes dos. „Padrões de investimentos das empresas de eletricidade em programas de pesquisa e desenvolvimento tecnologico e em eficiencia energetica“. [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/265588.

Der volle Inhalt der Quelle
Annotation:
Advisor: Gilberto De Martino Jannuzzi
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Mecanica
Made available in DSpace on 2018-08-09T12:30:01Z (GMT). No. of bitstreams: 1 Santos_HeriveltoTiagoMarcondesdos_M.pdf: 831089 bytes, checksum: 0c9db7d62f8e506686cc6b9059f1eb4f (MD5) Previous issue date: 2006
Abstract: This work evaluates the investments made by Brazilian electricity utilities in R&D and energy efficiency programs under federal law 9.991/00, which requires the utilities to invest at least 1% of their annual net operating revenues in such programs. The utilities were evaluated according to their shares of the electricity market and their investments by project type (energy efficiency) or technological topic (R&D). The assessment used a market concentration measure, the Herfindahl-Hirschman index, together with descriptive statistics of the programs carried out. Among the conclusions, the electricity utilities exhibited certain investment patterns that can serve as a reference for future R&D and energy efficiency programs.
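The concentration measure used in the study, the Herfindahl-Hirschman index, is simply the sum of squared market shares. A minimal sketch (the shares are hypothetical):

```python
def herfindahl_hirschman(shares):
    """Herfindahl-Hirschman index: the sum of squared market shares,
    here on the conventional 0-10,000 scale (shares in percent)."""
    return sum(s ** 2 for s in shares)

# Hypothetical utility shares of investment in a program type.
concentrated = [70, 20, 10]   # a few utilities dominate the investment
dispersed = [25, 25, 25, 25]  # investment spread evenly

print(herfindahl_hirschman(concentrated))  # 5400
print(herfindahl_hirschman(dispersed))     # 2500
```

Higher values indicate that investment in a given program type is concentrated in a few utilities.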
Master's
Master in Energy Systems Planning
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Лабабиди, М. Р., und M. R. Lababidi. „Механизм принятия управленческих решений на промышленных предприятиях : магистерская диссертация“. Master's thesis, б. и, 2021. http://hdl.handle.net/10995/100716.

Der volle Inhalt der Quelle
Annotation:
Making an optimal management decision is one of the most difficult responsibilities of enterprise managers: as uncertainty and the number of independent variables in the problem grow, decisions become more complex, requiring reliable methods that help managers make better choices among alternative courses of action. The aim of this master's thesis is to develop theoretical and methodological approaches to improving the efficiency of management decision-making based on mathematical and statistical methods. The thesis considers the importance of mathematical and statistical methods as an information-analytical basis for choosing and making an optimal decision. Research and methodological literature and publicly available financial statements of organizations were used as sources. The thesis proposes an algorithm for management decision-making at industrial enterprises, developed by the author, whose main element is the use of mathematical and statistical methods as an information-analytical basis for choosing and making an optimal decision, thereby increasing the efficiency of the management decision-making process at industrial plants.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Marmara, Vincent Anthony. „Prediction of Infectious Disease outbreaks based on limited information“. Thesis, University of Stirling, 2016. http://hdl.handle.net/1893/24624.

Der volle Inhalt der Quelle
Annotation:
The last two decades have seen several large-scale epidemics of international impact, including human, animal and plant epidemics. Policy makers face health challenges that require epidemic predictions based on limited information. There is therefore a pressing need to construct models that allow us to frame all available information to predict an emerging outbreak and to control it in a timely manner. The aim of this thesis is to develop an early-warning modelling approach that can predict emerging disease outbreaks. Based on Bayesian techniques ideally suited to combine information from different sources into a single modelling and estimation framework, I developed a suite of approaches to epidemiological data that can deal with data from different sources and of varying quality. The SEIR model, a particle filter algorithm and a number of influenza-related datasets were utilised to examine various models and methodologies to predict influenza outbreaks. The data included a combination of consultations and diagnosed influenza-like illness (ILI) cases for five influenza seasons. I showed that for the pandemic season, different proxies led to similar behaviour of the effective reproduction number. For the influenza datasets, there exists a strong relationship between the consultation and diagnosed datasets, especially when considering time-dependent models. Individual parameters for different influenza seasons provided similar values, thereby offering an opportunity to utilise such information in future outbreaks. Moreover, my findings showed that when the temperature drops below 14°C, this triggers the first substantial rise in the number of ILI cases, highlighting that temperature data is an important signal for the start of the influenza epidemic. A further survey was carried out among Maltese citizens, and estimates of the under-reporting rate of seasonal influenza were established. 
Based on these findings, a new epidemiological model and framework were developed, providing accurate real-time forecasts with a clear early warning signal to the influenza outbreak. This research utilised a combination of novel data sources to predict influenza outbreaks. Such information is beneficial for health authorities to plan health strategies and control epidemics.
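The SEIR compartmental structure at the core of this approach can be sketched as follows. This is a minimal deterministic Euler discretization with illustrative parameter values, not the Bayesian particle-filter machinery the thesis actually develops:

```python
# Minimal deterministic SEIR sketch (Euler steps); parameters are illustrative.
def seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=0.1):
    s, e, i, r = s0, e0, i0, r0
    n = s0 + e0 + i0 + r0
    history = [(s, e, i, r)]
    for _ in range(round(days / dt)):
        new_exposed = beta * s * i / n * dt   # S -> E: transmission
        new_infectious = sigma * e * dt       # E -> I: end of latency
        new_removed = gamma * i * dt          # I -> R: recovery
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_removed
        r += new_removed
        history.append((s, e, i, r))
    return history

# Toy scenario: R0 = beta/gamma = 2.5, so an epidemic takes off.
hist = seir(beta=0.5, sigma=0.25, gamma=0.2,
            s0=9990, e0=0, i0=10, r0=0, days=120)
peak_i = max(i for _, _, i, _ in hist)
```

In the thesis, a stochastic version of such dynamics is embedded in a particle filter so that the effective reproduction number can be tracked as data arrive; the sketch above only shows the compartmental flows themselves.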
APA, Harvard, Vancouver, ISO, and other citation styles
46

Sales, Filho Nazime 1986. „Análise qualitativa de um modelo de propagação de dengue para populações espacialmente homogêneas“. [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306772.

Full text of the source
Annotation:
Advisor: Bianca Morelli Rodolfo Calsavara
Dissertation (professional master's) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: This work analyses a mathematical model describing the propagation of dengue. The model is given by a system of nonlinear ordinary differential equations, subject to initial conditions, describing two populations: mosquitoes and humans. The mosquito population is divided into two subpopulations: the aquatic phase, comprising eggs, larvae and pupae, and the winged phase, which is subdivided into susceptible and infected mosquitoes. The human population is divided into susceptible, infected and removed subpopulations. The model assumes that the mosquito and human populations have reached spatial homogeneity, i.e., there is no movement of these populations affecting the spread of the disease. The main interest of this work is the qualitative analysis of the behaviour of the populations around the equilibrium points of the system. To this end, in addition to analytical tools, numerical simulations were performed using the Maple software. In this way it was possible to obtain information about the spread of dengue, under certain hypotheses, even without an explicit solution of the system
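A compartmental structure of the kind described, with an aquatic phase, susceptible and infected winged mosquitoes, and SIR humans, can be sketched as below. The equations and all rate values are illustrative stand-ins, not the equations or parameters of the thesis:

```python
# Hypothetical dengue-type compartmental sketch: aquatic phase A,
# susceptible/infected winged mosquitoes (Ms, Mi), and
# susceptible/infected/removed humans (S, I, R). Euler integration.
def step(state, p, dt):
    A, Ms, Mi, S, I, R = state
    M, N = Ms + Mi, S + I + R
    dA  = p["phi"] * M * (1 - A / p["C"]) - (p["gam"] + p["mua"]) * A
    dMs = p["gam"] * A - p["bm"] * Ms * I / N - p["mum"] * Ms
    dMi = p["bm"] * Ms * I / N - p["mum"] * Mi          # mosquito infection
    dS  = -p["bh"] * S * Mi / M                          # human infection
    dI  = p["bh"] * S * Mi / M - p["sig"] * I
    dR  = p["sig"] * I                                   # human removal
    return tuple(x + d * dt for x, d in zip(state, (dA, dMs, dMi, dS, dI, dR)))

p = dict(phi=1.0, C=1000, gam=0.1, mua=0.05, bm=0.3, mum=0.05, bh=0.3, sig=0.1)
state = (100.0, 200.0, 10.0, 990.0, 10.0, 0.0)
for _ in range(2000):            # 100 time units with dt = 0.05
    state = step(state, p, 0.05)
```

The qualitative analysis in the thesis concerns the equilibria of such a system; numerically iterating the flows, as here, is one way to observe the behaviour around those equilibria without an explicit solution.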
Master's degree
Applied and Computational Mathematics
Master in Applied and Computational Mathematics
APA, Harvard, Vancouver, ISO, and other citation styles
47

Borodin, Valeria. „Optimisation et simulation d'une chaîne logistique : application au secteur de l'agriculture“. Thesis, Troyes, 2014. http://www.theses.fr/2014TROY0034/document.

Full text of the source
Annotation:
This thesis on the optimisation and simulation of the agricultural supply chain focuses on the collection activity, the harvest period being crucial in terms of quantity and quality of production, i.e. income for farmers and wealth for the region. More specifically, it involves the harvesting, transport and storage of cereals, carried out by several geographically dispersed farms. To address the complexity and dynamic nature of the supply chain of a French agricultural cooperative in its entirety, we developed a decision support system grounded in Operational Research (OR), drawing on linear, robust and stochastic optimisation, discrete-event flow simulation, and their coupling. Moreover, the synergy created between OR tools, the geographical information system, and inferential and predictive statistics makes the decision support system competitive and efficient, able to meet the industrial partner's needs
To overcome the new challenges facing the agricultural sector, imposed by globalisation, changing market demands and price instability, the crop production supply chain must be highly reactive and flexible, with high yield and low cost. Improving and, where necessary, reconfiguring it can increase efficiency, responsiveness and business integration, and make it able to confront market competition. The thesis is placed in this context and aims to support decision making in the crop harvesting activity, which is the pivotal stage in the cereal production circuit owing to its high cost and its impact on the returns earned. Managing the harvest involves gathering, transportation and storage operations performed by a collection of geographically dispersed agricultural holdings. Besides its practical relevance, this thesis belongs to Operational Research (OR) and, more specifically, draws on linear and stochastic programming, discrete-event simulation, and their coupling. In addition, the synergy created between OR, inferential and predictive statistics, and geographical information system tools makes the decision support system competitive, efficient and responsive
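The flavour of the underlying allocation problem can be illustrated with a toy field-to-silo assignment solved by brute force. The thesis's actual LP and simulation framework handles capacities, time windows and uncertainty, none of which appear in this sketch; the cost matrix below is invented:

```python
from itertools import permutations

# Toy harvest logistics: assign each field's crop to one distinct silo,
# minimizing total transport cost, by enumerating all assignments.
def best_assignment(cost):
    fields = range(len(cost))
    best = min(permutations(range(len(cost[0])), len(cost)),
               key=lambda perm: sum(cost[f][perm[f]] for f in fields))
    return best, sum(cost[f][best[f]] for f in fields)

cost = [[4, 2, 8],   # transport cost from field 0 to silos 0..2
        [4, 3, 7],   # field 1
        [3, 1, 6]]   # field 2
best, total = best_assignment(cost)
```

Brute force is only feasible for tiny instances; realistic instances of this kind of problem are exactly what linear and stochastic programming, as used in the thesis, are for.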
APA, Harvard, Vancouver, ISO, and other citation styles
48

Oteniya, Lloyd. „Bayesian belief networks for dementia diagnosis and other applications : a comparison of hand-crafting and construction using a novel data driven technique“. Thesis, University of Stirling, 2008. http://hdl.handle.net/1893/497.

Full text of the source
Annotation:
The Bayesian network (BN) formalism is a powerful representation for encoding domains characterised by uncertainty. However, before it can be used it must first be constructed, which is a major challenge for any real-life problem. There are two broad approaches: the hand-crafted approach, which relies on a human expert, and the data-driven approach, which relies on data. The former is useful; however, issues such as human bias can introduce errors into the model. We conducted a literature review of the expert-driven approach, selected a number of common methods, and engineered a framework to assist non-BN experts with expert-driven construction of BNs. The data-driven approach uses algorithms to construct the model from a data set; however, construction from data is provably NP-hard. To address this, approximate, heuristic algorithms have been proposed, in particular algorithms that assume an order over the nodes, thereby reducing the search space. Traditionally, this approach relies on an expert providing the order among the variables, but an expert may not always be available, or may be unable to provide one. Nevertheless, if a good order is available, these order-based algorithms have demonstrated good performance. More recent approaches attempt to 'learn' a good order and then use an order-based algorithm to discover the structure. To eliminate the need for order information during construction, we propose a search in the entire space of Bayesian network structures; we present a novel approach for carrying out this task, and we demonstrate its performance against existing algorithms that search the entire space and the space of orders. Finally, we employ the hand-crafting framework to construct models for the task of diagnosis in a real-life medical domain, dementia diagnosis.
We collected real dementia data from clinical practice and applied the data-driven algorithms developed here to assess the concordance between the reference models developed by hand and the models derived from real clinical data.
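The search-space reduction that motivates order-based algorithms can be made concrete: with a fixed node order, each node chooses its parents only among its predecessors, whereas the unrestricted space of labelled DAGs grows much faster (Robinson's recurrence). A small counting sketch:

```python
from math import comb

# With a fixed order, node in position i may pick any subset of its
# i predecessors as parents: product of 2^i, i.e. 2^(n(n-1)/2) structures.
def dags_given_order(n):
    return 2 ** (n * (n - 1) // 2)

# Robinson's recurrence for the number of labelled DAGs on n nodes:
# a(n) = sum_{k=1..n} (-1)^(k+1) * C(n,k) * 2^(k(n-k)) * a(n-k), a(0) = 1.
def count_all_dags(n):
    a = [1]
    for m in range(1, n + 1):
        a.append(sum((-1) ** (k + 1) * comb(m, k) * 2 ** (k * (m - k)) * a[m - k]
                     for k in range(1, m + 1)))
    return a[n]
```

Already at four nodes the order-constrained space (64 structures) is an order of magnitude smaller than the full DAG space (543), and the gap widens super-exponentially, which is why fixing or learning an order makes structure search tractable.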
APA, Harvard, Vancouver, ISO, and other citation styles
49

Makulube, Mzamo. „Bioequivalence tests based on individual estimates using non-compartmental or model-based analysis“. Thesis, 2019. https://hdl.handle.net/10539/29370.

Full text of the source
Annotation:
A research report submitted in partial fulfilment of Mathematical Statistics Masters by Coursework and Research Report to the Faculty of Science, University of the Witwatersrand, Johannesburg, 2019
The growing demand for generic drugs has led to growth in the generic drug industry and, with it, a growing demand for bioequivalence studies. A key challenge in bioequivalence studies lies in the method used to quantify bioavailability. Bioavailability is commonly estimated by the area under the concentration-time curve (AUC), which is traditionally estimated by Non-Compartmental Analysis (NCA), for example the trapezoidal rule with interpolation. However, when the number of samples per subject is insufficient, the NCA estimates may be biased, which can result in incorrect conclusions about bioequivalence. Alternatively, AUC can be estimated with a Non-Linear Mixed Effects Model (NLMEM). The objective of this study is to compare bioequivalence conclusions based on lnAUC estimated by the NCA approach with those based on lnAUC estimated by the NLMEM approach. The two approaches are compared on the resulting bias when a linear mixed effects model is used to analyse the lnAUC data estimated by each method. The methods are evaluated on simulated and real data. 2x2 crossover designs of different sample sizes and sampling-time intensities are simulated under two null hypotheses. In each crossover design, concentration profiles are simulated with different levels of between-subject variability, within-subject variability and residual error variance. A higher bias is obtained with the lnAUC estimated by the NCA approach for trials with a limited number of samples per subject. The NCA estimates provide satisfactory global Type I error results. The NLMEM fails to distinguish existing formulation differences when the residual variability is high.
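The non-compartmental AUC estimate referred to above can be sketched directly; the sampling schedule and concentrations below are illustrative, not data from the study:

```python
from math import log

# Non-compartmental AUC by the trapezoidal rule: sum the areas of the
# trapezoids between successive concentration-time samples.
def auc_trapezoid(times, conc):
    return sum((conc[k] + conc[k + 1]) / 2 * (times[k + 1] - times[k])
               for k in range(len(times) - 1))

times = [0, 1, 2, 4, 8]            # hours (hypothetical sampling schedule)
conc  = [0.0, 4.0, 3.0, 1.5, 0.5]  # drug concentration (mg/L)
auc = auc_trapezoid(times, conc)
ln_auc = log(auc)                  # bioequivalence tests work on ln(AUC)
```

The sparser the sampling schedule, the cruder each trapezoid's approximation of the true curve, which is the source of the NCA bias under limited samples per subject that the study investigates.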
TL (2020)
APA, Harvard, Vancouver, ISO, and other citation styles
50

Palmer, Cameron Douglas. „Developing Statistical Methods for Incorporating Complexity in Association Studies“. Thesis, 2017. https://doi.org/10.7916/D8SQ9BX2.

Full text of the source
Annotation:
Genome-wide association studies (GWAS) have identified thousands of genetic variants associated with hundreds of human traits. Yet the common variant model tested by traditional GWAS only provides an incomplete explanation for the known genetic heritability of many traits. Many divergent methods have been proposed to address the shortcomings of GWAS, including most notably the extension of association methods into rarer variants through whole exome and whole genome sequencing. GWAS methods feature numerous simplifications designed for feasibility and ease of use, as opposed to statistical rigor. Furthermore, no systematic quantification of the performance of GWAS across all traits exists. Beyond improving the utility of data that already exist, a more thorough understanding of the performance of GWAS on common variants may elucidate flaws not in the method but rather in its implementation, which may pose a continued or growing threat to the utility of rare variant association studies now underway. This thesis focuses on systematic evaluation and incremental improvement of GWAS modeling. We collect a rich dataset containing standardized association results from all GWAS conducted on quantitative human traits, finding that while the majority of published significant results in the field do not disclose sufficient information to determine whether the results are actually valid, those that do replicate precisely in concordance with their statistical power when conducted in samples of similar ancestry and reporting accurate per-locus sample sizes. We then look to the inability of effectively all existing association methods to handle missingness in genetic data, and show that adapting missingness theory from statistics can both increase power and provide a flexible framework for extending most existing tools with minimal effort. We finally undertake novel variant association in a schizophrenia cohort from a bottleneck population. 
We find that the study itself is confounded by nonrandom population sampling and identity-by-descent, manifesting as batch effects correlated with outcome that remain in novel variants after all sample-wide quality control. On the whole, these results emphasize both the past and present utility and reliability of the GWAS model, as well as the extent to which lessons from the GWAS era must inform genetic studies moving forward.
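The basic per-SNP model underlying a quantitative-trait GWAS can be sketched as a simple regression of the trait on genotype dosage; the genotype and trait values below are illustrative toy data:

```python
# Minimal per-SNP association sketch: regress a quantitative trait on
# genotype dosage (0/1/2 copies of the effect allele) and return the
# per-allele effect estimate and its standard error.
def snp_association(genotypes, trait):
    n = len(genotypes)
    mg = sum(genotypes) / n
    mt = sum(trait) / n
    sxx = sum((g - mg) ** 2 for g in genotypes)
    sxy = sum((g - mg) * (t - mt) for g, t in zip(genotypes, trait))
    beta = sxy / sxx                     # per-allele effect estimate
    resid = [t - mt - beta * (g - mg) for g, t in zip(genotypes, trait)]
    sse = sum(r * r for r in resid)
    se = (sse / (n - 2) / sxx) ** 0.5    # standard error of the slope
    return beta, se

beta, se = snp_association([0, 1, 2, 0, 1, 2], [1.0, 2.0, 3.0, 1.0, 2.0, 3.0])
```

A GWAS repeats this test at millions of variants; the thesis's points about accurate per-locus sample sizes and missingness concern exactly the inputs to this kind of per-locus model.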
APA, Harvard, Vancouver, ISO, and other citation styles
