
Doctoral dissertations on the topic "Sampling with varying probabilities"



Consult the 46 best doctoral dissertations for your research on the topic "Sampling with varying probabilities".


You can also download the full text of each publication as a .pdf file and read its abstract online, when these are available in the metadata.

Browse doctoral dissertations from a wide variety of disciplines and compile an appropriate bibliography.

1

Dimy, Anguima Ibondzi Herve. "Estimation of Limiting Conditional Probabilities for Regularly Varying Time Series". Thèse, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/30906.

Abstract:
In this thesis we are concerned with estimation of clustering probabilities for univariate heavy tailed time series. We employ functional convergence of a bivariate tail empirical process to conclude asymptotic normality of an estimator of the clustering probabilities. Theoretical results are illustrated by simulation studies.
2

Datta, Srabosti. "POWER REDUCTION BY DYNAMICALLY VARYING SAMPLING RATE". UKnowledge, 2006. http://uknowledge.uky.edu/gradschool_theses/275.

Abstract:
In modern digital audio applications, a continuous audio signal stream is sampled at a fixed sampling rate, which is always greater than twice the highest frequency of the input signal, to prevent aliasing. A more energy-efficient approach is to dynamically change the sampling rate based on the input signal. In the dynamic sampling rate technique, fewer samples are processed when there is little high-frequency content in the signal. The perceived quality of the signal is unchanged by this technique. Processing fewer samples involves less computational work, so processor speed and voltage can be reduced. This reduction in processor speed and voltage has been shown to cut power consumption by up to 40% compared with running the audio stream at a fixed sampling rate.
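
The rate-selection idea in this abstract can be illustrated compactly. Below is a minimal Python sketch, not Datta's implementation: the function name, the candidate-rate list, and the -60 dB significance threshold are illustrative assumptions. It picks the lowest candidate rate whose Nyquist limit covers the block's significant spectral content.

```python
import numpy as np

def choose_sampling_rate(block, fs, rates=(8000, 16000, 32000, 44100), thresh_db=-60.0):
    """Pick the lowest candidate rate whose Nyquist limit covers the block's content.

    block: audio samples taken at the fixed rate fs.
    f_max is the highest frequency whose magnitude is within thresh_db of the peak.
    """
    spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
    freqs = np.fft.rfftfreq(len(block), d=1.0 / fs)
    floor = spectrum.max() * 10.0 ** (thresh_db / 20.0)
    f_max = freqs[spectrum > floor].max()        # highest significant frequency
    for r in sorted(rates):
        if r / 2.0 >= f_max:                     # Nyquist criterion
            return r
    return fs
```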
3

Liao, Yijie. "Testing of non-unity risk ratio under inverse sampling". HKBU Institutional Repository, 2006. http://repository.hkbu.edu.hk/etd_ra/707.

4

Datta, Srabosti. "Power reduction by dynmically [sic] varying sampling rate". Lexington, Ky. : [University of Kentucky Libraries], 2006. http://lib.uky.edu/ETD/ukyelen2006t00492/thesis.pdf.

Abstract:
Thesis (M.S.)--University of Kentucky, 2006.
Title from document title page (viewed on October 31, 2006). Document formatted into pages; contains: viii, 70 p. : ill. (some col.). Includes abstract and vita. Includes bibliographical references (p. 66-69).
5

Gulez, Taner. "Weapon-target Allocation And Scheduling For Air Defense With Time Varying Hit Probabilities". Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608471/index.pdf.

Abstract:
In this thesis, mathematical modeling and heuristic approaches are developed for the surface-to-air weapon-target allocation problem with time-varying single-shot hit probabilities (SSHP) against linearly approaching threats. First, a nonlinear mathematical model for the problem is formulated to maximize the sum of the weighted survival probabilities of the assets to be defended. Next, the nonlinear objective function and constraints are linearized. Time-varying SSHP values are approximated with appropriate closed forms and adapted to the resulting linear model. This model is tested on different scenarios and the results are compared with those of the original nonlinear model. It is observed that the linear model is solved much faster than the nonlinear model and produces reasonably good solutions. It is inferred from the solutions of both models that engagements should be made as late as possible, when the threats are closer to the weapons, to obtain higher SSHP values. A construction heuristic is developed based on this scheme. An improvement heuristic that uses the solution of the construction heuristic is also proposed. Finally, all methods are tested on forty defense scenarios. The two fastest solution methods, the linear model and the construction heuristic, are compared on a large scenario and proposed as appropriate solution techniques for weapon-target allocation problems.
6

Myhrer, Øystein. "Joint Default Probabilities: A Model with Time-varying and Correlated Sharpe Ratios and Volatilities". Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for samfunnsøkonomi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-17400.

Abstract:
The probabilities of joint default among companies are one of the major concerns in credit risk management, mainly because joint default affects the distribution of loan portfolio losses and is therefore critical when allocating capital for solvency purposes. This paper proposes a multivariate model with time-varying and correlated Sharpe ratios and volatilities for the value of the firms, calibrated to fit sample averages between and within the rating categories A and Ba. We found that, in the standard Merton framework, the model performs well with one average A-rated firm and one average Ba-rated firm, and with two average Ba-rated firms, when the joint default probabilities are compared with similar empirical probabilities.
7

Schelin, Lina. "Spatial sampling and prediction". Doctoral thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-53286.

Abstract:
This thesis discusses two aspects of spatial statistics: sampling and prediction. In spatial statistics, we observe some phenomena in space. Space is typically of two or three dimensions, but can be of higher dimension. Questions in mind could be: What is the total amount of gold in a gold-mine? How much precipitation could we expect in a specific unobserved location? What is the total tree volume in a forest area? In spatial sampling the aim is to estimate global quantities, such as population totals, based on samples of locations (papers III and IV). In spatial prediction the aim is to estimate local quantities, such as the value at a single unobserved location, with a measure of uncertainty (papers I, II and V). In papers III and IV, we propose sampling designs for selecting representative probability samples in the presence of auxiliary variables. If the phenomena under study have clear trends in the auxiliary space, estimation of population quantities can be improved by using representative samples. Such samples also enable estimation of population quantities in subspaces and are especially needed for multi-purpose surveys, when several target variables are of interest. In papers I and II, the objective is to construct valid prediction intervals for the value at a new location, given observed data. Prediction intervals typically rely on the kriging predictor having a Gaussian distribution. In paper I, we show that the distribution of the kriging predictor can be far from Gaussian, even asymptotically. This motivated us to propose a semiparametric method that does not require distributional assumptions. Prediction intervals are constructed from the plug-in ordinary kriging predictor. In paper V, we consider prediction in the presence of left-censoring, where observations falling below a minimum detection limit are not fully recorded. We review existing methods and propose a semi-naive method. The semi-naive method is compared to one model-based method and two naive methods, all based on variants of the kriging predictor.
8

李泉志 and Chuen-chi Lee. "The control of a varying gain process using a varying sampling-rate PID controller with application to pH control". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1993. http://hub.hku.hk/bib/B31211598.

9

Lee, Chuen-chi. "The control of a varying gain process using a varying sampling-rate PID controller with application to pH control /". [Hong Kong] : University of Hong Kong, 1993. http://sunzi.lib.hku.hk/hkuto/record.jsp?B1366573X.

10

Peng, Linghua. "Normalizing constant estimation for discrete distribution simulation". 1998. Digital version accessible at: http://wwwlib.umi.com/cr/utexas/main.

11

Lundquist, Anders. "Contributions to the theory of unequal probability sampling". Doctoral thesis, Umeå : Department of Mathematics and Mathematical Statistics, Umeå University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-22459.

12

Wilson, Celia M. "Attenuation of the Squared Canonical Correlation Coefficient Under Varying Estimates of Score Reliability". Thesis, University of North Texas, 2010. https://digital.library.unt.edu/ark:/67531/metadc30528/.

Abstract:
Research pertaining to the distortion of the squared canonical correlation coefficient has traditionally been limited to the effects of sampling error and associated correction formulas. The purpose of this study was to compare the degree of attenuation of the squared canonical correlation coefficient under varying conditions of score reliability. Monte Carlo simulation methodology was used to fulfill the purpose of this study. Initially, data populations with various manipulated conditions were generated (N = 100,000). Subsequently, 500 random samples were drawn with replacement from each population, and the data were subjected to canonical correlation analyses. The canonical correlation results were then analyzed using descriptive statistics and an ANOVA design to determine under which condition(s) the squared canonical correlation coefficient was most attenuated when compared to population Rc2 values. This information was analyzed and used to determine what effect, if any, the different conditions considered in this study had on Rc2. The results from this Monte Carlo investigation clearly illustrated the importance of score reliability when interpreting study results. As evidenced by the outcomes presented, the more measurement error (lower reliability) present in the variables included in an analysis, the more attenuation experienced by the effect size(s) produced in the analysis, in this case Rc2. These results also demonstrated the role that between- and within-set correlation, variable set size, and sample size played in the attenuation levels of the squared canonical correlation coefficient.
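
The attenuation mechanism this study investigates can be previewed in the bivariate special case, where the first canonical correlation reduces to the Pearson correlation and classical theory predicts r_obs ≈ r_true·sqrt(rel_x·rel_y). A small Monte Carlo sketch in Python (illustrative only; the parameter values are arbitrary, not the study's conditions):

```python
import numpy as np

rng = np.random.default_rng(0)
rho_true, rel_x, rel_y = 0.6, 0.8, 0.7     # true correlation, score reliabilities
n, reps = 500, 2000

obs = []
for _ in range(reps):
    t = rng.multivariate_normal([0, 0], [[1, rho_true], [rho_true, 1]], size=n)
    # add measurement error so that var(true)/var(observed) equals the reliability
    x = t[:, 0] + rng.normal(0, np.sqrt((1 - rel_x) / rel_x), n)
    y = t[:, 1] + rng.normal(0, np.sqrt((1 - rel_y) / rel_y), n)
    obs.append(np.corrcoef(x, y)[0, 1] ** 2)

print("mean observed r^2:", np.mean(obs))
print("attenuation prediction:", (rho_true * np.sqrt(rel_x * rel_y)) ** 2)
```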
13

Shen, Gang. "Bayesian predictive inference under informative sampling and transformation". Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0429104-142754/.

Abstract:
Thesis (M.S.) -- Worcester Polytechnic Institute.
Keywords: Ignorable Model; Transformation; Poisson Sampling; PPS Sampling; Gibbs Sampler; Inclusion Probabilities; Selection Bias; Nonignorable Model; Bayesian Inference. Includes bibliographical references (p. 34-35).
14

Grafström, Anton. "On unequal probability sampling designs". Doctoral thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-33701.

Abstract:
The main objective in sampling is to select a sample from a population in order to estimate some unknown population parameter, usually a total or a mean of some interesting variable. When the units in the population do not have the same probability of being included in a sample, it is called unequal probability sampling. The inclusion probabilities are usually chosen to be proportional to some auxiliary variable that is known for all units in the population. When unequal probability sampling is applicable, it generally gives much better estimates than sampling with equal probabilities. This thesis consists of six papers that treat unequal probability sampling from a finite population of units. A random sample is selected according to some specified random mechanism called the sampling design. For unequal probability sampling there exist many different sampling designs. The choice of sampling design is important since it determines the properties of the estimator that is used. The main focus of this thesis is on evaluating and comparing different designs. Often it is preferable to select samples of a fixed size and hence the focus is on such designs. It is also important that a design has a simple and efficient implementation in order to be used in practice by statisticians. Some effort has been made to improve the implementation of some designs. In Paper II, two new implementations are presented for the Sampford design. In general a sampling design should also have a high level of randomization. A measure of the level of randomization is entropy. In Paper IV, eight designs are compared with respect to their entropy. A design called adjusted conditional Poisson has maximum entropy, but it is shown that several other designs are very close in terms of entropy. A specific situation called real time sampling is treated in Paper III, where a new design called correlated Poisson sampling is evaluated. In real time sampling the units pass the sampler one by one. Since each unit only passes once, the sampler must directly decide for each unit whether or not it should be sampled. The correlated Poisson design is shown to have much better properties than traditional methods such as Poisson sampling and systematic sampling.
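
For orientation, Poisson sampling, the simplest unequal probability design discussed above (it has random sample size, which is one reason fixed-size designs such as Sampford's are often preferred), can be sketched in a few lines of Python together with the Horvitz-Thompson estimator of a total. All data here are synthetic and illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_target = 1000, 100
size = rng.lognormal(mean=0.0, sigma=1.0, size=N)   # auxiliary size variable
y = 2.0 * size + rng.normal(0, 0.5, N)              # study variable, roughly proportional

pi = n_target * size / size.sum()                   # inclusion probabilities ~ size
pi = np.minimum(pi, 1.0)                            # cap at 1 (large units always sampled)

sampled = rng.random(N) < pi                        # Poisson sampling: independent trials
ht_total = np.sum(y[sampled] / pi[sampled])         # Horvitz-Thompson estimator
print(ht_total, y.sum())                            # estimate vs. true population total
```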
15

Thajeel, Jawad. "Kriging-based Approaches for the Probabilistic Analysis of Strip Footings Resting on Spatially Varying Soils". Thesis, Nantes, 2017. http://www.theses.fr/2017NANT4111/document.

Abstract:
The probabilistic analysis of geotechnical structures involving spatially varying soil properties is generally performed using the Monte Carlo Simulation methodology. This method is not suitable for the computation of the small failure probabilities encountered in practice because it becomes very time-expensive in such cases, due to the large number of simulations required to calculate accurate values of the failure probability. Three probabilistic approaches (named AK-MCS, AK-IS and AK-SS), based on active learning and combining Kriging with one of three simulation techniques (Monte Carlo Simulation MCS, Importance Sampling IS or Subset Simulation SS), were developed. Within AK-MCS, a Monte Carlo simulation without evaluating the whole population is performed. Indeed, the population is predicted using a Kriging meta-model which is defined using only a few points of the population, thus significantly reducing the computation time with respect to the crude MCS. In AK-IS, a more efficient sampling technique 'IS' is used instead of 'MCS'. In the framework of this approach, the small failure probability is estimated with a similar accuracy as AK-MCS but using a much smaller initial population, thus significantly reducing the computation time. Finally, in AK-SS, a more efficient sampling technique 'SS' is proposed. This technique overcomes the search for the design points and thus it can deal with arbitrary shapes of the limit state surfaces. All three methods were applied to the case of a vertically loaded strip footing resting on a spatially varying soil. The obtained results are presented and discussed.
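
A schematic of the AK-MCS loop described above can be written with a Gaussian process (Kriging) surrogate from scikit-learn and the commonly used U learning function (enrich the design where the sign of the limit state is most uncertain, stop when min U >= 2). This is a sketch of the generic algorithm, not the thesis's strip-footing model: the limit-state function g below is a toy stand-in and all sizes are arbitrary.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g(x):                       # toy limit state (failure when g < 0), stand-in for a FE model
    return 3.0 - x[:, 0] - x[:, 1] ** 2

rng = np.random.default_rng(2)
X_mc = rng.normal(size=(10000, 2))          # Monte Carlo population (standard normal inputs)
idx = rng.choice(len(X_mc), 12, replace=False)
X_doe, y_doe = X_mc[idx], g(X_mc[idx])      # small initial design of experiments

for _ in range(50):
    gp = GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True).fit(X_doe, y_doe)
    mu, sd = gp.predict(X_mc, return_std=True)
    U = np.abs(mu) / np.maximum(sd, 1e-12)  # U learning function: low U = uncertain sign
    if U.min() >= 2.0:                      # stopping rule: sign of g certain enough
        break
    best = np.argmin(U)                     # enrich the design at the most ambiguous point
    X_doe = np.vstack([X_doe, X_mc[best]])
    y_doe = np.append(y_doe, g(X_mc[best:best + 1]))

print("failure probability estimate:", np.mean(mu < 0))
```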
16

Tanaka, Shunji. "Studies on Sampled-Data Control Systems -the H[∞] Problem of Discrete Linear Periodically Time-Varying Systems and Nonuniform Sampling Problems". Kyoto University, 1999. http://hdl.handle.net/2433/77928.

17

Cancino, Cancino Jorge Orlando. "Analyse und praktische Umsetzung unterschiedlicher Methoden des Randomized Branch Sampling". Doctoral thesis, Göttingen, 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=969133375.

18

Vengattaramane, Kameswaran. "Efficient Reconstruction of Two-Periodic Nonuniformly Sampled Signals Applicable to Time-Interleaved ADCs". Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-6253.

Abstract:
Nonuniform sampling occurs in many practical applications, either intentionally or unintentionally. This thesis deals with the reconstruction of two-periodic nonuniformly sampled signals, which is of great importance in two-channel time-interleaved analog-to-digital converters. In a two-channel time-interleaved ADC, aperture delay mismatch between the channels gives rise to a two-periodic nonuniform sampling pattern, resulting in distortion and severely affecting the linearity of the converter. The problem is solved by digitally recovering a uniformly sampled sequence from a two-periodic nonuniformly sampled set. For this purpose, a time-varying FIR filter is employed. If the sampling pattern is known and fixed, this filter can be designed in an optimal way using least-squares or minimax design. When the sampling pattern changes now and then, as during the normal operation of a time-interleaved ADC, these filters have to be redesigned. This has implications for the implementation cost, as general on-line design is cumbersome. To overcome this problem, a novel time-varying FIR filter with polynomial impulse response is developed and characterized in this thesis. The main advantage of these filters is that on-line design is no longer needed. It now suffices to perform only one design before implementation, and in the implementation it is enough to adjust only one variable parameter when the sampling pattern changes. Thus the high implementation cost is decreased substantially.

Filter design and the associated performance metrics have been validated using MATLAB. The design space has been explored to the limits imposed by machine precision on matrix inversions. Studies related to finite wordlength effects in practical filter realisations have also been carried out. These formulations can also be extended to the general M-periodic nonuniform sampling case.
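
As a point of comparison for the fixed-pattern case mentioned above, a least-squares FIR design that compensates a known fractional timing mismatch in one ADC channel can be written directly. This is a generic sketch, not the thesis's polynomial-impulse-response filter; the function name, tap count, mismatch d, and band edge are all illustrative assumptions.

```python
import numpy as np

def ls_fractional_delay(num_taps=31, d=0.1, band=0.9):
    """Least-squares FIR approximating a delay of (num_taps//2 + d) samples.

    d is the fractional timing mismatch (in samples) of one ADC channel;
    band restricts the fit to |w| <= band*pi, where such filters are accurate.
    """
    D = num_taps // 2 + d
    w = np.linspace(0, band * np.pi, 8 * num_taps)       # dense frequency grid
    A = np.exp(-1j * np.outer(w, np.arange(num_taps)))   # A[k, n] = e^{-j w_k n}
    target = np.exp(-1j * w * D)                         # ideal delayed response
    # solve min ||A h - target|| for real h by stacking real and imaginary parts
    h, *_ = np.linalg.lstsq(np.vstack([A.real, A.imag]),
                            np.concatenate([target.real, target.imag]), rcond=None)
    return h

h = ls_fractional_delay()       # apply with np.convolve(channel_samples, h)
```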

19

Ciccarelli, Matteo. "Bayesian inference in heterogeneous dynamic panel data models: three essays". Doctoral thesis, Universitat Pompeu Fabra, 2001. http://hdl.handle.net/10803/31792.

Abstract:
The task of this work is to discuss issues concerning the specification, estimation, inference and forecasting in multivariate dynamic heterogeneous panel data models from a Bayesian perspective. Three essays linked by a few common ideas compose the work. Multivariate dynamic models (mainly VARs) based on micro or macro panel data sets have become increasingly popular in macroeconomics, especially to study the transmission of real and monetary shocks across economies. This great use of the panel VAR approach is largely justified by the fact that it allows the documentation of the dynamic impact of shocks on key macroeconomic variables in a framework that simultaneously considers shocks emanating from the global environment (world interest rate, terms of trade, common monetary shock) and those of domestic origin (supply shocks, fiscal and monetary policy, etc.). Despite this empirical interest, the theory for panel VARs is somewhat underdeveloped. The aim of the thesis is to shed more light on the possible applications of the Bayesian framework in discussing estimation, inference, and forecasting using multivariate dynamic models where, besides the time series dimension, we can also use the information contained in the cross-sectional dimension. The Bayesian point of view provides a natural environment for the models discussed in this work, due to its flexibility in combining different sources of information. Moreover, it has been recently shown that Bayes estimates of hierarchical dynamic panel data models have a reduced small-sample bias and help in improving the forecasting performance of these models.
20

Peñarrocha, Alós Ignacio. "Sensores virtuales para procesos con medidas escasas y retardos temporales". Doctoral thesis, Universitat Politècnica de València, 2008. http://hdl.handle.net/10251/3882.

Abstract:
This work addresses the problem of controlling a process whose output is sampled irregularly. To this end, we propose using a predictor that estimates the process outputs at regular time instants, plus a conventional controller that computes the control action from the predictor's estimates (a technique known as inferential control). The prediction consists of estimating the output variables to be controlled from the measurements taken by various sensors, using a mathematical model of the process. The Kalman filter performs the prediction optimally if the disturbances have a zero-mean Gaussian distribution, but has the drawback of a high computational cost when several sensors with time-varying delays are used. This work proposes an alternative, computationally cheap prediction strategy whose design is based on knowledge of the availability of measurements, of the delays (in the process, the measurement system, or the data transmission system), and of the nature of the disturbances. The proposed predictors minimize the prediction error under random sampling with time-varying delays, disturbances, measurement noise, modeling error, delays in the control action, and uncertainty in the measurement times. The different design strategies proposed are classified according to the type of information available about the disturbances and the computational cost required. Designs are formulated for single-variable, multivariable, linear and nonlinear systems. A more efficient way of including scarce, delayed measurements in the Kalman filter has also been developed, with the aim of reducing the computational cost of the prediction. This work shows that inferential control systems using the proposed predictors satisfy the separation principle.
Peñarrocha Alós, I. (2006). Sensores virtuales para procesos con medidas escasas y retardos temporales [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/3882
21

Ráček, Tomáš. "Rychlé číslicové filtry pro signály EKG". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-219241.

Abstract:
The thesis describes implementations of various types of filters for removing interference that often degrades the ECG signal, in particular fluctuations of the zero isoline (baseline wander) and power-line interference. The filters are based on the principle of Lynn's linear filters, and each is designed in both a recursive and a non-recursive implementation. A time-varying linear Lynn filter for removing drift of the signal's zero isoline is then described. The thesis also includes filters whose response computation time is minimized by a sampling-rate conversion method, for both interference types. It concludes with an experimental study of the filter implementations on ECG signals with artificial and real interference.
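
The baseline-wander removal principle can be illustrated with the non-recursive moving-average form of Lynn's low-pass: estimate the slow zero-isoline drift and subtract it from the signal. A minimal Python sketch (the window length and naming are illustrative assumptions, not the thesis's design):

```python
import numpy as np

def remove_baseline(ecg, fs, window_s=0.6):
    """Subtract a moving-average estimate of the drifting baseline.

    A moving average of width ~window_s passes only the slow zero-isoline
    drift; subtracting it from the signal acts as the complementary
    high-pass (the non-recursive form of Lynn's low-pass filter).
    """
    n = int(fs * window_s) | 1                   # odd window keeps zero phase with 'same'
    baseline = np.convolve(ecg, np.ones(n) / n, mode="same")
    return ecg - baseline
```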
22

Fiter, Christophe. "Contribution à la commande robuste des systèmes à échantillonnage variable ou contrôlé". Phd thesis, Ecole Centrale de Lille, 2012. http://tel.archives-ouvertes.fr/tel-00773127.

Abstract:
This thesis is dedicated to the stability analysis of systems with time-varying sampling intervals and to the dynamic control of sampling. The objective is to design sampling laws that reduce the update frequency of the state-feedback control while guaranteeing the stability of the system. First, an overview of recent challenges and research directions in sampled-data systems is presented. Then, a new dynamic sampling control approach, "state-dependent sampling", is proposed. It allows the offline design of a maximal state-dependent sampling map defined over conic regions of the state space, using LMIs. Several classes of systems are studied. First, the ideal LTI case is considered. The sampling function is built by means of convex polytopes and Lyapunov-Razumikhin exponential stability conditions. Then, robustness with respect to perturbations is included. Several applications are proposed: robust stability analysis with respect to variations of the sampling interval, event-triggered and self-triggered control, and state-dependent sampling. Finally, the case of a perturbed LTI system with delay is treated. The construction of the sampling function is based on L2 stability conditions and on a new type of Lyapunov-Krasovskii functionals with state-dependent matrices. Lastly, the stabilization problem is addressed, with a new controller whose gains switch according to the state of the system. A controller/sampling-function co-design is then proposed.
23

Stevenson, Clint W. "A Logistic Regression Analysis of Utah Colleges Exit Poll Response Rates Using SAS Software". BYU ScholarsArchive, 2006. https://scholarsarchive.byu.edu/etd/1116.

Abstract:
In this study I examine voter response at an interview level using a dataset of 7562 voter contacts (including responses and nonresponses) in the 2004 Utah Colleges Exit Poll. In 2004, 4908 of the 7562 voters approached responded to the exit poll for an overall response rate of 65 percent. Logistic regression is used to estimate factors that contribute to a success or failure of each interview attempt. This logistic regression model uses interviewer characteristics, voter characteristics (both respondents and nonrespondents), and exogenous factors as independent variables. Voter characteristics such as race, gender, and age are strongly associated with response. An interviewer's prior retail sales experience is associated with whether a voter will decide to respond to a questionnaire or not. The only exogenous factor that is associated with voter response is whether the interview occurred in the morning or afternoon.
24

Dubourg, Vincent. "Méta-modèles adaptatifs pour l'analyse de fiabilité et l'optimisation sous contrainte fiabiliste". Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2011. http://tel.archives-ouvertes.fr/tel-00697026.

Abstract:
This thesis is a contribution to solving the reliability-based design optimization problem. This probabilistic design approach aims at taking into account the uncertainties inherent to the system to be designed, in order to propose optimal and safe solutions. The safety level is quantified by a probability of failure. The optimization problem then consists in ensuring that this probability remains below a threshold set by the stakeholders. Solving this problem requires a large number of calls to the limit-state function characterizing the underlying reliability problem. The methodology therefore becomes complex to apply when the design relies on a numerical model that is expensive to evaluate (e.g., a finite element model). In this context, this manuscript proposes a strategy based on the adaptive substitution of the limit-state function by a Kriging meta-model. Particular effort is devoted to quantifying, reducing, and finally eliminating the error incurred by using this meta-model in place of the original model. The proposed methodology is applied to the design of geometrically imperfect shells subject to buckling.
25

Pérez, Forero Fernando José. "Essays in structural macroeconometrics". Doctoral thesis, Universitat Pompeu Fabra, 2013. http://hdl.handle.net/10803/119323.

Abstract:
This thesis is concerned with the structural estimation of macroeconomic models via Bayesian methods and the economic implications derived from its empirical output. The first chapter provides a general method for estimating structural VAR models. The second chapter applies the method previously developed and provides a measure of the monetary stance of the Federal Reserve for the last forty years, using a pool of instruments and taking into account recent practices known as Unconventional Monetary Policies. It then shows how the monetary transmission mechanism has changed over time, focusing attention on the period after the Great Recession. The third chapter develops a model of exchange rate determination with dispersed information and regime switches, with the purpose of fitting the observed disagreement in Japanese survey data. The model does a good job of fitting the observed data.
26

Chen, Yen-An, and 陳彥安. "Importance Sampling for Estimating High Dimensional Joint Default Probabilities". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/hfqq5e.

Abstract:
Master's thesis, National Taiwan University, Institute of Applied Mathematical Sciences, academic year 105 (ROC calendar).
We discuss simulation methods for estimating the probabilities of rare events, especially "joint default" probabilities under different models. The methods we provide are based on importance sampling, a variance reduction technique used to increase the number of samples reaching the "rare" region. There are two parts in this thesis. In the first part, we provide several importance sampling schemes, whose "asymptotically optimal" (or efficient) property is proved by applying large deviation theory. The results also apply to the multi-dimensional setting, where most of the probabilities of interest do not have an explicit expression. In the second part, we apply our idea to a more complicated model. It is shown that we obtain satisfying results while keeping the computation efficient.
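
The exponential-tilting flavor of importance sampling that underlies such schemes is easy to demonstrate in one dimension: to estimate P(X > a) for standard normal X, sample from N(a, 1) and reweight each draw by the likelihood ratio. A minimal Python sketch (illustrative of the generic technique, not the thesis's multi-dimensional estimators):

```python
import numpy as np

rng = np.random.default_rng(3)
a, n = 4.0, 100_000                   # rare event {X > 4} for X ~ N(0, 1)
mu = a                                # exponential tilt: shift the mean to the boundary

x = rng.normal(mu, 1.0, n)            # draw from the tilted proposal N(mu, 1)
w = np.exp(0.5 * mu ** 2 - mu * x)    # likelihood ratio phi(x) / phi_mu(x)
est = np.mean((x > a) * w)
print(est)                            # close to 3.17e-5; naive MC would need ~1e7 draws
```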
27

Sharma, Neeraj Kumar. "Information-rich Sampling of Time-varying Signals". Thesis, 2018. https://etd.iisc.ac.in/handle/2005/4126.

Abstract:
Confrontation with signal non-stationarity is the rule rather than the exception in the analysis of natural signals, such as speech, animal vocalization, music, bio-medical, atmospheric, and seismic signals. Interestingly, our auditory system analyzes signal non-stationarity to trigger our perception. It does this with a performance that is unparalleled compared to any man-made sound analyzer. Non-stationary signal analysis is a fairly challenging problem in the expanse of signal processing. Conventional approaches to analyzing non-stationary signals are based on short-time quasi-stationary assumptions. Typically, short-time signal segments are analyzed using one of several transforms, such as Fourier, chirplets, and wavelets, with a predefined basis. However, the quasi-stationary assumption is known to be a serious limitation in recognizing fine temporal and spectral variations in natural signals. An accurate analysis of embedded variations can provide a more insightful understanding of natural signals. Motivated by the sensory mechanisms associated with the peripheral auditory system, this thesis proposes an alternate approach to analyzing non-stationary signals. The approach builds on the intuition (and findings from the auditory neuroscience literature) that a sequence of zero-crossings (ZCs) of a sine wave provides its frequency information. Building on this, we hypothesize that sampling an arbitrary signal at some signal-specific time instants, instead of uniform Nyquist-rate sampling, can obtain a compact and informative dataset for representation of the signal. The information-richness of the dataset can be quantified by the accuracy with which the time-varying attributes of the signal can be characterized from the sample dataset. We systematically analyze this hypothesis for synthetic signals modeled by time-varying sinusoids and their additive mixtures. A restricted but rich class of non-stationary signals can be modeled using time-varying sinusoids, characterized by their instantaneous-amplitude (IA) and instantaneous-frequency (IF) variations. It is shown that using ZCs of the signal and its higher-order derivatives, referred to as higher-order ZCs (HoZCs), we can obtain an accurate estimate of the IA and IF variations of the sinusoids contained in the signal. The estimation is verified on synthetic signals and natural signal recordings of vocals and birdsong. We compare the approach with empirical mode decomposition, a popular technique for non-stationary signal analysis, and show that the proposed approach has both improved precision and resolution. Building on the above finding on the information-richness of HoZC instants, we evaluate signal reconstruction using this dataset. The sampling density of this dataset is time-varying in a manner that adapts to the temporally evolving spectral content of the signal. Reconstruction is evaluated for speech and audio signals. It is found that for the same number of captured samples, HoZCs corresponding to the first derivative of the signal (extrema samples) provide maximum information compared to other derivatives. This is found to be true even in comparison with signals reconstructed from an equal number of randomly sampled measurements. Based on these ideas we develop an analysis-modification-synthesis technique for a purely non-stationary modeling of speech signals, unlike the existing quasi-stationary analysis techniques. We propose to model the time-varying quasi-harmonic nature of speech signals. The proposed technique is not constrained by signal duration, which helps to avoid blocking artifacts, and at the same time provides fine temporal resolution of the time-varying attributes. The objective and subjective evaluations show that the technique has better naturalness post-modification. It allows controlled modification of speech signals and can synthesize novel speech stimuli for probing perception.
28

Wang, Haiou. "Logic sampling, likelihood weighting and AIS-BN : an exploration of importance sampling". Thesis, 2001. http://hdl.handle.net/1957/28769.

Abstract:
Logic Sampling, Likelihood Weighting and AIS-BN are three variants of stochastic sampling, one class of approximate inference for Bayesian networks. We summarize the ideas underlying each algorithm and the relationships among them. The results from a set of empirical experiments comparing Logic Sampling, Likelihood Weighting and AIS-BN are presented. We also test the impact of each of the proposed heuristics and the learning method, separately and in combination, in order to take a deeper look into AIS-BN and see how the heuristics and learning method contribute to the power of the algorithm. Key words: belief network, probability inference, Logic Sampling, Likelihood Weighting, Importance Sampling, Adaptive Importance Sampling Algorithm for Evidential Reasoning in Large Bayesian Networks (AIS-BN), Mean Percentage Error (MPE), Mean Square Error (MSE), Convergence Rate, heuristic, learning method.
Graduation date: 2002
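
For readers unfamiliar with likelihood weighting, the algorithm samples only the non-evidence nodes and multiplies each sample's weight by the probability of the observed evidence. A self-contained Python sketch on the classic sprinkler network (the network and its CPTs are a textbook example, not data from the thesis):

```python
import random

# P(C), P(S|C), P(R|C), P(W|S,R) for the classic sprinkler network
P_C = 0.5
P_S = {True: 0.1, False: 0.5}
P_R = {True: 0.8, False: 0.2}
P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.0}

def likelihood_weighting(n=100_000, seed=4):
    random.seed(seed)
    num = den = 0.0
    for _ in range(n):
        c = random.random() < P_C          # sample non-evidence nodes from their CPTs
        s = random.random() < P_S[c]
        r = random.random() < P_R[c]
        w = P_W[(s, r)]                    # evidence WetGrass=True is not sampled:
        num += w * r                       # it contributes its probability as a weight
        den += w
    return num / den                       # estimate of P(Rain | WetGrass = True)

print(likelihood_weighting())              # exact value is about 0.708
```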
29

"Imprecise Prior for Imprecise Inference on Poisson Sampling Model". Thesis, 2014. http://hdl.handle.net/10388/ETD-2014-04-1495.

Abstract:
Prevalence is a valuable epidemiological measure about the burden of disease in a community for planning health services; however, true prevalence is typically underestimated and there exists no reliable method of confirming the estimate of this prevalence in question. This thesis studies imprecise priors for the development of a statistical reasoning framework regarding this epidemiological decision making problem. The concept of imprecise probabilities introduced by Walley (1991) is adopted for the construction of this inferential framework in order to model prior ignorance and quantify the degree of imprecision associated with the inferential process. The study is restricted to the standard and zero-truncated Poisson sampling models that give an exponential family with a canonical log-link function because of the mechanism involved with the estimation of population size. A three-parameter exponential family of posteriors which includes the normal and log-gamma as limiting cases is introduced by applying normal priors on the canonical parameter of the Poisson sampling models. The canonical parameters simplify dealing with families of priors as Bayesian updating corresponds to a translation of the family in the canonical hyperparameter space. The canonical link function creates a linear relationship between regression coefficients of explanatory variables and the canonical parameters of the sampling distribution. Thus, normal priors on the regression coefficients induce normal priors on the canonical parameters leading to a higher-dimensional exponential family of posteriors whose limiting cases are again normal or log-gamma. All of these implementations are synthesized to build the ipeglim package (Lee, 2013) that provides a convenient method for characterizing imprecise probabilities and visualizing their translation, soft-linearity, and focusing behaviours. A characterization strategy for imprecise priors is introduced for instances when there exists a state of complete ignorance. The learning process of an individual intentional unit, the agreement process between several intentional units, and situations concerning prior-data conflict are graphically illustrated. Finally, the methodology is applied for re-analyzing the data collected from the epidemiological disease surveillance of three specific cases – Cholera epidemic (Dahiya, 1973), Down’s syndrome (Zelterman, 1988), and the female users of methamphetamine and heroin (Böhning, 2009).
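
The role of the zero-truncated Poisson model in this setting can be made concrete: with only the nonzero counts observed, the truncated-mean equation is solved for lambda, and the unseen zero class is then imputed. A minimal Python sketch under those standard assumptions (a generic estimator, not the thesis's imprecise-prior machinery; it requires the sample mean to exceed 1):

```python
import numpy as np
from scipy.optimize import brentq

def fit_zt_poisson(counts):
    """MLE of lambda for a zero-truncated Poisson, then a population-size estimate.

    counts: observed (necessarily positive) counts from n identified cases.
    The ZT-Poisson mean is lambda / (1 - exp(-lambda)); solve it against the
    sample mean, then estimate the unseen zeros via n / (1 - exp(-lambda)).
    """
    xbar = np.mean(counts)                       # must exceed 1 for a root to exist
    lam = brentq(lambda l: l / (1 - np.exp(-l)) - xbar, 1e-8, 50.0)
    n = len(counts)
    N_hat = n / (1 - np.exp(-lam))               # correction for the unobserved zero class
    return lam, N_hat
```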
30

Chen, Jen-Hao, and 陳人豪. "Panoramic Mosaicing with Iterative Registration and Varying Sampling Rate". Thesis, 2001. http://ndltd.ncl.edu.tw/handle/88771161014257576984.

Abstract:
Master's thesis, National Chiao Tung University, Department of Computer Science and Information Engineering, academic year 89 (ROC calendar).
In the information age, the interplay of multimedia and the Internet is becoming more and more important. Panoramic image mosaicing is a fundamental technique in digital image-based rendering applications. This thesis presents an approach to the seamless stitching of a sequence of overlapping images taken with a hand-held panning camera whose lens center remains relatively still but whose focal length varies. We apply a three-phase projective transformation process to register the sequence of images in a common coordinate system, and we perform intensity blending to remove possible uneven illumination across the different images. The three-phase registration process iteratively eliminates ghost or gap phenomena in the final mosaicing output. Finally, we check the value of the camera focal length to decide the sampling rate for warping the image projection rays onto a cylinder or a sphere. In this way, we can produce a good image mosaicing result with a reasonable amount of memory.
31

Yeh, Mei-Yin, and 葉美銀. "An Application of Pseudo Likelihood Ratio Using Varying Probability Sampling". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/21544400131264525564.

Abstract:
Master's thesis, Tamkang University, Department of Mathematics, academic year 92 (ROC calendar).
In survey sampling, the sample mean is commonly used to construct confidence intervals for the finite population mean. However, when the finite population contains a large proportion of zeroes, the normal approximation may have a very poor coverage rate even when the sample size is very large. Kvanli, Shen and Deng (1998) proposed a parametric likelihood approach to construct confidence intervals and found that the new intervals have more precise coverage. If the finite population does not fit the assumed parametric model, their method may not work as nicely, so Chen, Chen and Rao (2003) used a non-parametric method to develop empirical likelihood ratio intervals. However, when the finite population contains a large proportion of zeroes, we may select all zeroes in one sample. To avoid such situations, we find auxiliary information that has a correlation coefficient ρ with the target population. We first sort this auxiliary information from smallest to largest and divide it equally into four groups, then draw a sample according to the ratios 10%, 20%, 30%, 40%. We use two different weights to calculate the likelihood functions and develop pseudo parametric and empirical likelihood ratio intervals. We discuss their relationship with respect to the correlation coefficient ρ and the nonzero proportion p, and also analyze their average lower and upper bounds and coverage rates.
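
The sampling scheme described above is straightforward to emulate. A Python sketch with synthetic data (the design-weighted mean shown at the end is a generic Hajek-type estimator, not the thesis's pseudo likelihood ratio intervals):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 2000
aux = np.sort(rng.gamma(2.0, 1.0, N))             # auxiliary variable, sorted ascending
y = (rng.random(N) < 0.15) * aux * 3.0            # target with a large proportion of zeros

groups = np.array_split(np.arange(N), 4)          # four equal groups by auxiliary value
rates = [0.10, 0.20, 0.30, 0.40]                  # the 10/20/30/40% sampling ratios

idx, w = [], []
for g, f in zip(groups, rates):
    take = rng.choice(g, size=int(f * len(g)), replace=False)
    idx.extend(take)
    w.extend([1.0 / f] * len(take))               # design weight = inverse sampling rate

idx, w = np.array(idx), np.array(w)
mean_hat = np.sum(w * y[idx]) / np.sum(w)         # weighted (Hajek-type) mean estimate
print(mean_hat, y.mean())
```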
32

Chang, Mei-Yun, and 張美雲. "A Robust Varying Sampling Rate Observer Design Based On mu-Synthesis". Thesis, 2002. http://ndltd.ncl.edu.tw/handle/85129729873210139961.

Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Mechanical Engineering, academic year 90 (ROC calendar).
Systems with a variable sampling rate are widely used in industry, for example in brushless DC motors and CD-ROM drives. State information (position and velocity) is embedded along the trajectory and obtained through sensors (Hall sensors or encoders). In this case a measurement is available only when the system comes across a sensor mark. The fact that the sampling period is unknown makes it hard to draw conclusions about the system behavior. In this note, we separate out the system uncertainty based on mu-synthesis and then design an observer with robust stability and robust performance, so that the error between the observed speed and the real speed meets our requirements.
33

Cumpston, John Richard. "New techniques for household microsimulation, and their application to Australia". Phd thesis, 2011. http://hdl.handle.net/1885/9046.

Abstract:
Household microsimulation models are sometimes used by national governments to make long-term projections of proposed policy changes. They are costly to develop and maintain, and sometimes have short lifetimes. Most present national models have limited interactions between agents, few regions and long simulation cycles. Some models are very slow to run. Overcoming these limitations may open up a much wider range of government, business and individual uses. This thesis suggests techniques to help make multi-purpose dynamic microsimulations of households, with fine spatial resolutions, high sampling densities and short simulation cycles. Techniques suggested are:

* simulation by sampling with loaded probabilities
* proportional event alignment
* event alignment using random sampling
* immediate matching by probability-weighting
* immediate 'best of n' matching.

All of these techniques are tested in artificial situations. Three of them - sampling with loaded probabilities, alignment using random sampling and best of n matching - are successfully tested in the Cumpston model, a household microsimulation model developed for this thesis. Sampling with loaded probabilities is found to give almost identical results to the traditional all-case sampling, but be quicker. The suggested alignment and matching techniques are shown to give less distortion and generally lower runtimes than some techniques currently in use. The Cumpston model is based on a 1% sample from the 2001 Australian census. Individuals, families, households and dwellings are included. Immigration and emigration are separately simulated, together with internal migration between 57 statistical divisions. Transitions between 8 person types are simulated, and between 9 occupations. The model projects education, employment, earnings and retirement savings for each individual, and dwelling values, rents and housing loans for each household. The onset and development of diseases for each individual are simulated. Validation of the model was based on methods used by the Orcutt, CORSIM, DYNACAN and APPSIM models. Iterative methods for model calibration are described, together with a statistical test for creep in multiple runs. The model takes about 85 seconds to make projections for 50 years with yearly simulation cycles. After standardizing for sample size and projection years, this is a little slower than the fastest national models currently operating. A planned extension of the model is to 2.2 million persons over 2,214 areas, synthesized from 2011 census tabulations. Using multithreading where feasible, a 50-year projection may take about 10 minutes.
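
Of the techniques listed above, immediate "best of n" matching is the easiest to sketch from its description alone: draw n random candidates and keep the best-scoring one, trading a little match quality for O(n) cost per agent instead of a full population scan. A generic Python illustration (not Cumpston's code; the score function and data layout are placeholders):

```python
import random

def best_of_n_match(seeker, candidates, score, n=10, rng=random):
    """Pick a partner as the best of n randomly drawn candidates.

    score(seeker, candidate) returns a compatibility value (higher is better).
    """
    pool = rng.sample(candidates, min(n, len(candidates)))
    return max(pool, key=lambda c: score(seeker, c))

# example: match on age proximity
people = [{"id": i, "age": random.randint(18, 80)} for i in range(1000)]
partner = best_of_n_match(people[0], people[1:],
                          score=lambda a, b: -abs(a["age"] - b["age"]))
```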
34

McNamee, Jeff. "Accuracy of momentary time sampling: a comparison of varying interval lengths using SOFIT". 2004. http://www.oregonpdf.org.

Abstract:
Thesis (Ph. D.)--Oregon State University, 2004.
Includes bibliographical references (leaves 41-49). Also available online (PDF file) by a subscription to the set or by purchasing the individual file.
35

McNamee, Jeff B. "Accuracy of momentary time sampling : a comparison of varying interval lengths using SOFIT". Thesis, 2003. http://hdl.handle.net/1957/30142.

Abstract:
The U.S. Department of Health and Human Services has made the promotion of regular physical activity a national health objective, and experts believe that physical education can play a significant role in the promotion of physical activity. Feasible measurement tools for physical educators to assess physical activity behavior are lacking. One validated instrument is the System for Observing Fitness Instruction Time (SOFIT; McKenzie, Sallis & Nader, 1991). SOFIT's physical activity data are collected using momentary time sampling (MTS) with a 20-second interval length and provide estimates of Moderate to Vigorous Physical Activity (MVPA). Whether variations in interval lengths would adversely affect the accuracy of the MVPA data has not been investigated. From a clinical perspective, if physical education teachers are to utilize MTS procedures for on-going assessment they will require longer time intervals to collect accurate MVPA data. Therefore, this project sought to determine the accuracy of MVPA levels collected through varying observation tactics (i.e., 20s, 60s, 90s, 120s, 180s, and random) relative to those collected through duration recording (DR). Video records of 30 randomly selected elementary school physical education classes were utilized for this study. Utilizing modified physical activity codes from SOFIT, the researchers collected MTS data regarding students' MVPA at varying interval lengths (i.e., 20s, 60s, 90s, 120s, 180s, and random). Three statistical techniques, Pearson product-moment (PPM) correlation coefficients, Repeated Measures Analysis of Variance (RM ANOVA), and Average Error (AE), were utilized to demonstrate concurrent validity of the varying interval lengths. Results demonstrated moderate-low to high correlations between the 20s, 60s, 90s, and random interval lengths and the DR tactic during the total class. The RM ANOVA indicated similarity between all the varying interval lengths and the DR tactic for total class observation. The MTS procedure that created the least amount of AE across classes was the 20s variable, followed by the 60s, random, and 90s variables. These findings build empirical evidence for the use of 60s, random, and 90s MTS procedures for the purpose of MVPA assessment by physical educators.
Graduation date: 2004
36

Chou, Wan-Lin, and 周宛霖. "A Study on Negative Policy Rates and Exchange Rates: Application of Markov Regime Switching Model with Time Varying Transition Probabilities". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/242h93.

Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Economics, academic year 106 (ROC calendar).
This study focuses on the impact of negative policy rates on exchange rates, using data from six economies: Denmark, Sweden, Switzerland, Japan, the euro area, and the United States. The study is based on a Taylor-rule model and a Markov regime switching model with time-varying transition probabilities (MS-TVTP), implemented in MATLAB, to analyze the influence of the negative interest rate policy on the exchange rates of Denmark, Sweden, Switzerland and Japan. The results show that the negative policy rate is associated with depreciation of the currency in Denmark and Sweden, while Switzerland and Japan do not exhibit the same effect; their results instead show appreciation. In addition, the study shows that the MS-TVTP model has good ability to capture depreciation and appreciation phases: when rates go negative, substantial increases or decreases in the exchange rate can be found, as well as significant regime switching in the currency's movements, so the model can be used to estimate the impact on exchange rate changes.
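
The MS-TVTP machinery referred to above amounts to a Hamilton filter in which the transition matrix is rebuilt each period from covariates through a logit link. A compact Python sketch for a two-regime switching-mean model (a generic formulation with a hypothetical parameter layout, not the thesis's exact specification):

```python
import numpy as np

def hamilton_filter_tvtp(y, z, mu, sigma, beta):
    """Filtered regime probabilities for a 2-state Markov-switching mean model
    whose transition probabilities vary with a covariate z (logit link).

    p11_t = logistic(beta[0] + beta[1]*z[t]) : P(stay in regime 1)
    p22_t = logistic(beta[2] + beta[3]*z[t]) : P(stay in regime 2)
    mu, sigma: length-2 arrays of regime means and standard deviations.
    """
    logistic = lambda a: 1.0 / (1.0 + np.exp(-a))
    xi = np.array([0.5, 0.5])                         # initial regime probabilities
    out = np.empty((len(y), 2))
    for t in range(len(y)):
        p11 = logistic(beta[0] + beta[1] * z[t])
        p22 = logistic(beta[2] + beta[3] * z[t])
        P = np.array([[p11, 1 - p11], [1 - p22, p22]])  # rows: from-state
        pred = P.T @ xi                                 # one-step-ahead regime probs
        dens = np.exp(-0.5 * ((y[t] - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
        xi = pred * dens
        xi /= xi.sum()                                  # Bayes update given y[t]
        out[t] = xi
    return out
```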
37

Ben Issaid, Chaouki. "Efficient Monte Carlo Simulations for the Estimation of Rare Events Probabilities in Wireless Communication Systems". Diss., 2019. http://hdl.handle.net/10754/660001.

Abstract:
Simulation methods are used when closed-form solutions do not exist. An interesting simulation method that has been widely used in many scientific fields is the Monte Carlo method. Not only is it a simple technique that enables estimation of the quantity of interest, but it can also provide relevant information about the value to be estimated through its confidence interval. However, the classical Monte Carlo method is not a reasonable choice when dealing with rare-event probabilities: very small probabilities require a huge number of simulation runs, so the computational time of the simulation increases significantly. This observation is the main motivation of the present work. In this thesis, we propose efficient importance sampling estimators to evaluate rare-event probabilities. In the first part of the thesis, we consider a variety of turbulence regimes, and we study the outage probability of free-space optics communication systems under a generalized pointing error model with both a nonzero boresight component and different horizontal and vertical jitter effects. More specifically, we use an importance sampling approach, based on the exponential twisting technique, to offer fast and accurate results. We also show that our approach extends to the multihop scenario. In the second part of the thesis, we are interested in assessing the outage probability achieved by some diversity techniques over generalized fading channels. In many circumstances, this is related to the difficult question of analyzing the statistics of sums of random variables. More specifically, we propose robust importance sampling schemes that efficiently evaluate the outage probability of diversity receivers over Gamma-Gamma, α-µ, κ-µ, and η-µ fading channels. The proposed estimators satisfy the well-known bounded relative error criterion for both maximum ratio combining and equal gain combining. We show the accuracy and efficiency of our approach compared to naive Monte Carlo via selected numerical simulations in both case studies. In the last part of this thesis, we propose efficient importance sampling estimators for the left tail of positive Gaussian quadratic forms in both real and complex settings. We show that these estimators possess the bounded relative error property. These estimators are then used to estimate the outage probability of maximum ratio combining diversity receivers over correlated Nakagami-m or correlated Rician fading channels.
38

Kassanjee, Reshma. "Characterisation and application of tests for recent infection for HIV incidence surveillance". Thesis, 2015. http://hdl.handle.net/10539/16837.

Abstract:
A thesis submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Doctor of Philosophy. 21 October, 2014.
Three decades ago, the discovery of the Human Immunodeficiency Virus (HIV) was announced. The subsequent HIV pandemic has continued to devastate the global community, and many countries have set ambitious HIV reduction targets over the years. Reliable methods for measuring incidence, the rate of new infections, are essential for monitoring the virus, allocating resources, and assessing interventions. The estimation of incidence from single cross-sectional surveys using tests that distinguish between ‘recent’ and ‘non-recent’ infection has therefore attracted much interest. The approach provides a promising alternative to traditional estimation methods which often require more complex survey designs, rely on poorly known inputs, and are prone to bias. More specifically, the prevalence of HIV and ‘recent’ HIV infection, as measured in a survey, are used together with relevant test properties to infer incidence. However, there has been a lack of methodological consensus in the field, caused by limited applicability of proposed estimators, inconsistent test characterisation (or estimation of test properties) and uncertain test performance. This work aims to address these key obstacles. A general theoretical framework for incidence estimation is developed, relaxing unrealistic assumptions used in earlier estimators. Completely general definitions of the required test properties emerge from the analysis. The characterisation of tests is then explored: a new approach, that utilises specimens from subjects observed only once after infection, is demonstrated; and currently-used approaches, that require that subjects are followed-up over time after infection, are systematically benchmarked. The first independent and consistent characterisation of multiple candidate tests is presented, and was performed on behalf of the Consortium for the Evaluation and Performance of HIV Incidence Assays (CEPHIA), which was established to provide guidance and foster consensus in the field. Finally, the precision of the incidence estimator is presented as an appropriate metric for evaluating, optimising and comparing tests, and the framework serves to counter existing misconceptions about test performance. The contributions together provide sound theoretical and methodological foundations for the application, characterisation and optimisation of recent infection tests for HIV incidence surveillance, allowing the focus to now shift towards practical application.
APA, Harvard, Vancouver, ISO, and other styles
39

Yang, Jiaxi. "Sequential Rerandomization in the Context of Small Samples". Thesis, 2021. https://doi.org/10.7916/d8-80h5-zk34.

Full text source
Abstract:
Rerandomization (Morgan & Rubin, 2012) is designed to eliminate covariate imbalance at the design stage of causal inference studies. By improving covariate balance, rerandomization provides more precise and trustworthy estimates (i.e., lower variance) of the average treatment effect (ATE). However, only a limited number of studies consider rerandomization strategies or discuss the covariate balance criteria that are checked before the rerandomization procedure is run. In addition, researchers may find it more difficult to ensure covariate balance across groups when samples are small. Furthermore, researchers conducting experimental design studies in psychology and education may not be able to gather data from all subjects simultaneously: subjects may not arrive at the same time, and experiments can rarely wait until all subjects have been recruited. This motivates the following research questions: 1) How does the rerandomization procedure perform when the sample size is small? 2) Are there balancing criteria that work better than the Mahalanobis distance in the context of small samples? 3) How well does the balancing criterion work in a sequential rerandomization design? Based on the Early Childhood Longitudinal Study, Kindergarten Class, a Monte Carlo simulation study is presented for finding a better covariate balance criterion with respect to small samples. In this study, a neural network prediction model is used to impute the missing counterfactuals. Then, to ensure covariate balance in the context of small samples, the rerandomization procedure uses various criteria measuring covariate balance to find the criterion yielding the most precise estimate of the sample average treatment effect. Lastly, a relatively good covariate balance criterion is adapted to Zhou et al.'s (2018) sequential rerandomization procedure and its performance is examined. In this dissertation, we aim to identify the best covariate balance criterion for the rerandomization procedure, so as to determine the most appropriate randomized assignment with respect to small samples. Using Bayesian logistic regression with a Cauchy prior as the covariate balance criterion yields a 19% decrease in the root mean square error (RMSE) of the estimated sample average treatment effect compared to pure randomization. It also proves effective in sequential rerandomization, making a meaningful contribution to studies in psychology and education and further enhancing the power of hypothesis testing in randomized experimental designs.
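As a concrete reference point, a minimal version of Morgan and Rubin's rerandomization with the Mahalanobis-distance criterion looks like the sketch below. This is an illustration of the baseline procedure the dissertation compares against, with an assumed acceptance threshold; the dissertation's alternative criteria (such as the Bayesian logistic regression one) would replace the `balance` function:

```python
import numpy as np

def balance(X, treat_idx, ctrl_idx, S_inv):
    """Morgan & Rubin's Mahalanobis balance statistic between group means."""
    n_t, n_c = len(treat_idx), len(ctrl_idx)
    d = X[treat_idx].mean(axis=0) - X[ctrl_idx].mean(axis=0)
    return (n_t * n_c / (n_t + n_c)) * d @ S_inv @ d

def rerandomize(X, n_treat, threshold, rng, max_tries=100_000):
    """Redraw random assignments until the balance criterion is satisfied."""
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    for _ in range(max_tries):
        perm = rng.permutation(len(X))
        t, c = perm[:n_treat], perm[n_treat:]
        if balance(X, t, c, S_inv) < threshold:
            return t, c
    raise RuntimeError("no acceptable assignment within max_tries")

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3))      # 30 subjects, 3 covariates: a small sample
treat, ctrl = rerandomize(X, n_treat=15, threshold=2.0, rng=rng)
```

The threshold trades off balance against the number of redraws; the dissertation's question is which balance statistic to accept on when, as here, there are only a few dozen subjects.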
APA, Harvard, Vancouver, ISO, and other styles
40

Robinson, Joshua Westly. "Modeling Time-Varying Networks with Applications to Neural Flow and Genetic Regulation". Diss., 2010. http://hdl.handle.net/10161/3109.

Full text source
Abstract:

Many biological processes are effectively modeled as networks, but a frequent assumption is that these networks do not change during data collection. However, that assumption does not hold for many phenomena, such as neural growth during learning or changes in genetic regulation during cell differentiation. Approaches are needed that explicitly model networks as they change in time and that characterize the nature of those changes.

In this work, we develop a new class of graphical models in which the conditional dependence structure of the underlying data-generation process is permitted to change over time. We first present the model, explain how to derive it from Bayesian networks, and develop an efficient MCMC sampling algorithm that easily generalizes under varying levels of uncertainty about the data generation process. We then characterize the nature of evolving networks in several biological datasets.

We initially focus on learning how neural information flow networks change in songbirds with implanted electrodes. We characterize how they change in response to different sound stimuli and during the process of habituation. We continue to explore the neurobiology of songbirds by identifying changes in neural information flow in another habituation experiment using fMRI data. Finally, we briefly examine evolving genetic regulatory networks involved in Drosophila muscle differentiation during development.

We conclude by suggesting new experimental directions and statistical extensions to the model for predicting novel neural flow results.


APA, Harvard, Vancouver, ISO, and other styles
41

"Exploring the Impact of Varying Levels of Augmented Reality to Teach Probability and Sampling with a Mobile Device". Doctoral diss., 2013. http://hdl.handle.net/2286/R.I.20874.

Full text source
Abstract:
Statistics is taught at every level of education, yet teachers often have to assume their students have no knowledge of statistics and start from scratch each time they set out to teach it. The motivation for this experimental study comes from an interest in exploring educational applications of augmented reality (AR) delivered via mobile technology, which could potentially provide rich, contextualized learning for understanding concepts related to statistics education. This study examined the effects of AR experiences on learning basic statistical concepts. Using a 3 x 2 research design, this study compared the learning gains of 252 undergraduate and graduate students on a pre- and posttest given before and after interacting with one of three types of augmented reality experience: a high AR experience (interacting with three-dimensional images coupled with movement through a physical space), a low AR experience (interacting with three-dimensional images without movement), or no AR experience (two-dimensional images without movement). Two levels of collaboration (pairs and no pairs) were also included. Additionally, student perceptions of collaboration opportunities and engagement were compared across the six treatment conditions. Other demographic information collected included the students' previous statistics experience, as well as their comfort level in using mobile devices. The moderating variables included prior knowledge (high, average, and low) as measured by the student's pretest score. Taking prior knowledge into account, students with low prior knowledge assigned to either the high or the low AR experience had statistically significantly higher learning gains than those assigned to the no AR experience. On the other hand, the results showed no statistically significant difference between students assigned to work individually and those working in pairs. Students assigned to either the high or the low AR experience perceived a statistically significantly higher level of engagement than their no-AR counterparts. Students with low prior knowledge benefited the most from the high AR condition in learning gains. Overall, the AR application worked well for providing a hands-on experience with statistical data. Further research on AR and its relationship to spatial cognition, situated learning, higher-order skill development, performance support, and other classroom applications for learning is still needed.
Ph.D., Educational Technology, 2013
APA, Harvard, Vancouver, ISO, and other styles
42

Ghile, Yonas Beyene. "Development of a framework for an integrated time-varying agrohydrological forecast system for southern Africa". Thesis, 2007. http://hdl.handle.net/10413/352.

Full text source
Abstract:
Policy makers, water managers, farmers and many other sectors of society in southern Africa are confronting increasingly complex decisions as a result of the marked day-to-day, intra-seasonal and inter-annual variability of climate. Hence, forecasts of hydro-climatic variables with lead times of days to seasons ahead are becoming increasingly important to them in making more informed risk-based management decisions. With improved representations of atmospheric processes and advances in computer technology, a major improvement has been made by institutions such as the South African Weather Service, the University of Pretoria and the University of Cape Town in forecasting southern Africa’s weather at short lead times and its various climatic statistics for longer time ranges. In spite of these improvements, the operational utility of weather and climate forecasts, especially in agricultural and water management decision making, is still limited, mainly because of a lack of reliability in their accuracy and because they are not suited directly to the requirements of agrohydrological models with respect to spatial and temporal scales and formats. As a result, the need has arisen to develop a GIS-based framework in which the “translation” of weather and climate forecasts into more tangible agrohydrological forecasts, such as streamflows, reservoir levels or crop yields, is facilitated for enhanced economic, environmental and societal decision making over southern Africa in general, and in selected catchments in particular. This study focuses on the development of such a framework. As a precursor to describing and evaluating this framework, however, one important objective was to review the potential impacts of climate variability on water resources and agriculture, as well as to assess current approaches to managing climate variability and minimising risks from a hydrological perspective. With the aim of understanding the broad range of forecasting systems, the review was extended to the current state of hydro-climatic forecasting techniques and their potential applications in reducing vulnerability in the management of water resources and agricultural systems. This was followed by a brief review of some challenges and approaches to maximising benefits from these hydro-climatic forecasts. A GIS-based framework has been developed to serve as an aid in processing all the computations required to translate near-real-time rainfall fields estimated by remote sensing, as well as daily rainfall forecasts with a range of lead times provided by Numerical Weather Prediction (NWP) models, into daily quantitative values suitable for application with hydrological or crop models. Another major component of the framework was the development of two methodologies, viz. the Historical Sequence Method and the Ensemble Re-ordering Based Method, for the translation of a triplet of categorical monthly and seasonal rainfall forecasts (i.e. Above, Near and Below Normal) into daily quantitative values, as such a triplet of probabilities cannot be applied in its original published form to hydrological/crop models which operate on a daily time step; a sketch of this translation step is given after the abstract. The outputs of various near-real-time observations, of weather and climate models, and of downscaling methodologies were evaluated against observations in the Mgeni catchment in KwaZulu-Natal, South Africa, both in terms of rainfall characteristics and in terms of streamflows simulated with the daily time step ACRU model.
A comparative study of rainfall derived from daily reporting raingauges, ground-based radars, satellites and merged fields indicated that the raingauge and merged rainfall fields gave relatively realistic results, and they may be used to simulate the “now state” of a catchment at the beginning of a forecast period. The performance of three NWP models, viz. the C-CAM, UM and NCEP-MRF, was found to vary from one event to another. However, the C-CAM model showed a general tendency to under-estimate, whereas the UM and NCEP-MRF models suffered from significant over-estimation of the summer rainfall over the Mgeni catchment. Ensembles of streamflows simulated with the ACRU model, using ensembles of rainfall derived from both the Historical Sequence Method and the Ensemble Re-ordering Based Method, showed reasonably good results for most of the selected months and seasons for which they were tested, indicating that the two methods of transforming categorical seasonal forecasts into ensembles of daily quantitative rainfall values are useful for various agrohydrological applications in South Africa and possibly elsewhere. The Ensemble Re-ordering Based Method was also found to be quite effective in generating the transitional probabilities of rain days and dry days, as well as the persistence of dry and wet spells within forecast cycles, all of which are important in the evaluation and forecasting of streamflows and crop yields, as well as droughts and floods. Finally, future areas of research which could facilitate the practical implementation of the framework were identified.
Thesis (Ph.D.)-University of KwaZulu-Natal, Pietermaritzburg, 2007.
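To make the translation step referenced in the abstract concrete, here is a minimal sketch of how a Historical Sequence-style method might map a categorical forecast triplet onto daily rainfall values. The tercile partition of years, the dictionary structure, and the direct reuse of one analog year's daily sequence are our assumptions for illustration, not the thesis's exact algorithm:

```python
import numpy as np

def categorical_to_daily(tercile_probs, years_by_tercile, daily_rain, rng, n_members=10):
    """Draw an ensemble of daily rainfall sequences from historical analog years,
    choosing a tercile (Above/Near/Below Normal) with the forecast probabilities.
    daily_rain maps a year to its observed daily rainfall for the target season."""
    members = []
    for _ in range(n_members):
        tercile = rng.choice(["above", "near", "below"], p=tercile_probs)
        year = rng.choice(years_by_tercile[tercile])
        members.append(daily_rain[year])
    return members

rng = np.random.default_rng(7)
years_by_tercile = {"above": [1996, 2000], "near": [1997, 1999], "below": [1992, 1995]}
all_years = sum(years_by_tercile.values(), [])
daily_rain = {y: rng.gamma(0.4, 8.0, size=90) for y in all_years}   # synthetic data
ensemble = categorical_to_daily([0.5, 0.3, 0.2], years_by_tercile, daily_rain, rng)
```

Because each ensemble member reuses a real daily sequence, properties such as wet/dry-day transitions and spell persistence are inherited from the historical record, which is the attraction of this family of methods noted in the abstract.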
APA, Harvard, Vancouver, ISO, and other styles
43

Hao, Yufang. "Generalizing sampling theory for time-varying Nyquist rates using self-adjoint extensions of symmetric operators with deficiency indices (1,1) in Hilbert spaces". Thesis, 2011. http://hdl.handle.net/10012/6311.

Full text source
Abstract:
Sampling theory studies the equivalence between continuous and discrete representations of information. This equivalence is ubiquitously used in communication engineering and signal processing. For example, it allows engineers to store continuous signals as discrete data on digital media. The classical sampling theorem, also known as the theorem of Whittaker-Shannon-Kotel'nikov, enables one to perfectly and stably reconstruct continuous signals with a constant bandwidth from their discrete samples taken at a constant Nyquist rate. The Nyquist rate depends on the bandwidth of the signals, namely, the frequency upper bound. Intuitively, a signal's 'information density' and 'effective bandwidth' should vary in time. Adjusting the sampling rate accordingly should improve the sampling efficiency and information storage. While this old idea has been pursued in numerous publications, fundamental problems have remained: How can a reliable concept of time-varying bandwidth be defined? How can samples taken at a time-varying Nyquist rate lead to perfect and stable reconstruction of the continuous signals? This thesis develops a new non-Fourier generalized sampling theory which takes samples only as often as necessary, at a time-varying Nyquist rate, while maintaining the ability to perfectly reconstruct the signals. The resulting Nyquist rate is the critical sampling rate below which there is insufficient information to reconstruct the signal and above which there is redundancy in the stored samples. It is also optimal for the stability of reconstruction. To this end, following work by A. Kempf, the sampling points at a Nyquist rate are identified as the eigenvalues of self-adjoint extensions of a simple symmetric operator with deficiency indices (1,1). The thesis then develops and, in a sense, completes this theory. In particular, the thesis introduces and studies filtering, and yields key results on the stability and optimality of this new method. While these new results should greatly help in making time-variable sampling methods applicable in practice, the thesis also presents a range of new purely mathematical results. For example, the thesis presents new results that show how to explicitly calculate the eigenvalues of the complete set of self-adjoint extensions of such a symmetric operator in the Hilbert space. This result is of interest in the field of functional analysis, where it advances von Neumann's theory of self-adjoint extensions.
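The constant-rate baseline that this thesis generalizes is the Whittaker-Shannon reconstruction formula x(t) = Σₙ x(n/f_s) sinc(f_s t − n). Below is a minimal numerical sketch of that uniform case; the test signal and rates are illustrative, and the thesis's contribution, replacing the uniform grid with operator-theoretically chosen time-varying sampling points, is not shown:

```python
import numpy as np

def sinc_reconstruct(samples, rate, t):
    """Whittaker-Shannon: x(t) = sum_n x[n] * sinc(rate * t - n),
    exact for signals bandlimited to rate / 2 (up to series truncation)."""
    n = np.arange(len(samples))
    return (samples[:, None] * np.sinc(rate * t[None, :] - n[:, None])).sum(axis=0)

rate = 8.0                                   # samples per second (Nyquist for 4 Hz)
n = np.arange(64)                            # 64 uniform samples over 8 seconds
x = lambda t: np.sin(2 * np.pi * 3.0 * t)    # 3 Hz tone, inside the 4 Hz band
t = np.linspace(1.0, 6.0, 200)               # interior points, away from the edges
err = np.abs(sinc_reconstruct(x(n / rate), rate, t) - x(t)).max()
print(f"max reconstruction error: {err:.2e}")
```

The residual error here comes purely from truncating the infinite sinc series; the time-varying theory replaces the fixed grid n/rate with non-uniform points while preserving this perfect-reconstruction property.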
APA, Harvard, Vancouver, ISO, and other styles
44

Windle, Jesse Bennett. "Forecasting high-dimensional, time-varying variance-covariance matrices with high-frequency data and sampling Pólya-Gamma random variates for posterior distributions derived from logistic likelihoods". 2013. http://hdl.handle.net/2152/21842.

Full text source
Abstract:
The first portion of this thesis develops efficient samplers for the Pólya-Gamma distribution, an essential component of the eponymous data augmentation technique that can be used to simulate posterior distributions derived from logistic likelihoods. Building fast computational schemes for such models is important due to their broad use across a range of disciplines, including economics, political science, epidemiology, ecology, psychology, and neuroscience. The second portion of this thesis explores models of time-varying covariance matrices for financial time series. Covariance matrices describe the dynamics of risk and the ability to forecast future variance and covariance has a direct impact on the investment decisions made by individuals, banks, funds, and governments. Two options are pursued. The first incorporates information from high-frequency statistics into factor stochastic volatility models while the second models high-frequency statistics directly. The performance of each is assessed based upon its ability to hedge risk within a class of similarly risky assets.
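For readers unfamiliar with the distribution, PG(b, c) is defined by an infinite convolution of gammas, PG(b, c) = (1/(2π²)) Σ_{k≥1} g_k / ((k − 1/2)² + c²/(4π²)) with g_k ~ Gamma(b, 1) (Polson, Scott & Windle, 2013). The sketch below simply truncates that series; it is a slow reference implementation, not one of the efficient exact samplers this thesis develops:

```python
import numpy as np

def polya_gamma_naive(b, c, rng, n_terms=200):
    """Truncated sum-of-gammas representation of PG(b, c); accuracy improves
    with n_terms. Useful only as a check against fast exact samplers."""
    k = np.arange(1, n_terms + 1)
    g = rng.gamma(shape=b, scale=1.0, size=n_terms)
    return np.sum(g / ((k - 0.5) ** 2 + (c / (2.0 * np.pi)) ** 2)) / (2.0 * np.pi ** 2)

# In logistic data augmentation, one draw omega_i ~ PG(1, x_i' beta) is needed
# per observation per MCMC iteration -- hence the premium on sampler speed.
rng = np.random.default_rng(3)
draws = np.array([polya_gamma_naive(1.0, 1.5, rng) for _ in range(10_000)])
print(draws.mean(), "vs exact mean", np.tanh(1.5 / 2) / (2 * 1.5))
```

The comparison uses the known mean E[PG(b, c)] = (b/2c) tanh(c/2); the gap between this naive O(n_terms) draw and a constant-time exact draw is what makes the thesis's samplers practically important.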
APA, Harvard, Vancouver, ISO, and other styles
45

Lefebvre, Isabelle. "Estimation simplifiée de la variance pour des plans complexes". Thèse, 2016. http://hdl.handle.net/1866/19112.

Full text source
Abstract:
In a complex design framework, standard variance estimation methods entail substantial challenges. Conventional variance estimators involve second-order inclusion probabilities, which can be difficult to compute for some sampling designs. Also, confidentiality standards generally prevent second-order inclusion probabilities from being included in external microdata files (variability information is instead often provided in the form of bootstrap weights). Based on Ohlsson's sequential Poisson sampling method (1998), we suggest a simplified estimator that requires only first-order inclusion probabilities. The idea is to approximate a survey strategy (which consists of a sampling design and an estimator) by an equivalent strategy under which a Poisson sampling design is used. We will discuss proportional-to-size sampling and proportional-to-size cluster sampling. Results of a simulation study will be presented.
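The appeal of the Poisson design here is that the variance of the Horvitz-Thompson total depends only on first-order inclusion probabilities. A minimal sketch of that estimator follows, as an illustration of the equivalent-Poisson-strategy idea as we read it, not the author's exact construction:

```python
import numpy as np

def ht_total_and_variance_poisson(y, pi):
    """Under Poisson sampling, V(Y_HT) = sum_U pi_k (1 - pi_k) (y_k / pi_k)^2,
    estimated unbiasedly from the sample s by sum_s (1 - pi_k) (y_k / pi_k)^2;
    no second-order inclusion probabilities are needed."""
    y, pi = np.asarray(y, float), np.asarray(pi, float)
    total = np.sum(y / pi)                       # Horvitz-Thompson estimate
    var = np.sum((1.0 - pi) * (y / pi) ** 2)     # first-order probabilities only
    return total, var

total, var = ht_total_and_variance_poisson([12.0, 40.0, 7.5], [0.2, 0.5, 0.1])
print(total, var)
```

Approximating the actual strategy by this Poisson one is what removes the need for the second-order probabilities that external microdata files typically withhold.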
APA, Harvard, Vancouver, ISO, and other styles
46

Vallée, Audrey-Anne. "Estimation de la variance en présence de données imputées pour des plans de sondage à grande entropie". Thèse, 2014. http://hdl.handle.net/1866/11120.

Full text source
Abstract:
Variance estimation in the case of item nonresponse treated by imputation is the main topic of this work. Treating the imputed values as if they had been observed may lead to substantial underestimation of the variance of point estimators. Classical variance estimators rely on the availability of second-order inclusion probabilities, which may be difficult (or even impossible) to calculate. We propose to study the properties of variance estimators obtained by approximating the second-order inclusion probabilities. These approximations are expressed in terms of first-order inclusion probabilities and are generally valid for high-entropy sampling designs. The results of a simulation study evaluating the properties of the proposed variance estimators in terms of bias and mean squared error will be presented.
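One well-known member of this approximation family is Hájek's high-entropy formula π_kl ≈ π_k π_l [1 − (1 − π_k)(1 − π_l)/d] with d = Σ_U π_j(1 − π_j). Plugging it into the Sen-Yates-Grundy estimator yields a variance estimate that uses only first-order probabilities. The sketch below, in which d is itself estimated from the sample (an assumption on our part), illustrates the idea without claiming to be the thesis's preferred variant:

```python
import numpy as np

def syg_variance_hajek(y, pi):
    """Sen-Yates-Grundy variance of the HT total with Hajek's approximation
    of the second-order inclusion probabilities; sum_s (1 - pi_j) estimates
    d = sum_U pi_j (1 - pi_j) without access to the full population."""
    y, pi = np.asarray(y, float), np.asarray(pi, float)
    d = np.sum(1.0 - pi)
    pkl = np.outer(pi, pi) * (1.0 - np.outer(1.0 - pi, 1.0 - pi) / d)
    z = y / pi
    w = (np.outer(pi, pi) - pkl) / pkl     # SYG weights (pi_k pi_l - pi_kl) / pi_kl
    np.fill_diagonal(w, 0.0)
    return 0.5 * np.sum(w * (z[:, None] - z[None, :]) ** 2)

print(syg_variance_hajek([12.0, 40.0, 7.5, 21.0], [0.2, 0.5, 0.1, 0.4]))
```

The quality of such estimators hinges on the design being high-entropy, which is precisely the setting the abstract restricts attention to.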
APA, Harvard, Vancouver, ISO, and other styles
