
Doctoral dissertations on the topic "Ratio estimators"



Browse the 50 best doctoral dissertations on the topic "Ratio estimators".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these details are available in the metadata.

Browse doctoral dissertations from a wide range of disciplines and compile an appropriate bibliography.

1

Chen, Dandan. "Amended Estimators of Several Ratios for Categorical Data". Digital Commons @ East Tennessee State University, 2006. https://dc.etsu.edu/etd/2218.

Full text available
Abstract:
Point estimation of several association parameters in categorical data is presented. Typically, a constant is added to the frequency counts before the association measure is computed. We will study the accuracy of these adjusted point estimators based on frequentist and Bayesian methods respectively. In particular, amended estimators for the ratio of independent Poisson rates, relative risk, odds ratio, and the ratio of marginal binomial proportions will be examined in terms of bias and mean squared error.
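As background to the kind of adjustment described above, the following Python sketch computes an odds ratio from a 2x2 table with a constant added to every cell (the familiar 0.5 of the Haldane-Anscombe correction). The table values are hypothetical, and this is offered only as a generic example of an amended estimator, not as the specific estimators studied in this thesis.

import math

def adjusted_odds_ratio(a, b, c, d, k=0.5):
    # Odds ratio from a 2x2 table with a constant k added to every cell.
    # Adding k (commonly 0.5) keeps the estimate finite when a cell count is
    # zero and reduces the small-sample bias of the log odds ratio.
    return ((a + k) * (d + k)) / ((b + k) * (c + k))

def log_or_se(a, b, c, d, k=0.5):
    # Approximate standard error of the adjusted log odds ratio.
    return math.sqrt(sum(1.0 / (x + k) for x in (a, b, c, d)))

# Hypothetical 2x2 table: rows = exposure, columns = outcome.
a, b, c, d = 12, 5, 3, 20
print(adjusted_odds_ratio(a, b, c, d), log_or_se(a, b, c, d))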
2

Ladak, Al-Karim Madatally. "Resampling-based variance estimators in ratio estimation with application to weigh scaling". Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29195.

Full text available
Abstract:
Weigh scaling is a method of estimating the total volume of timber harvested from a given region. The implementation of statistical sampling techniques in weigh scaling is described, along with related issues. A review of ratio estimators, along with variance estimators of the classical ratio estimator is conducted. The estimation of the variance of the estimated total volume is considered using jackknife- and bootstrap-based variance estimators. Weighted versions of the jackknife and bootstrap variance estimators are derived using influence functions and Fisher Information matrices. Empirical studies of analytic and resampling-based variance estimators are conducted, with particular emphasis on small sample properties and on robustness with respect to both the homoscedastic variance and zero-intercept population characteristics. With a squared error loss function, the resampling-based variance estimators are shown to perform very well at all sample sizes in finite populations with normally distributed errors. These estimators are found to have small negative biases for small sample sizes and to be robust with respect to heteroscedasticity.
Faculty of Science, Department of Statistics, Graduate
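For orientation, here is a minimal Python sketch of the classical ratio estimator of a population total together with a delete-one jackknife variance estimate, in the spirit of the estimators reviewed above; the weigh-scaling style data and the auxiliary total are hypothetical.

import numpy as np

def ratio_total_with_jackknife(y, x, x_total):
    # Classical ratio estimator of a total, Y_hat = (ybar / xbar) * X_total,
    # with a delete-one jackknife estimate of its variance.
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y)
    y_hat = y.mean() / x.mean() * x_total
    idx = np.arange(n)
    reps = np.array([y[idx != i].mean() / x[idx != i].mean() * x_total
                     for i in range(n)])
    var_jack = (n - 1) / n * np.sum((reps - reps.mean()) ** 2)
    return y_hat, var_jack

# Hypothetical weigh-scaling data: x = truck weight, y = scaled volume.
rng = np.random.default_rng(1)
x = rng.uniform(10, 30, size=25)
y = 2.5 * x + rng.normal(0, 3, size=25)
print(ratio_total_with_jackknife(y, x, x_total=5000.0))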
3

Hattaway, James T. "Parameter Estimation and Hypothesis Testing for the Truncated Normal Distribution with Applications to Introductory Statistics Grades". Diss., Brigham Young University, 2010. http://contentdm.lib.byu.edu/ETD/image/etd3412.pdf.

Full text available
4

Mazzarella, Gianluca. "Combining Jump and Kink ratio estimators in Regression Discontinuity Designs, with an application to the causal effect of retirement on well-being". Doctoral thesis, Università degli studi di Padova, 2015. http://hdl.handle.net/11577/3424749.

Full text available
Abstract:
Regression Discontinuity Design (RDD) is one of the most popular designs in the field of causal inference in nonexperimental settings. It is based on the idea that the treatment is (totally or partially) determined by a threshold point of an observed continuous variable. When the treatment is only partially determined by that variable, the design is usually called a fuzzy RDD. In this setting, given a certain outcome, the only effect that one is able to identify is the Average Treatment Effect (ATE) for the subpopulation of the Compliers at the threshold point. The ATE can be obtained as the ratio of the discontinuity at the threshold point in the average of the observed outcome to the discontinuity in the treatment probability. This thesis explores, from a methodological and empirical perspective, how the change of slope at the threshold point is informative for the estimate of the parameter of interest. Starting from the changes in the eligibility criteria for retirement that took place in Italy in the '90s, we propose an alternative estimator, based on Instrumental Variables, that is a combination of the discontinuity and the change of slope. Furthermore, we provide a simulation study to compare the efficiency of the different estimators. We then analyze the effects of retirement on subjective well-being. Finally, we generalize the results using the Two Sample Instrumental Variable estimator, in order to improve the efficiency of estimates based on administrative data and to construct delayed outcomes for the same cohorts.
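In the usual notation (standard fuzzy-RDD algebra rather than a formula quoted from the thesis), the estimand described above is the Wald-type ratio at the threshold c,

\tau = \frac{\lim_{x \downarrow c} E[Y \mid X = x] - \lim_{x \uparrow c} E[Y \mid X = x]}{\lim_{x \downarrow c} E[D \mid X = x] - \lim_{x \uparrow c} E[D \mid X = x]},

where Y is the outcome, D the treatment indicator and X the running variable; the thesis augments this jump-based ratio with the corresponding change of slope (kink) at c.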
5

Hariharan, S. "Channel estimators for HF radio links". Thesis, Loughborough University, 1988. https://dspace.lboro.ac.uk/2134/6733.

Full text available
Abstract:
The thesis is concerned with the estimation of the sampled impulse-response (SIR) of a time-varying HF channel, where the estimators are used in the receiver of a 4800 bits/s, quaternary phase shift keyed (QPSK) system, operating at 2400 baud with an 1800 Hz carrier. The HF modems employing maximum-likelihood detectors at the receiver require accurate knowledge of the SIR of the channel. With this objective in view, the thesis considers a number of channel estimation techniques, using an idealised model of the data transmission system. The thesis briefly describes the ionospheric propagation medium and the factors affecting data transmission over HF radio. It then presents an equivalent baseband model of the HF channel that has three separate Rayleigh fading paths (sky waves), with a 2 Hz frequency spread and transmission delays of 0, 1.1 and 3 milliseconds relative to the first sky wave. The estimation techniques studied are the Gradient estimator, the Recursive least-squares (RLS) Kalman estimator, the Adaptive channel estimators, the Efficient channel estimator (which takes into account prior knowledge of the number of fading paths in the channel), and the Fast Transversal Filter (FTF) estimator (which is a simplified form of the Kalman estimator). Several new algorithms based on the above-mentioned estimation techniques are also proposed. Results of computer simulation tests on the performance of the estimators over a typical worst-case channel are then presented. The estimators are reasonably optimized to achieve the minimum mean-square estimation error, and adequate allowance has been made for stabilization before the commencement of actual measurements. The results therefore represent the steady-state performance of the estimators. The most significant result obtained in this study is the performance of the Adaptive estimator. When the characteristics of the channel are known, the Efficient estimators have the best performance and the Gradient estimators the poorest. Kalman estimators are the most complex and Gradient estimators are the simplest. Kalman estimators have a performance rather similar to that of Gradient estimators. In terms of both performance and complexity, the Adaptive estimator lies between the Kalman and Efficient estimators. FTF estimators are known to exhibit numerical instability, for which an effective stabilization technique is proposed. Simulation tests have shown that the mean squared estimation error is an adequate measure for comparing the performance of the estimators.
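For reference, the simplest member of the family compared above is a gradient (LMS) update of the sampled impulse response; the Python sketch below uses a hypothetical 3-tap channel and training sequence and is not the thesis's exact algorithm.

import numpy as np

def lms_channel_estimate(s, r, taps=3, mu=0.05):
    # Gradient (LMS) estimate of a channel impulse response from a known
    # transmitted sequence s and the received sequence r.
    h = np.zeros(taps, dtype=complex)
    for k in range(taps, len(r)):
        x = s[k - taps + 1:k + 1][::-1]   # most recent symbols first
        e = r[k] - h @ x                  # prediction error
        h = h + mu * e * np.conj(x)       # gradient correction
    return h

# Hypothetical QPSK training sequence sent through a 3-tap channel.
rng = np.random.default_rng(0)
s = np.exp(1j * np.pi / 4 * (2 * rng.integers(0, 4, 500) + 1))
h_true = np.array([1.0, 0.4 - 0.2j, 0.1j])
noise = 0.05 * (rng.standard_normal(500) + 1j * rng.standard_normal(500))
r = np.convolve(s, h_true)[:500] + noise
print(np.round(lms_channel_estimate(s, r), 3))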
6

Gendron, Paul John. "A comparison of digital beacon receiver frequency estimators". Thesis, Virginia Polytechnic Institute and State University, 1993. http://scholar.lib.vt.edu/theses/available/etd-09292009-020307/.

Full text available
7

Sousa, Rita Cristina Pinto de. "Parameter estimation in the presence of auxiliary information". Doctoral thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/11295.

Full text available
Abstract:
Dissertation submitted for the degree of Doctor in Statistics and Risk Management, specialty in Statistics
In survey research, there are many situations when the primary variable of interest is sensitive. The sensitivity of some queries can give rise to a refusal to answer or to false answers given intentionally. Surveys can be conducted in a variety of settings, in part dictated by the mode of data collection, and these settings can differ in how much privacy they offer the respondent. The estimates obtained from a direct survey on sensitive questions would be subject to high bias. A variety of techniques have been used to improve reporting by increasing the privacy of the respondents. The Randomized Response Technique (RRT), introduced by Warner in 1965, develops a random relation between the individual's response and the question. This technique provides confidentiality to respondents and still allows the interviewers to estimate the characteristic of interest at an aggregate level. In this thesis we propose some estimators to improve the mean estimation of a sensitive variable based on a RRT by making use of available non-sensitive auxiliary information. In the first part of this thesis we present the ratio and the regression estimators as well as some generalizations in order to study the gain in the estimation over the ordinary RRT mean estimator. In chapters 4 and 5 we study the performance of some exponential type estimators, also based on a RRT. The final part of the thesis illustrates an approach to mean estimation in stratified sampling. This study confirms some previous results for a different sample design. An extensive simulation study and an application to a real dataset are done for all the study estimators to evaluate their performance. In the last chapter we present a general discussion referring to the main results and conclusions, as well as an application to a real dataset which compares the performance of the study estimators.
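As a reminder of the two building blocks named above, here are the textbook (non-randomized) ratio and regression estimators of a mean with a known auxiliary mean; in the RRT setting of the thesis the study variable would be a scrambled response, so this Python sketch with hypothetical data is only the skeleton.

import numpy as np

def ratio_mean(y, x, x_bar_pop):
    # Ratio estimator of the mean of y using the known population mean of x.
    return np.mean(y) * x_bar_pop / np.mean(x)

def regression_mean(y, x, x_bar_pop):
    # Linear regression estimator of the mean of y.
    b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    return np.mean(y) + b * (x_bar_pop - np.mean(x))

rng = np.random.default_rng(7)
x = rng.normal(50, 10, 200)            # non-sensitive auxiliary variable
y = 0.8 * x + rng.normal(0, 5, 200)    # stand-in for the (scrambled) study variable
print(ratio_mean(y, x, 52.0), regression_mean(y, x, 52.0))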
8

Winnett, Angela Susan. "Flexible estimators of hazard ratios for exploratory and residual analysis". Thesis, University College London (University of London), 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.312945.

Full text available
9

Brownridge, Alyce Mahan. "Comparisons of lysimetric and Bowen ratio estimates of evapotranspiration". Thesis, The University of Arizona, 1985. http://hdl.handle.net/10150/191841.

Full text available
Abstract:
Two sets of Bowen ratio and lysimeter measurements of evapotranspiration (ET) were compared for a field of winter wheat in Phoenix, Arizona. Daytime data for ten days of clear skies were examined. Daily lysimeter ET (LET) generally exceeded Bowen ratio ET (BRET). Advective cases were compared with lapse cases. For one set of lysimeter and Bowen ratio measurements, average LET was 10% more than BRET during advective conditions, while average LET and BRET were equal during lapse conditions. Results for the other pair of measurements were less conclusive due to unresolved lysimeter problems, with average LET 13% more than BRET during advection, and average LET 13% less than BRET during lapse conditions. These results suggest that the assumption of equal eddy diffusivities for heat and vapor caused BRET to underestimate evapotranspiration during advection. The Bowen ratio, wind speed, and wind direction were identified as possible variables for correcting BRET underestimation.
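For context, Bowen-ratio evapotranspiration is conventionally obtained from the surface energy balance; in standard notation (not quoted from the thesis),

\lambda E = \frac{R_n - G}{1 + \beta}, \qquad \beta = \frac{H}{\lambda E} = \gamma \, \frac{\Delta T}{\Delta e},

where R_n is net radiation, G the soil heat flux, H the sensible heat flux, \gamma the psychrometric constant, and \Delta T, \Delta e the vertical temperature and vapour-pressure differences; the equal-eddy-diffusivity assumption discussed above enters through this expression for \beta.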
10

Manno, Michael S. "An evaluation of the odds ratio as an estimator of vaccine efficacy". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0001/MQ28747.pdf.

Full text available
11

Bjerre, Lise M. (Lise Marie). "Analysis of etiologic studies : understanding the Mantel-Haenszel estimator of odds ratio". Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22849.

Full text available
Abstract:
The present work is an endeavour to gain an understanding of the Mantel-Haenszel (MH) estimator beyond that which can be garnered from textbook consultation. To this end, I explore two aspects of the estimator which cannot be fully understood from the textbooks: the rationale for its weights, and its performance. The first one has to do with the basis for the particular choice of the estimator's weights, and the second one with the estimator's distributional properties and their assessment.
The approach to each of the topics is, first, to identify and understand what is presented in the original literature; and second, critically to examine the findings in light of established principles of data analysis, to shed new light on the issues that turn out to be ill-understood.
Rather striking new understandings arise. Theoretical justification for the "weights" of the cross-products cannot be found in the literature. The textbook conceptualization of the MH estimator as a weighted average of stratum-specific estimates of odds ratio (unconditional) is supported by the original literature, yet untenable.
A new and tenable conceptualization of the estimator is proposed. Unrecognized in the literature, the stratum-specific cross-products involve a random aspect of the data, and the structure of the estimator is hence unjustifiable in this respect. A first order improvement is proposed and illustrated using examples.
Theoretical evaluation of the estimator's performance is limited by the literature's focus on one of the asymptotic situations. Results of empirical evaluations of the estimator are in accord with textbook claims, but are still too limited. (Abstract shortened by UMI.)
12

Guo, Changbin. "Bayesian Reference Inference on the Ratio of Poisson Rates". Digital Commons @ East Tennessee State University, 2006. https://dc.etsu.edu/etd/2194.

Full text available
Abstract:
Bayesian reference analysis is a method of determining the prior under the Bayesian paradigm. It incorporates as little information as possible from the experiment. Estimation of the ratio of two independent Poisson rates is a common practical problem. In this thesis, the method of reference analysis is applied to derive the posterior distribution of the ratio of two independent Poisson rates, and then to construct point and interval estimates based on the reference posterior. In addition, the Frequentist coverage property of HPD intervals is verified through simulation.
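One common computational route to such estimates (given here only as background; the thesis derives the reference posterior itself) conditions on the total count, so that x1 given x1 + x2 is binomial with success probability p = t1*lambda1 / (t1*lambda1 + t2*lambda2), and an interval for p maps back to one for the ratio phi = lambda1/lambda2. A Python sketch with a Jeffreys Beta(1/2, 1/2) prior and hypothetical counts:

from scipy.stats import beta

def rate_ratio_interval(x1, t1, x2, t2, level=0.95):
    # Equal-tailed interval for phi = lambda1 / lambda2 from independent
    # Poisson counts, via the conditional binomial and a Jeffreys prior.
    lo_p, hi_p = beta.ppf([(1 - level) / 2, (1 + level) / 2],
                          x1 + 0.5, x2 + 0.5)
    to_phi = lambda p: (t2 / t1) * p / (1 - p)
    return to_phi(lo_p), to_phi(hi_p)

# Hypothetical data: 12 events in 100 units of exposure vs 7 events in 120.
print(rate_ratio_interval(12, 100.0, 7, 120.0))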
13

Cai, Bing. "The case-crossover design, an efficient rate ratio estimator based on prescription times". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape2/PQDD_0017/MQ55042.pdf.

Full text available
14

Cai, Bing. "The case-crossover design : an efficient rate ratio estimator based on prescription times". Thesis, McGill University, 1999. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=30116.

Full text available
Abstract:
The case-crossover design is a new epidemiological method that evolved around binary exposures and the binomial distribution. We develop a new approach of data analysis for this design based on the actual exposure occurrence times, such as those available from computerized prescription databases. Assuming an exponential distribution for the inter-exposure onset times, we derive two new matched-paired estimators of the odds-ratio, one weighted the other unweighted. A simulation study demonstrates that both new estimators based on the exponential distribution are more efficient than the classical estimator based on the binomial distribution and that the unweighted estimator appears to be the most valid. These new estimators of the odds-ratio are also more flexible and amenable to verifying some of the assumptions behind the case-crossover design. We illustrate this approach with data on 54 asthma deaths identified from the Saskatchewan Health databases, to assess the association with the use of inhaled beta-agonists.
15

Muller, Fernanda Maria. "MELHORAMENTOS INFERENCIAIS NO MODELO BETA-SKEW-T-EGARCH". Universidade Federal de Santa Maria, 2016. http://repositorio.ufsm.br/handle/1/8394.

Full text available
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The Beta-Skew-t-EGARCH model was recently proposed in literature to model the volatility of financial returns. The inferences over the model parameters are based on the maximum likelihood method. The maximum likelihood estimators present good asymptotic properties; however, in finite sample sizes they can be considerably biased. Monte Carlo simulations were used to evaluate the finite sample performance of point estimators. Numerical results indicated that the maximum likelihood estimators of some parameters are biased in sample sizes smaller than 3,000. Thus, bootstrap bias correction procedures were considered to obtain more accurate estimators in small samples. Better quality of forecasts was observed when the model with bias-corrected estimators was considered. In addition, we propose a likelihood ratio test to assist in the selection of the Beta-Skew-t-EGARCH model with one or two volatility components. The numerical evaluation of the two-component test showed distorted null rejection rates in sample sizes smaller than or equal to 1,000. To improve the performance of the proposed test in small samples, the bootstrap-based likelihood ratio test and the bootstrap Bartlett correction were considered. The bootstrap-based test exhibited the closest null rejection rates to the nominal values. The evaluation results of the two-component tests showed their practical usefulness. Finally, an application to the log-returns of the German stock index of the proposed methods was presented.
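The bias-correction step described above follows the usual bootstrap recipe, sketched generically below in Python; a simple plug-in variance estimator stands in for the Beta-Skew-t-EGARCH maximum-likelihood fit, which is far more involved (and for the time-series model in the thesis a model-based bootstrap would replace this iid resampling).

import numpy as np

def bootstrap_bias_corrected(data, estimator, n_boot=999, seed=0):
    # Bootstrap bias correction: theta_bc = 2 * theta_hat - mean(bootstrap estimates).
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    theta_hat = estimator(data)
    boot = np.array([estimator(rng.choice(data, size=len(data), replace=True))
                     for _ in range(n_boot)])
    return 2 * theta_hat - boot.mean(), theta_hat

# Toy example: the plug-in variance estimator (divisor n) is biased downwards.
sample = np.random.default_rng(3).standard_normal(40)
print(bootstrap_bias_corrected(sample, lambda d: d.var()))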
16

Sayre, Michelle Marie. "Development of a Block Processing Carrier to Noise Ratio Estimator for the Global Positioning System". Ohio University / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1071063030.

Full text available
17

Newson, Paul. "Adaptive algorithms for equalisers and channel estimators for use within digital mobile radio systems". Thesis, University of Edinburgh, 1992. http://hdl.handle.net/1842/15508.

Full text available
Abstract:
This thesis is primarily concerned with the development of adaptive algorithms for equaliser coefficient computation within the highly time variant radio communications environment. More specifically the problem of equaliser coefficient computation within the pan European digital mobile radio system is considered. The work encompasses both equaliser and channel estimator adaptive algorithms and techniques for the automatic synthesis of linear transversal equalisers (LTE), decision feedback equalisers (DFE) and adaptive maximum likelihood sequence estimators (MLSE) are developed. Within the thesis it is shown that equaliser performance can be significantly improved by adaptively updating the equaliser throughout unknown data transmission. Initially the performance of conventional techniques, such as gradient search (GS) and least squares (LS) algorithms, when employed in this respect is investigated. Although each is shown to yield performance improvement over the system in which no adaptive update is employed, it is shown that under highly time variant conditions the performance of the conventional algorithms is subject to several limitations. This conclusion provides the motivation for the development of a number of alternative adaptive algorithms which offer performance advantage under highly time variant conditions. Two classes of algorithm are proposed. Within each a priori knowledge of the time variant characteristics of the channel is used in order to partially cancel estimation error due to the channel time variation. Within the first this is achieved by augmenting the update equation of each of the conventional algorithms by inclusion of an additional parameter set representing an estimate of the rate of change (ROC) of the channel coefficients. The algorithms are thereby able to form a prediction of the instantaneous channel variation and, therefore, to compensate for the channel non-stationarity. In the second class of algorithm a predetermined model of channel time variability is incorporated directly into the algorithm structure.
18

De Paola, Rosita. "Median estimation using auxiliary variables". Doctoral thesis, Università degli Studi di Milano-Bicocca, 2012. http://hdl.handle.net/10281/36075.

Full text available
Abstract:
In the present study the estimation of the median is considered using different methods of analysis. First, estimation of the median without auxiliary information is analyzed. Then the method proposed by Kuk and Mak in 1989 is presented: this way of estimating the median is based on knowledge of the population median of an auxiliary variable X. Another method that uses the median of the auxiliary variable is the ratio estimator. Two methods based on the regression estimator are then analyzed: the first considers median regression, the second is based on the least squares method. Two experiments have been carried out in order to compare the proposed methods. First, the methods are compared by selecting all possible samples from nine different small populations. The second application is based on the selection of pairs of random numbers from a bivariate random variable with a bivariate log-normal distribution. In this situation as well, the methods for estimating the median are compared in terms of their expected values and mean square errors.
19

Godfrey, Matthew Howland. "Sex ratios of sea turtle hatchlings, direct and indirect estimates". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0004/NQ27932.pdf.

Full text available
20

Edvinsson, Simon. "Estimation of the local Hurst function of multifractional Brownian motion : A second difference increment ratio estimator". Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-105770.

Full text available
Abstract:
In this thesis, a specific type of stochastic processes displaying time-dependent regularity is studied. Specifically, multifractional Brownian motion processes are examined. Due to their properties, these processes have gained interest in various fields of research. An important aspect when modeling using such processes are accurate estimates of the time-varying pointwise regularity. This thesis proposes a moving window ratio estimator using the distributional properties of the second difference increments of a discretized multifractional Brownian motion. The estimator captures the behaviour of the regularity on average. In an attempt to increase the accuracy of single trajectory pointwise estimates, a smoothing approach using nonlinear regression is employed. The proposed estimator is compared to an estimator based on the Increment Ratio Statistic.
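A bare-bones variant of the idea, shown only to fix notation (the estimator in the thesis is based on a more refined increment-ratio statistic): within a moving window, compare the empirical variances of second differences taken at two lags; for fractional Brownian motion this ratio scales as 2^(2H), giving a pointwise estimate of H. A Python sketch:

import numpy as np

def local_hurst(x, window=200):
    # Pointwise Hurst estimates from the ratio of second-difference
    # variances at lags 1 and 2, computed over a moving window.
    x = np.asarray(x, float)
    d1 = x[2:] - 2 * x[1:-1] + x[:-2]     # second differences, lag 1
    d2 = x[4:] - 2 * x[2:-2] + x[:-4]     # second differences, lag 2
    h = np.full(len(x), np.nan)
    for t in range(window, len(x) - window):
        v1 = np.mean(d1[t - window:t + window] ** 2)
        v2 = np.mean(d2[t - window:t + window] ** 2)
        h[t] = 0.5 * np.log2(v2 / v1)
    return h

# Sanity check on ordinary Brownian motion, where H should hover near 0.5.
bm = np.cumsum(np.random.default_rng(5).standard_normal(4000))
print(np.nanmean(local_hurst(bm)))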
21

Lundy, Erin. "The effect of assigning different index dates for control exposure measurement on odds ratio estimates". Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=110733.

Full text available
Abstract:
In case-control studies it is reasonable to consider the exposure history of a case prior to disease onset. For the controls, it is necessary to define comparable periods of exposure opportunity. Motivated by data from a case-control study of the environmental risk factors for Multiple Sclerosis, we propose a control-to-case matching algorithm that assigns pseudo ages at onset, index ages, to the controls. Based on a simulation study, we conclude that our index age algorithm yields greater power than the default method of assigning a control's current age as their index age, especially for moderate effects. Furthermore, we present theoretical results showing that for binary and ordered categorical exposure variables, using an inappropriate index age assignment method can obscure or even mask a true effect. The effect of the choice of index age assignment method on the inference on the odds ratio is highly data dependent. In contrast to the results of our simulation study, our analysis of the data from the motivating case-control study resulted in odds ratio and variance estimates that were very similar regardless of the choice of the method of assigning index ages.
22

DIAS, MAURICIO HENRIQUE COSTA. "ACTUAL MOBILE RADIO PROPAGATION CHANNEL RESPONSES ESTIMATES IN THE SPATIAL AND TEMPORAL DOMAINS". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2003. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=3502@1.

Full text available
Abstract:
INSTITUTO MILITAR DE ENGENHARIA
In the present mobile communications scenario, researchers have once again turned special attention to antenna arrays, particularly when adaptive schemes are employed to modify their radiation patterns. One of their main applications results in considerable increases in the spectral efficiency of present and next-generation mobile systems. The other major application is position location systems, since some of the known techniques comprise angle-of-arrival estimation using antenna arrays. Given these possibilities, studies of mobile radio propagation channel variations grow in relevance, especially regarding the antenna arrays' main domain of action: the spatial domain. The present work tries to contribute to this context by experimentally investigating the actual mobile radio channel over the temporal (delays) and spatial (angles of arrival) domains. Regionally speaking, similar contributions based on simulations are already found, but none based on measurements. In particular, 1.8 GHz indoor soundings have been carried out. Two different temporal-spatial sounding techniques have been deployed, based on an available wideband channel sounder successfully assembled and tested as the major contribution of an MSc dissertation recently presented by a member of the same research team to which this thesis belongs. One of these techniques synthesizes the array by carrying out the sounding with a single antenna, which is successively moved to occupy the spots corresponding to the array elements. The other method employs an actual array. In both cases, the simplest array configuration has been used: the uniform linear one. Space-time spectra were not directly available in real time during the soundings. The estimates have been processed afterwards, applying techniques such as the correlogram over the delay domain, and four distinct methods over the spatial domain, the main focus of the present work. Two conventional methods have been used, as well as two parametric ones, potentially capable of increasing the resolution of the estimates, assuming reasonable hypotheses regarding the expected responses. With the estimated spectral responses in hand, comparisons with theoretical estimates allowed a performance assessment of the employed methods. In addition to the spatial channel experimental investigation, the potential application of wavelet theory to the study of the mobile radio channel has been examined. Notably, one of the major applications of wavelet theory has been tested as a post-processing technique to improve delay-domain spectral responses. Wavelet-decomposition-based de-noising has been applied to a large ensemble of measurements, available as the product of previous work by the research group to which this thesis is attached, with remarkable results.
23

Kilcioglu, Caglar. "Fpga Implementation Of Jointly Operating Channel Estimator And Parallelized Decoder". Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610946/index.pdf.

Full text available
Abstract:
In this thesis, implementation details of a joint channel estimator and parallelized decoder structure on an FPGA-based platform are considered. Turbo decoders are used for the decoding process in this structure. However, turbo decoders introduce large decoding latencies since they operate in an iterative manner. To overcome that problem, parallelization is applied to the turbo codes and the resulting parallel decodable turbo code (PDTC) structure is employed for coding. The performance of a PDTC decoder and the parameters affecting it are given for an additive white Gaussian noise (AWGN) channel. These results are compared with the results of a parallel study which employs a different architecture in implementing the PDTC decoder. In the fading channel case, a pilot symbol assisted estimation method is employed for the channel estimation process. In this method, the channel coefficients are estimated by a 2-way LMS (least mean-squares) algorithm. The difficulties in the implementation of this joint structure in fixed-point arithmetic, and the solutions to overcome these difficulties, are described in detail. The proposed joint structure is tested with varying design parameters over a Rayleigh fading channel. The overall decoding latencies and allowed data rates are calculated after obtaining a reasonable performance from the design.
24

Chesson, Kristin Elaine. "A one-group parametric sensitivity analysis for the graphite isotope ratio method and other related techniques using ORIGEN 2.2". Thesis, College Station, Tex.: Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1944.

Full text available
25

Kenyon, Jonathan. "PyMORESANE: A Pythonic and CUDA-accelerated implementation of the MORESANE deconvolution algorithm". Thesis, Rhodes University, 2015. http://hdl.handle.net/10962/d1020098.

Full text available
Abstract:
The inadequacies of the current generation of deconvolution algorithms are rapidly becoming apparent as new, more sensitive radio interferometers are constructed. In light of these inadequacies, there is renewed interest in the field of deconvolution. Many new algorithms are being developed using the mathematical framework of compressed sensing. One such technique, MORESANE, has recently been shown to be a powerful tool for the recovery of faint diffuse emission from synthetic and simulated data. However, the original implementation is not well-suited to large problem sizes due to its computational complexity. Additionally, its use of proprietary software prevents it from being freely distributed and used. This has motivated the development of a freely available Python implementation, PyMORESANE. This thesis describes the implementation of PyMORESANE as well as its subsequent augmentation with MPU and GPGPU code. These additions accelerate the algorithm and thus make it competitive with its legacy counterparts. The acceleration of the algorithm is verified by means of benchmarking tests for varying image size and complexity. Additionally, PyMORESANE is shown to work not only on synthetic data, but on real observational data. This verification means that the MORESANE algorithm, and consequently the PyMORESANE implementation, can be added to the current arsenal of deconvolution tools.
26

Korkmaz, Yusuf. "Tracking Of Multiple Ground Targets In Clutter With Interacting Multiple Model Estimator". Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615727/index.pdf.

Full text available
Abstract:
In this thesis study, single-target tracking algorithms, including the IMM-PDA and IMM-IPDA algorithms; optimal approaches in multitarget tracking, including the IMM-JPDA, IMM-IJPDA and IMM-JIPDA algorithms; and an example of the linear multi-target approaches in multitarget tracking, the IMM-LMIPDA algorithm, have been studied and implemented in MATLAB for comparison. Simulations were carried out in various realistic test scenarios, including single target tracking, tracking of multiple targets moving in convoy fashion, two targets merging in a junction, two targets merging and departing in junctions, and multitarget tracking under isolated-track situations. RMSE performance, track loss and computational load evaluations were done for these algorithms under the test scenarios dealing with these situations. Benchmarks based on these outcomes are presented.
27

Mguda, Zolile Martin. "Bent tail radio sources as tracers of galaxy clusters at high redshift and SMBH mass estimates". Doctoral thesis, Faculty of Science, 2021. http://hdl.handle.net/11427/33807.

Full text available
Abstract:
Bent tail radio sources (BTRSs) are radio galaxies which have jets that show a characteristic C-shape that is believed to be due to ram pressure caused by the motion of the galaxy through the ambient medium. They are generally found in galaxy clusters in the local Universe. They have already been used in observations as tracers of galaxy clusters at redshifts of up to z ~ 1. They have, however, been shown to be numerous in galaxy groups as well. The ability to find high redshift galaxy clusters is important in cosmology because they are important cosmological probes. According to the ΛCDM model, galaxy clusters form around redshift z ~ 2, and finding clusters of halo mass greater than 10^14 M☉ at redshift greater than z = 2.5 would disprove the current concordance model. Finding galaxy clusters at those redshifts is more feasible with the new generation of radio telescopes and the upcoming Square Kilometre Array (SKA). In this work we look at some SMBH mass measurements, which are crucial in the determination of the correlations between the SMBH mass and some galaxy characteristics including jet length and luminosity. The high redshift SMBH mass measurement methods are calibrated using local Universe correlations. This makes SMBH mass measurement an important aspect in the study of high redshift radio galaxies and hence BTRSs. We use cosmological simulations from the MareNostrum Universe simulation to look at the efficacy of using BTRSs as tracers of clusters, assuming that ram pressure is the cause of the jet bending. This is the first step in predicting the possible number of BTRSs that we may observe with the SKA. We find that SMBH masses can be measured up to redshift z = 4.5 using the virial mass estimator method. BTRSs are equally likely to be found in galaxy clusters and galaxy groups in the local Universe. This means that around 50% of the BTRSs that we are likely to find at high redshift will be in galaxy clusters. However, finding a pair of BTRSs in close proximity is a sign of a galaxy cluster environment. These results are still dependent on the resolution of degeneracies in our understanding of the duty cycles of AGN radio jets, projection effects of the radio jets, the environmental dependence of radio-loudness in galaxies and other open questions.
28

Gottfridsson, Anneli. "Likelihood ratio tests of separable or double separable covariance structure, and the empirical null distribution". Thesis, Linköpings universitet, Matematiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-69738.

Full text available
Abstract:
The focus in this thesis is on the calculation of an empirical null distribution for likelihood ratio tests testing either separable or double separable covariance matrix structures versus an unstructured covariance matrix. These calculations have been performed for various dimensions and sample sizes, and are compared with the asymptotic χ2-distribution that is commonly used as an approximative distribution. Tests of separable structures are of particular interest in cases when data is collected such that more than one relation between the components of the observation is suspected. For instance, if there are both a spatial and a temporal aspect, a hypothesis of two covariance matrices, one for each aspect, is reasonable.
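The general recipe behind such an empirical null distribution is simple to state; the Python sketch below simulates a user-supplied likelihood-ratio statistic under H0 and compares its upper quantiles with the asymptotic chi-square reference. The toy statistic is for testing a normal mean with known variance (df = 1), not the separable-covariance tests of the thesis.

import numpy as np
from scipy.stats import chi2

def empirical_null(lr_statistic, simulate_h0, df, n_sim=2000, seed=0):
    # Simulate the LR statistic under H0 and compare empirical quantiles
    # with the asymptotic chi-square approximation.
    rng = np.random.default_rng(seed)
    stats = np.array([lr_statistic(simulate_h0(rng)) for _ in range(n_sim)])
    for q in (0.90, 0.95, 0.99):
        print(q, np.quantile(stats, q), chi2.ppf(q, df))
    return stats

# Toy case: -2 log LR for H0: mu = 0 with sigma = 1 known is n * ybar^2.
empirical_null(lambda y: len(y) * y.mean() ** 2,
               lambda rng: rng.standard_normal(30), df=1)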
29

Liang, Yuli. "Contributions to Estimation and Testing Block Covariance Structures in Multivariate Normal Models". Doctoral thesis, Stockholms universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-115347.

Full text available
Abstract:
This thesis concerns inference problems in balanced random effects models with a so-called block circular Toeplitz covariance structure. This class of covariance structures describes the dependency of some specific multivariate two-level data when both compound symmetry and circular symmetry appear simultaneously. We derive two covariance structures under two different invariance restrictions. The obtained covariance structures reflect both circularity and exchangeability present in the data. In particular, estimation in the balanced random effects with block circular covariance matrices is considered. The spectral properties of such patterned covariance matrices are provided. Maximum likelihood estimation is performed through the spectral decomposition of the patterned covariance matrices. Existence of the explicit maximum likelihood estimators is discussed and sufficient conditions for obtaining explicit and unique estimators for the variance-covariance components are derived. Different restricted models are discussed and the corresponding maximum likelihood estimators are presented. This thesis also deals with hypothesis testing of block covariance structures, especially block circular Toeplitz covariance matrices. We consider both so-called external tests and internal tests. In the external tests, various hypotheses about testing block covariance structures, as well as mean structures, are considered, and the internal tests are concerned with testing specific covariance parameters given the block circular Toeplitz structure. Likelihood ratio tests are constructed, and the null distributions of the corresponding test statistics are derived.
30

Ureten, Suzan. "Single and Multiple Emitter Localization in Cognitive Radio Networks". Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/35692.

Full text available
Abstract:
Cognitive radio (CR) is often described as a context-intelligent radio, capable of changing its transmit parameters dynamically based on interaction with the environment in which it operates. The work in this thesis explores the problem of using received signal strength (RSS) measurements taken by a network of CR nodes to generate an interference map of a given geographical area and estimate the locations of multiple primary transmitters that operate simultaneously in the area. A probabilistic model of the problem is developed, and algorithms to address location estimation challenges are proposed. Three approaches are proposed to solve the localization problem. The first approach is based on estimating the locations from the generated interference map when no information about the propagation model or any of its parameters is present. The second approach is based on approximating the maximum likelihood (ML) estimate of the transmitter locations with the grid search method when the model is known and its parameters are available. The third approach also requires knowledge of the model parameters, but it is based on generating samples from the joint posterior of the unknown location parameters with Markov chain Monte Carlo (MCMC) methods, as an alternative to the highly computationally complex grid search approach. For the RF cartography generation problem, we study global and local interpolation techniques, specifically Delaunay triangulation based techniques, as the use of an existing triangulation provides a computationally attractive solution. We present a comparative performance evaluation of these interpolation techniques in terms of RF field strength estimation and emitter localization. Even though the estimates obtained from the generated interference maps are less accurate than the ML estimator, the rough estimates are used to initialize a more accurate algorithm such as the MCMC technique in order to reduce its complexity. The complexity issues of ML estimators based on a full grid search are also addressed by various types of iterative grid search methods. One challenge in applying the ML estimation algorithm to the multiple-emitter localization problem is that it requires a pdf approximation for sums of log-normal random variables in the likelihood calculations at each grid location. This motivates our investigation of the sum-of-log-normal approximations studied in the literature, in order to select the approximation appropriate to our model assumptions. As a final extension of this work, we propose our own approximation based on fitting a distribution to a set of simulated data, and compare our approach with the well-known Fenton-Wilkinson approximation, a simple and computationally efficient method that fits a log-normal distribution to a sum of log-normals by matching the first and second central moments of the random variables. We demonstrate that the location estimation accuracy of the grid search technique obtained with our proposed approximation is higher than the one obtained with Fenton-Wilkinson's in many different scenarios.
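For reference, the Fenton-Wilkinson step mentioned above fits a single log-normal to the sum of independent log-normal terms by matching the first two moments; the Python sketch below uses the standard textbook formulas with hypothetical parameters.

import numpy as np

def fenton_wilkinson(mu, sigma):
    # Parameters (mu_z, sigma_z) of the log-normal whose mean and variance
    # match those of a sum of independent log-normals with parameters mu[i], sigma[i].
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    m = np.exp(mu + sigma ** 2 / 2)                              # component means
    v = (np.exp(sigma ** 2) - 1) * np.exp(2 * mu + sigma ** 2)   # component variances
    total_mean, total_var = m.sum(), v.sum()
    sigma_z2 = np.log(1 + total_var / total_mean ** 2)
    mu_z = np.log(total_mean) - sigma_z2 / 2
    return mu_z, np.sqrt(sigma_z2)

# Hypothetical shadowing terms, already expressed on the natural-log scale.
print(fenton_wilkinson([0.0, 0.5, -0.3], [0.6, 0.6, 0.6]))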
31

Seguro, Requejo Maria Isabel. "Shelf-sea gross and net production estimates from triple oxygen isotopes and oxygen-argon ratios in relation with phytoplankton physiology". Thesis, University of East Anglia, 2017. https://ueaeprints.uea.ac.uk/69374/.

Full text available
Abstract:
Shelf seas represent only 10 % of the ocean area, but support 30 % of oceanic primary production. There are few measurements of biological production at high spatial and temporal resolution in such physically dynamic systems. Here, I use dissolved oxygen-to-argon (O2/Ar) ratios and triple oxygen isotopes (δ(17O), δ(18O)) to estimate net and gross biological production seasonally in the Celtic Sea between summer 2014 and summer 2015, as part of the NERC Shelf-Sea Biogeochemistry programme. O2/Ar was measured continuously using a shipboard membrane inlet mass spectrometer. Discrete water samples from hydrocasts were used to measure O2/Ar, δ(17O) and δ(18O) depth profiles. The data were combined with wind-speed based gas exchange parameterisations to calculate biological air-sea oxygen fluxes. These fluxes were corrected for non-steady state and diapycnal diffusion to give net community production (N(O2/Ar)) and gross O2 production (G(17O)). N(O2/Ar) was highest in spring at (33±41) mmol m-2 d-1, and G(17O) was highest in summer at (494±370) mmol m-2 d-1, while autumn was net heterotrophic with N(O2/Ar) = (–14±28) mmol m-2 d-1. During spring, biological production was spatially heterogeneous, highlighting the importance of high resolution biological production measurements. The ratio of N(O2/Ar) to G(17O), f(O2), was highest in spring at 0.18±0.03, corresponding to 0.34±0.06 in carbon equivalents; about 0.05 in summer and < 0 in autumn/winter. Statistical measurement uncertainties increase when terms other than air-sea exchange fluxes are included in the calculations. Additionally, the electron transfer rate derived from fast repetition rate fluorometry measurements was compared with G(17O), but no simple relationship was found. This study characterised the seasonal biological patterns in production rates and shows that the Celtic Sea is a net carbon sink in spring and summer. Such measurements can help reconcile the differences between satellite and in situ productivity estimates, and improve our understanding of the biological carbon pump.
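At steady state and ignoring vertical mixing, the mixed-layer budget behind N(O2/Ar) reduces to the standard air-sea flux expression (shown for orientation; the thesis applies non-steady-state and diapycnal corrections on top of it):

N(\mathrm{O_2/Ar}) \approx k_{\mathrm{O_2}} \, [\mathrm{O_2}]_{\mathrm{sat}} \, \Delta(\mathrm{O_2/Ar}), \qquad \Delta(\mathrm{O_2/Ar}) = \frac{(\mathrm{O_2/Ar})_{\mathrm{meas}}}{(\mathrm{O_2/Ar})_{\mathrm{sat}}} - 1,

with k_O2 the gas transfer velocity and [O2]_sat the oxygen saturation concentration.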
32

Camlica, Sedat. "Recursive Passive Localization Methods Using Time Difference Of Arrival". Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/2/12611032/index.pdf.

Full text available
Abstract:
In this thesis, the passive localization problem is studied. Robust and recursive solutions are presented using Time Difference of Arrival (TDOA). The TDOA measurements are assumed to be gathered by moving sensors, which synthetically increases the number of sensors. First of all, a location estimator should be capable of processing new measurements without discarding past data. This task can be accomplished by updating the estimate recursively whenever new measurements are available. Convenient forms of recursive filters, such as the Kalman filter, the Extended Kalman filter, etc., can be applied. Recursive filters can be divided into two major groups: (a) the first type of recursive estimators processes the TDOA measurements directly, and (b) the second type consists of post-processing estimators, which use the TDOA indirectly; instead, they fuse or smooth available location estimates. In this sense, recursive passive localization methods are presented for both types. In practice, issues such as the sensors being spatially distant from each other and/or a radar with a rotating narrow beam may prevent the sensors from receiving the same pulse. In such a case, the sensors cannot construct common TDOA measurements, which means that they cannot accomplish the location estimation procedure. Additionally, there may be more than one sensor group making TDOA measurements. An estimator should be capable of fusing the measurements from different sensor groups. A sensor group consists of sensors which are able to receive the same pulse. In this work, solutions to these tasks are also given. The performance of the presented methods is compared in simulation studies. The method having the best performance, which is based on the Kalman filter, is also capable of estimating the track of a moving emitter by directly processing the TDOA measurements.
33

Gismalla, Yousif Ebtihal. "Performance analysis of spectrum sensing techniques for cognitive radio systems". Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/performance-analysis-of-spectrum-sensing-techniques-for-cognitive-radio-systems(157fe1af-717c-4705-a649-d809766cf5cb).html.

Full text available
Abstract:
Cognitive radio is a technology that aims to maximize the current usage of the licensed frequency spectrum. Cognitive radio aims to provide services for license-exempt users by making use of dynamic spectrum access (DSA) and opportunistic spectrum sharing strategies (OSS). Cognitive radios are defined as intelligent wireless devices capable of adapting their communication parameters in order to operate within underutilized bands while avoiding causing interference to licensed users. An underused band of frequencies in a specific location or time is known as a spectrum hole. Therefore, in order to locate spectrum holes, reliable spectrum sensing algorithms are crucial to facilitate the evolution of cognitive radio networks. Since a large and growing body of literature has mainly focused on the conventional time domain (TD) energy detector, throughout this thesis the problem of spectrum sensing is investigated within the context of a frequency domain (FD) approach. The purpose of this study is to investigate detection based on methods of nonparametric power spectrum estimation. The considered methods are the periodogram, Bartlett's method, Welch overlapped segments averaging (WOSA) and the Multitaper estimator (MTE). Another major motivation is that the MTE is strongly recommended for the application of cognitive radios. This study aims to derive the detector performance measures for each case. Another aim is to investigate and highlight the main differences between the TD and the FD approaches. The performance is addressed for independent and identically distributed (i.i.d.) Rayleigh channels and the general Rician and Nakagami fading channels. For each of the investigated detectors, the analytical models are obtained by studying the characteristics of the Hermitian quadratic form representation of the decision statistic, and the matrix of the Hermitian form is identified. The results of the study have revealed the high accuracy of the derived mathematical models. Moreover, it is found that the TD detector differs from the FD detector in a number of aspects. One principal and generalized conclusion is that all the investigated FD methods provide a reduced probability of false alarm when compared with the TD detector. Also, for the case of the periodogram, the probability of sensing errors is independent of the length of observations, whereas in the time domain the probability of false alarm increases when the sample size increases. The probability of false alarm is further reduced when diversity reception is employed. Furthermore, compared to the periodogram, both Bartlett's method and Welch's method provide better performance in terms of lower probability of false alarm but an increased probability of detection for a given probability of false alarm. Also, the performance of both Bartlett's method and WOSA is sensitive to the number of segments, whereas WOSA is also sensitive to the overlapping factor. Finally, the performance of the MTE depends on the number of employed discrete prolate spheroidal (Slepian) sequences, and the MTE outperforms the periodogram, Bartlett's method and WOSA, as it provides the minimal probability of false alarm.
Style APA, Harvard, Vancouver, ISO itp.
34

Wiklander, Fanny, i Emma Roos. "I vilken utsträckning används ekonomistyrning? : en studie av fyra företag". Thesis, University of Gävle, Department of Business Administration and Economics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-4772.

Pełny tekst źródła
Streszczenie:
ABSTRACT
Title: To what extent is management control used in practice? - A study of four companies
Level: Bachelor's thesis (C level) in Business Administration (15 credits)
Authors: Emma Roos and Fanny Wiklander
Supervisor: Mats Ryding
Date: May 2009
Aim: The aim of this thesis is to gain an insight into how real companies work with management control, in light of our literature study and what we have learned during our education. We have interviewed four companies in different industries to find out to what extent they make use of financial accounting. We have also examined whether these companies use ERP systems and what role these play, and whether they have been affected by the financial crisis and have therefore become more careful and cautious when making financial decisions.
Method: We have used a qualitative method in this thesis. The qualitative method involves physical closeness to the research object being studied, since the respondent should preferably be met face to face. This suited us well, as we conducted personal interviews with all respondents. We have also used a case study. This research method means examining "a small part of a large course of events, using the case to describe reality and letting the case in question represent reality". However, we have been careful in our analysis and have not drawn overly general and broad conclusions, since we base our study on only four companies.
Results & conclusions: It has been interesting to see how real companies work with management control. The majority of the companies use financial management tools to a relatively large extent, even though the kinds of tools that are prioritized vary. Above all, we see that ERP systems are an important aid in the companies. Finess is the exception in this study: it makes very little use of internal accounting and has no financial system, yet still runs a well-functioning business.
Suggestions for further research: We think it would be interesting to start from a similar topic but instead compare companies within the same industry. In that way, more general conclusions could be drawn about that particular industry, and one could possibly also see whether certain management tools are more effective than others. Another option is to also look at which organizational and marketing-related management tools companies use, in order to get a proper overview of how companies work.
Contribution of the thesis: The thesis has contributed to an increased understanding of how actual companies use financial tools to manage their operations.
Keywords: Management control, internal accounting, budgeting, costing, key ratios

Style APA, Harvard, Vancouver, ISO itp.
35

Curuk, Selva Muratoglu. "Highly Efficient New Methods Of Channel Estimation For Ofdm Systems". Phd thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12609290/index.pdf.

Pełny tekst źródła
Streszczenie:
In the first part, the topic of average channel capacity for Orthogonal Frequency Division Multiplexing (OFDM) under Rayleigh, Rician, Nakagami-m, Hoyt, Weibull and Lognormal fading is addressed. With the assumption that channel state information is known, we deal with a lower bound for the capacity and find closed computable forms for Rician fading without diversity and with Maximum Ratio Combining diversity at the receiver. Approximate expressions are also provided for the capacity lower bound in the case of high Signal to Noise Ratio. This thesis presents two simplified Maximum A Posteriori (MAP) channel estimators to be used in OFDM systems under frequency-selective, slowly varying Rayleigh fading. Both estimators use parametric models, where the first model assumes exponential frequency domain correlation while the second model is based on the assumption of an exponential power delay profile. Expressions for the mean square error of the estimates are derived and the relation between the correlation of subchannel taps and the error variance is investigated. Dependencies of the proposed estimators' performances on the model parameter and noise variance estimation errors are analyzed. We also provide approximations to the estimators' algorithms in order to make the estimators practical. Finally, we investigate the SER performance of the simplified MAP estimator based on the exponential power delay profile assumption, used for OFDM systems with QPSK modulation. The results indicate that the proposed estimator performance is always better than that of the ML estimator, and as the subchannel correlation increases the performance comes closer to that of the perfectly estimated channel case.
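For readers unfamiliar with the idea, here is a minimal sketch assuming Gaussian statistics (under which the MAP estimate coincides with the LMMSE estimate) and an exponential frequency-domain correlation model; the pilot layout and all parameter values are illustrative, not the thesis' setup.

import numpy as np

rng = np.random.default_rng(7)
K, rho, noise_var = 64, 0.95, 0.1          # subcarriers, correlation, noise variance

# Correlated Rayleigh-fading channel across subcarriers (AR(1) recursion, unit variance).
h = np.zeros(K, dtype=complex)
h[0] = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
for k in range(1, K):
    h[k] = rho * h[k - 1] + np.sqrt(1 - rho**2) * (rng.normal() + 1j * rng.normal()) / np.sqrt(2)

x = np.ones(K)                              # known unit-modulus pilot symbols
y = x * h + np.sqrt(noise_var / 2) * (rng.normal(size=K) + 1j * rng.normal(size=K))
h_ls = y / x                                # per-subcarrier least-squares estimate

R = rho ** np.abs(np.subtract.outer(np.arange(K), np.arange(K)))   # exponential correlation model
h_map = R @ np.linalg.solve(R + noise_var * np.eye(K), h_ls)       # LMMSE (= MAP under Gaussianity)

print(np.mean(np.abs(h_ls - h)**2), np.mean(np.abs(h_map - h)**2))  # smoothing lowers the MSE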
Style APA, Harvard, Vancouver, ISO itp.
36

Ribeiro, Antonio Marcelo Oliveira 1970. "Contribuições à caracterização estatística do canal de rádio móvel e estimação de parâmetros por máxima verossimilhança". [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260749.

Pełny tekst źródła
Streszczenie:
Orientador: Evandro Conforti
Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Made available in DSpace on 2018-08-23T23:28:57Z (GMT). No. of bitstreams: 1 Ribeiro_AntonioMarceloOliveira_D.pdf: 6175407 bytes, checksum: 03c529ed2452256d1369e23e047687ec (MD5) Previous issue date: 2013
Resumo: Os efeitos provocados pelo ambiente de propagação sobre o sinal transmitido, assim como as condições impostas pela mobilidade do receptor, afetam diretamente a qualidade de serviço em sistemas de comunicação sem fios. Portanto, é necessário compreender e analisar os efeitos de degradação que o canal terá sobre um dado sistema de comunicação de dados e, dessa forma, avaliar a necessidade de medidas para mitigar os eventuais efeitos prejudiciais do canal. Neste trabalho, apresenta-se uma caracterização estatística do canal de rádio móvel, a partir de medições em campo nas bandas de 1800, 2500 e 3500 MHz, através de uma técnica simples de aquisição da envoltória do sinal. Em particular, são calculadas, para a envoltória, funções de distribuição de probabilidade, taxas de cruzamentos, duração de desvanecimento e sua distribuição, funções de correlação espacial e em frequência, tempo de coerência e largura de banda de coerência. Realiza-se, igualmente, uma análise comparativa destes resultados com os seguintes modelos estatísticos: Rayleigh, Nakagami, Rice, Weibull, Hoyt (Nakagami-q) e ?-?. Além disso, é dada ênfase à estimação de parâmetros dos modelos de canal de rádio, através de dois métodos: momentos (MoM) e máxima verossimilhança (ML). Neste contexto, obtém-se expressões para a variância e o intervalo de confiança, assintóticos, de estimadores ML, baseadas na informação de Fisher que uma amostra aleatória contém a respeito do parâmetro a ser estimado. De forma geral, foi observado um bom ajuste entre as medidas em campo e correspondentes curvas teóricas, para estatísticas de primeira e segunda ordem da envoltória. As medições em campo deste trabalho mostraram que os estimadores ML agruparam mais as curvas teóricas, em torno da curva experimental, quando comparados aos estimadores MoM. Adicionalmente, a matriz de covariância dos estimadores ML para ? e ?, obtida a partir das medições em campo, mostrou que a variância do estimador de ? é, pelo menos, dez vezes maior que aquela do estimador de ?. Igualmente, valores medidos de correlação espacial apresentaram bom ajuste aos modelos teóricos, em termos de uma tendência geral de variação. Em particular, curvas de distribuição cumulativa do tempo de coerência, , para medidas em campo em 3500MHz, mostraram que é maior que 1,7 ms, para 90% do tempo, quando o receptor se move a 30 km/h. Por fim, medidas em campo da largura de banda de coerência, em 1800MHz, revelaram que um valor de ?f < 60 kHz irá garantir um nível de correlação da envoltória maior que 0,9, para 90% do tempo
Abstract: The propagation environment effects on the transmitted signal as well as the conditions imposed by the receiver mobility directly affect the quality of service (QoS) in wireless communication systems. Therefore, it is necessary to understand and analyze the degradation effects inflicted by the channel on a given data communication system, in order to evaluate the measures needed to mitigate these deleterious effects. In this thesis, we present a statistical characterization of the mobile radio channel based on field measurements performed over the 1800, 2500, and 3500 MHz bands, using a simple technique for acquiring the signal envelope. In particular, envelope statistics for probability distribution functions were calculated, as well as the crossing rates, duration of fading and its distribution, spatial and frequency correlation functions, coherence time, and coherence bandwidth. A comparative analysis of these results was also carried out against the following statistical models: Rayleigh, Nakagami, Rice, Weibull, Hoyt (Nakagami-q), and ?-?. Also, emphasis is given to the parameter estimation of radio channel models using two methods: moments (MoM) and maximum likelihood (ML). In this context, expressions for the asymptotic variance and confidence interval of ML estimators were obtained, based on the Fisher information a random sample contains about the parameter to be estimated. In general, there was a good fit between the field measurements and the corresponding theoretical curves for envelope statistics of first and second order. The field measurements of this work have shown that the ML estimators grouped the theoretical curves more closely around the experimental one, when compared to the MoM estimators. Additionally, the covariance matrix of the ML estimators for ? and ?, obtained from the field measurements, showed that the variance of the ? estimator is at least ten times greater than that of the ? estimator. Moreover, measured values of spatial correlation showed a good fit to the theoretical models, in terms of a general tendency of variation. In particular, cumulative distribution curves of the coherence time, for field measurements at 3500 MHz, showed that it is greater than 1.7 ms for 90% of the time when the receiver is moving at 30 km/h. Finally, 1800-MHz field measurements of the coherence bandwidth revealed that a value of Δf < 60 kHz will ensure a level of envelope correlation greater than 0.9 for 90% of the time.
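A small hedged illustration of the ML-plus-Fisher-information recipe referred to above, applied to the simplest envelope model (Rayleigh, parametrized by theta = sigma^2); the data are simulated rather than taken from the field measurements.

import numpy as np

rng = np.random.default_rng(5)
n, theta_true = 2000, 4.0
r = rng.rayleigh(scale=np.sqrt(theta_true), size=n)   # envelope samples

theta_hat = np.sum(r**2) / (2 * n)                    # ML estimator of theta = sigma^2
se = theta_hat / np.sqrt(n)                           # asymptotic s.e.: Fisher information is 1/theta^2 per sample
ci = (theta_hat - 1.96 * se, theta_hat + 1.96 * se)   # asymptotic 95% confidence interval
print(theta_hat, ci)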
Doutorado
Telecomunicações e Telemática
Doutor em Engenharia Elétrica
Style APA, Harvard, Vancouver, ISO itp.
37

Florez, Guillermo Domingo Martinez. "Extensões do modelo α-potência". Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-07072011-154259/.

Pełny tekst źródła
Streszczenie:
Em analise de dados que apresentam certo grau de assimetria a suposicao que as observações seguem uma distribuição normal, pode resultar ser uma suposição irreal e a aplicação deste modelo pode ocultar características importantes do modelo verdadeiro. Este tipo de situação deu forca á aplicação de modelo assimétricos, destacando-se entre estes a família de distribuições skew-symmetric, desenvolvida por Azzalini (1985). Neste trabalho nos apresentamos uma segunda proposta para a anàlise de dados com presença importante de assimetria e/ou curtose, comparado com a distribuição normal. Nós apresentamos e estudamos algumas propriedades dos modelos alfa-potência e log-alfa-potência, onde também estudamos o problema de estimação, as matrizes de informação observada e esperada de Fisher e o grau do viés dos estimadores mediante alguns processos de simulação. Nós introduzimos um modelo mais estável que o modelo alfa- potência do qual derivamos o caso bimodal desta distribuição e introduzimos os modelos bimodal simêtrico e assimêtrico alfa-potencia. Posteriormente nós estendemos a distribuição alfa-potência para o caso do modelo Birnbaum-Saunders, estudamos as propriedades deste novo modelo, desenvolvemos estimadores para os parametros e propomos estimadores com viés corrigido. Também introduzimos o modelo de regressão alfa-potência para dados censurados e não censurados e para o modelo de regressão log-linear Birnbaum-Saunders; aqui nós derivamos os estimadores dos parâmetros e estudamos algumas técnicas de validação dos modelos. Por ultimo nós fazemos a extensão multivariada do modelo alfa-potência e estudamos alguns processos de estimação dos parâmetros. Para todos os casos estudados apresentam-se ilustrações com dados já analisados previamente com outras suposições de distribuições.
In data analysis where the data present a certain degree of asymmetry, the assumption of normality can result in an unreal situation and the application of this model can hide important characteristics of the true model. Situations of this type have given strength to the use of asymmetric models, with special emphasis on the skew-symmetric distribution developed by Azzalini (1985). In this work we present an alternative for data analysis in the presence of significant asymmetry or kurtosis, when compared with the normal distribution, as well as other situations that involve such a model. We present and study the properties of the α-power and log-α-power distributions, where we also study the estimation problem, the observed and expected information matrices and the degree of bias in estimation using simulation procedures. A flexible model version is proposed for the α-power distribution, followed by an extension to a bimodal version. Next follows an extension of the Birnbaum-Saunders distribution using the α-power distribution, where some properties are studied, estimation approaches are developed and a bias-corrected estimator is proposed. We also develop censored and uncensored regression for the α-power model and for the log-linear Birnbaum-Saunders regression models, for which model validation techniques are studied. Finally, a multivariate extension of the α-power model is proposed and some estimation procedures are investigated for the model. All the situations investigated are illustrated with applications to data sets previously analysed with other distributions.
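A brief sketch under the assumption that the α-power model has distribution function F(z) = Phi(z)^alpha, with Phi the standard normal CDF and the location and scale treated as known; under that assumption the ML estimator of alpha has a closed form. Data are simulated.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
alpha_true, n = 2.5, 5000

# Simulate via the probability integral transform: Z = Phi^{-1}(U^{1/alpha}).
u = rng.uniform(size=n)
z = norm.ppf(u ** (1.0 / alpha_true))

# With location and scale fixed, setting d/dalpha of the log-likelihood
# n*log(alpha) + (alpha - 1)*sum(log Phi(z)) to zero gives alpha_hat = -n / sum(log Phi(z)).
alpha_hat = -n / np.sum(norm.logcdf(z))
print(alpha_hat)   # close to 2.5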
Style APA, Harvard, Vancouver, ISO itp.
38

Top, Alioune. "Estimation paramétriques et tests d'hypothèses pour des modèles avec plusieurs ruptures d'un processus de poisson". Thesis, Le Mans, 2016. http://www.theses.fr/2016LEMA1014/document.

Pełny tekst źródła
Streszczenie:
Ce travail est consacré aux problèmes d’estimation paramétriques, aux tests d’hypothèses et aux tests d’ajustement pour les processus de Poisson non homogènes.Tout d’abord on a étudié deux modèles ayant chacun deux sauts localisés par un paramètre inconnu. Pour le premier modèle la somme des sauts est positive. Tandis que le second a un changement de régime et constant par morceaux. La somme de ses deux sauts est nulle. Ainsi pour chacun de ces modèles nous avons étudié les propriétés asymptotiques de l’estimateur bayésien (EB) et celui du maximum de vraisemblance(EMV). Nous avons montré la consistance, la convergence en distribution et la convergence des moments. En particulier l’estimateur bayésien est asymptotiquement efficace. Pour le second modèle nous avons aussi considéré le test d’une hypothèse simple contre une alternative unilatérale et nous avons décrit les propriétés asymptotiques (choix du seuil et puissance ) du test de Wald (WT)et du test du rapport de vraisemblance généralisé (GRLT).Les démonstrations sont basées sur la méthode d’Ibragimov et Khasminskii. Cette dernière repose sur la convergence faible du rapport de vraisemblance normalisé dans l’espace de Skorohod sous certains critères de tension des familles demesure correspondantes.Par des simulations numériques, les variances limites nous ont permis de conclure que l’EB est meilleur que celui du EMV. Lorsque la somme des sauts est nulle, nous avons développé une approche numérique pour le EMV.Ensuite on a considéré le problème de construction d’un test d’ajustement pour un modèle avec un paramètre d’échelle. On a montré que dans ce cas, le test de Cramer-von Mises est asymptotiquement ”parameter-free” et est consistent
This work is devoted to the parametric estimation, hypothesis testing and goodness-of-fit test problems for non-homogeneous Poisson processes. First we consider two models, each having two jumps located by an unknown parameter. For the first model the sum of the jumps is positive. The second is a model of switching intensity, piecewise constant, in which the sum of the jumps is zero. Thus, for each model, we studied the asymptotic properties of the Bayesian estimator (BE) and the maximum likelihood estimator (MLE). The consistency, the convergence in distribution and the convergence of moments are shown. In particular we show that the BE is asymptotically efficient. For the second model we also consider the problem of testing a simple hypothesis against a one-sided alternative. The asymptotic properties (choice of the threshold and power) of the Wald test (WT) and the generalized likelihood ratio test (GRLT) are described. For the proofs we use the method of Ibragimov and Khasminskii. This method is based on the weak convergence of the normalized likelihood ratio in the Skorohod space under some tightness criterion of the corresponding families of measures. By numerical simulations, the limiting variances of the estimators allow us to conclude that the BE outperforms the MLE. In the situation where the sum of the jumps is zero, we developed a numerical approach to obtain the MLE. Then we consider the problem of constructing a goodness-of-fit test for a model with a scale parameter. We show that the Cramér-von Mises type test is asymptotically parameter-free. It is also consistent.
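A simplified single-jump sketch (the thesis treats two jumps and a richer asymptotic theory): simulate a Poisson process whose intensity switches from lam0 to lam1 at an unknown time theta, with the intensity levels treated as known, and locate the MLE of theta by a grid search over the log-likelihood; all numbers are illustrative.

import numpy as np

rng = np.random.default_rng(9)
T, theta_true, lam0, lam1 = 10.0, 6.0, 2.0, 8.0

# Simulate the event times: Poisson counts before and after the switch, uniform given the counts.
n0 = rng.poisson(lam0 * theta_true)
n1 = rng.poisson(lam1 * (T - theta_true))
events = np.sort(np.concatenate([rng.uniform(0, theta_true, n0),
                                 rng.uniform(theta_true, T, n1)]))

def loglik(theta):
    n_before = np.searchsorted(events, theta)
    n_after = events.size - n_before
    return (n_before * np.log(lam0) + n_after * np.log(lam1)
            - lam0 * theta - lam1 * (T - theta))

grid = np.linspace(0.05, T - 0.05, 2000)
theta_hat = grid[np.argmax([loglik(th) for th in grid])]
print(theta_hat)   # close to 6.0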
Style APA, Harvard, Vancouver, ISO itp.
39

Sun, Xusheng. "Optimal distributed detection and estimation in static and mobile wireless sensor networks". Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44825.

Pełny tekst źródła
Streszczenie:
This dissertation develops optimal algorithms for distributed detection and estimation in static and mobile sensor networks. In distributed detection or estimation scenarios in clustered wireless sensor networks, sensor motes observe their local environment, make decisions or quantize these observations into local estimates of finite length, and send/relay them to a Cluster-Head (CH). For event detection tasks that are subject to both measurement errors and communication errors, we develop an algorithm that combines a Maximum a Posteriori (MAP) approach for local and global decisions with low-complexity channel codes and processing algorithms. For event estimation tasks that are subject to measurement errors, quantization errors and communication errors, we develop an algorithm that uses dithered quantization and channel compensation to ensure that each mote's local estimate received by the CH is unbiased, and then lets the CH fuse these estimates into a global one using a Best Linear Unbiased Estimator (BLUE). We then determine the minimum energy required for the network to produce an estimate with a prescribed error variance and show how this energy must be allocated amongst the motes in the network. In mobile wireless sensor networks, the mobility model governing each node will affect the detection accuracy at the CH and the energy consumption needed to achieve this level of accuracy. Correlated Random Walks (CRWs) have been proposed as mobility models that account for time dependency, geographical restrictions and nonzero drift. Hence, the solution to the continuous-time, 1-D, finite state space CRW is provided and its statistical behavior is studied both analytically and numerically. The impact of sensor motion on the network's performance is also studied.
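A minimal sketch of the BLUE fusion step described above, under the assumption that the cluster head receives unbiased local estimates with known error variances; the numbers are made up.

import numpy as np

rng = np.random.default_rng(2)
theta = 5.0                                   # quantity observed by all motes
var = np.array([0.5, 1.0, 2.0, 4.0])          # per-mote error variances (assumed known)
local = theta + rng.normal(0.0, np.sqrt(var)) # unbiased local estimates received by the CH

w = 1.0 / var                                 # inverse-variance weights
blue = np.sum(w * local) / np.sum(w)          # BLUE of theta
blue_var = 1.0 / np.sum(w)                    # variance of the fused estimate
print(blue, blue_var)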
Style APA, Harvard, Vancouver, ISO itp.
40

Grafström, Anton. "On unequal probability sampling designs". Doctoral thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-33701.

Pełny tekst źródła
Streszczenie:
The main objective in sampling is to select a sample from a population in order to estimate some unknown population parameter, usually a total or a mean of some interesting variable. When the units in the population do not have the same probability of being included in a sample, it is called unequal probability sampling. The inclusion probabilities are usually chosen to be proportional to some auxiliary variable that is known for all units in the population. When unequal probability sampling is applicable, it generally gives much better estimates than sampling with equal probabilities. This thesis consists of six papers that treat unequal probability sampling from a finite population of units. A random sample is selected according to some specified random mechanism called the sampling design. For unequal probability sampling there exist many different sampling designs. The choice of sampling design is important since it determines the properties of the estimator that is used. The main focus of this thesis is on evaluating and comparing different designs. Often it is preferable to select samples of a fixed size and hence the focus is on such designs. It is also important that a design has a simple and efficient implementation in order to be used in practice by statisticians. Some effort has been made to improve the implementation of some designs. In Paper II, two new implementations are presented for the Sampford design. In general a sampling design should also have a high level of randomization. A measure of the level of randomization is entropy. In Paper IV, eight designs are compared with respect to their entropy. A design called adjusted conditional Poisson has maximum entropy, but it is shown that several other designs are very close in terms of entropy. A specific situation called real time sampling is treated in Paper III, where a new design called correlated Poisson sampling is evaluated. In real time sampling the units pass the sampler one by one. Since each unit only passes once, the sampler must directly decide for each unit whether or not it should be sampled. The correlated Poisson design is shown to have much better properties than traditional methods such as Poisson sampling and systematic sampling.
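As a hedged illustration of unequal probability sampling (not taken from any of the six papers), the sketch below draws a Poisson sample with inclusion probabilities proportional to an auxiliary size variable and computes the Horvitz-Thompson estimator of the total; the population, expected sample size and study variable are invented.

import numpy as np

rng = np.random.default_rng(3)
N, n_expected = 1000, 80
size = rng.lognormal(mean=0.0, sigma=0.7, size=N)     # auxiliary size measure, known for all units
pi = n_expected * size / size.sum()                    # inclusion probabilities proportional to size
pi = np.minimum(pi, 1.0)                               # guard against probabilities above one

y = 3.0 * size + rng.normal(0.0, 1.0, size=N)          # study variable, roughly proportional to size

sampled = rng.random(N) < pi                           # Poisson sampling: independent Bernoulli draws
ht_total = np.sum(y[sampled] / pi[sampled])            # Horvitz-Thompson estimator of the total
print(sampled.sum(), ht_total, y.sum())                # realized (random) sample size, estimate, true total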
Style APA, Harvard, Vancouver, ISO itp.
41

Sando, Simon Andrew. "Estimation of a class of nonlinear time series models". Thesis, Queensland University of Technology, 2004. https://eprints.qut.edu.au/15985/1/Simon_Sando_Thesis.pdf.

Pełny tekst źródła
Streszczenie:
The estimation and analysis of signals that have polynomial phase and constant or time-varying amplitudes with additive noise is considered in this dissertation. Much work has been undertaken on this problem over the last decade or so, and there are a number of estimation schemes available. The fundamental problem when trying to estimate the parameters of these types of signals is the nonlinear characteristics of the signal, which lead to computational difficulties when applying standard techniques such as maximum likelihood and least squares. When considering only the phase data, we also encounter the well known problem of the unobservability of the true noise phase curve. The methods that are currently most popular involve differencing in phase followed by regression, or nonlinear transformations. Although these methods perform quite well at high signal to noise ratios, their performance worsens at low signal to noise, and there may be significant bias. One of the biggest problems for efficient estimation of these models is that the majority of methods rely on sequential estimation of the phase coefficients, in that the highest-order parameter is estimated first, its contribution removed via demodulation, and the same procedure applied to estimation of the next parameter and so on. This is clearly an issue in that errors in estimation of high order parameters affect the ability to estimate the lower order parameters correctly. As a result, statistical analysis of the parameters is also difficult. In this dissertation, we aim to circumvent the issues of bias and sequential estimation by considering full parameter iterative refinement techniques, i.e. given a possibly biased initial estimate of the phase coefficients, we aim to create computationally efficient iterative refinement techniques to produce statistically efficient estimators at low signal to noise ratios. Updating will be done in a multivariable manner to remove inaccuracies and biases due to sequential procedures. Statistical analysis and extensive simulations attest to the performance of the schemes that are presented, which include likelihood, least squares and Bayesian estimation schemes. Other results of importance to the full estimation problem, namely when there is error in the time variable, the amplitude is not constant, and when the model order is not known, are also considered.
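A baseline sketch of the phase-regression approach the abstract cites as the popular method (not the thesis' iterative refinement scheme): fit a quadratic phase to a noisy constant-amplitude signal by unwrapping the phase and running least squares; a high signal-to-noise ratio is assumed so that unwrapping succeeds.

import numpy as np

rng = np.random.default_rng(8)
n = 512
t = np.arange(n)
a = (0.3, 0.01, 1e-4)                                  # true phase coefficients
phase = a[0] + a[1] * t + a[2] * t**2
s = np.exp(1j * phase)                                 # constant-amplitude polynomial-phase signal
x = s + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

phi = np.unwrap(np.angle(x))                           # observed (noisy) phase
A = np.vstack([np.ones(n), t, t**2]).T
coef, *_ = np.linalg.lstsq(A, phi, rcond=None)         # least-squares phase regression
print(coef)                                            # approximately (0.3, 0.01, 1e-4)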
Style APA, Harvard, Vancouver, ISO itp.
42

Sando, Simon Andrew. "Estimation of a class of nonlinear time series models". Queensland University of Technology, 2004. http://eprints.qut.edu.au/15985/.

Pełny tekst źródła
Streszczenie:
The estimation and analysis of signals that have polynomial phase and constant or time-varying amplitudes with additive noise is considered in this dissertation. Much work has been undertaken on this problem over the last decade or so, and there are a number of estimation schemes available. The fundamental problem when trying to estimate the parameters of these types of signals is the nonlinear characteristics of the signal, which lead to computational difficulties when applying standard techniques such as maximum likelihood and least squares. When considering only the phase data, we also encounter the well known problem of the unobservability of the true noise phase curve. The methods that are currently most popular involve differencing in phase followed by regression, or nonlinear transformations. Although these methods perform quite well at high signal to noise ratios, their performance worsens at low signal to noise, and there may be significant bias. One of the biggest problems for efficient estimation of these models is that the majority of methods rely on sequential estimation of the phase coefficients, in that the highest-order parameter is estimated first, its contribution removed via demodulation, and the same procedure applied to estimation of the next parameter and so on. This is clearly an issue in that errors in estimation of high order parameters affect the ability to estimate the lower order parameters correctly. As a result, statistical analysis of the parameters is also difficult. In this dissertation, we aim to circumvent the issues of bias and sequential estimation by considering full parameter iterative refinement techniques, i.e. given a possibly biased initial estimate of the phase coefficients, we aim to create computationally efficient iterative refinement techniques to produce statistically efficient estimators at low signal to noise ratios. Updating will be done in a multivariable manner to remove inaccuracies and biases due to sequential procedures. Statistical analysis and extensive simulations attest to the performance of the schemes that are presented, which include likelihood, least squares and Bayesian estimation schemes. Other results of importance to the full estimation problem, namely when there is error in the time variable, the amplitude is not constant, and when the model order is not known, are also considered.
Style APA, Harvard, Vancouver, ISO itp.
43

Wennerström, Carl-Ludvig, i Dennis Bäckdahl. "Att investera i toppen av en högkonjunktur : Ett fenomen i svensk börshistoria". Thesis, Linköping University, Department of Management and Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12382.

Pełny tekst źródła
Streszczenie:

Bakgrund: Åren 86-97 kännetecknas som en period med flera stora reformer och en svensk konjunktur som nådde sin botten med tre år i följd av negativ BNP-tillväxt. Påtagligt var även reaktionen från Stockholmsbörsen som i samband med lågkonjunkturen upplevde en kraftig nedgång. Vad drev då denna avkastningsutveckling, vinsterna eller värderingarna av dessa? Hur såg sambandet ut mellan konjunktur, bolagsvinster, vinstvärderingar och börsutveckling för perioden?

Syfte: Syftet med denna uppsats, på uppdrag av Melker Schörling AB, är att studera avkastningsutveckling, bolagsvinster och P/E-multiplar över en konjunkturcykel för att analysera till vilken grad multipelexpansion/kontraktion kontra vinsttillväxt drivit avkastningen för olika branscher på Stockholmsbörsen.

I ett andra skede utreds huruvida prognoser för P/E-tal och branschvinster på Stockholmsbörsen korrelerat med konjunkturen samt även hur EBITDA- och vinstmarginaler inverkat på aktievärderingar under tidsperioden. Utifrån studiens resultat kommer eventuella lärdomar kopplas till dagens konjunkturella situation.

Genomförande: Insamlat datamaterial i form av siffror och nyckeltal utgår från Affärsvärldens tidsskrifter och årsböcker med början 1986 och slut 1997. Utifrån dessa har, för studien, relevanta beräkningar dessutom gjorts.

Resultat: Studien av Stockholmsbörsen 86-97, där handelsbranschen genomled konjunkturnedgången bäst, visar inte på att konjunkturen spelar roll för avkastningsutvecklingen. Vinstprognoserna drev avkastningen under lågkonjunkturen medan vinstvärderingarna dämpade nedgången. Genomgående ökade vinstvärderingarna under lågkonjunkturen till följd av att vinstprognoserna föll mer än kursen. Studien visar på att dessa ökade vinstvärderingar innehöll överskattade vinstförväntningar. Innan börsnedgången befann sig P/E-talen på relativt låga nivåer och när utväxlingen i samband med lågkonjunkturens slut skedde var P/E-talen höga, vilket ifrågasätter huruvida P/E-talet egentligen är representativt under en lågkonjunktur samt dess förmåga att indikera på risk. Prognostiserat P/E-tal korrelerar väl med faktiskt P/E-tal men det faktiska fluktuerar i större grad. Marginalerna, som korrelerar negativt med vinstvärderingarna, uppvisar en laggningseffekt gentemot omsättningen.


Background: The years 86-97 are characterized as a period with many major reforms, during which the Swedish economy hit bottom with three consecutive years of negative GDP growth. The reaction from the Swedish stock market was substantial and Stockholmsbörsen went through a heavy bearish period. What was it that drove this stock return, the expected earnings or the valuation of them? What was the connection between the business cycle, earnings, valuations and stock return for this particular period?

Aim: The aim of the thesis, on behalf of Melker Schörling AB, is to study stock return, company earnings and price-earnings ratios during a business cycle in order to analyse to what extent multiple expansion/contraction versus earnings growth have driven stock return for the different branches on Stockholmsbörsen.

In a second stage we observe how estimates of branches' price-earnings ratios and earnings correlate with the business cycle and what impact EBITDA and pre-tax profit margins have on valuation during the period. Based on the results of the thesis, any lessons learned will be related to today's economic situation.

Completion: The data, consisting of figures and ratios, is collected from magazines and yearbooks of Affärsvärlden starting 1986 and ending 1997. With the help of these, relevant calculations have been made.

Result: This study of Stockholmsbörsen during the years 86-97, where the consumer-goods index performed best, shows that the business cycle has no impact on the stock return. The earnings estimates drove the stock return during the economic slump of 91-93, while the valuations tempered the fall. Through the economic slump the valuations became higher due to the fact that the earnings estimates fell more than the share prices. The study also shows that the increased valuations contained overestimated earnings expectations. Before the stock market fell, the price-earnings ratios were at relatively low levels, and when the bull period began at the end of the economic slump the ratios were high. This questions whether the price-earnings ratio is representative during an economic slump and whether the ratio indicates risk accurately. The forward PE correlates positively with the current PE, but the current PE is more volatile. Margins, which correlate negatively with valuations, show a lagging effect relative to sales growth.

Style APA, Harvard, Vancouver, ISO itp.
44

Jeřábková, Věra. "Možnosti měření efektivnosti systému financování a poskytování sociálních služeb". Doctoral thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-125219.

Pełny tekst źródła
Streszczenie:
A significant problem in the functioning of the public sector is ensuring the efficiency of public expenditures. Creating an efficiency model is a dynamic process that requires regular evaluation of the socioeconomic conditions. The efficiency of the system of financing and providing social services is still not measured in the Czech Republic. The area of social services is affected by a number of criteria and factors, so it is necessary to apply different views when evaluating the efficiency of social care, social prevention and social counseling. The efficiency of social services cannot be estimated as a whole; instead, we should focus on different groups of social services and on individual services within each group. In this thesis the efficiency is measured for two selected residential social care services -- homes for the elderly and homes for people with disabilities. The applied methods could be used only to a limited extent because many important variables are missing from official statistics. In the case of multiple-criteria methods the calculation was also hampered by the limited availability of suitable software.
Style APA, Harvard, Vancouver, ISO itp.
45

Jimenez, Guizar Arturo Mauricio. "Communications coopératives dans les réseaux autour du corps humain pour la capture du mouvement". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI091/document.

Pełny tekst źródła
Streszczenie:
Les réseaux corporels (WBAN) se réfère aux réseaux de capteurs (WSN) "portables" utilisés pour collecter des données personnelles, telles que la fréquence cardiaque ou l'activité humaine. Cette thèse a pour objectif de proposer des algorithmes coopératifs (PHY/MAC) pour effectuer des applications de localisation, tels que la capture de mouvement et la navigation de groupe. Pour cela, nous exploitons les avantages du WBAN avec différentes topologies et différents types de liens: on-body à l'échelle du corps, body-to-body entre les utilisateurs et off-body par rapport à l'infrastructure. La transmission repose sur une radio impulsionnelle (IR-UWB), afin d'obtenir des mesures de distance précises, basées sur l’estimation du temps d'arrivée (TOA). Ainsi, on s’intéresse au problème du positionnement à travers de la conception de stratégies coopératives et en considérant la mobilité du corps et les variations canal. Notre première contribution consiste en la création d'une base de données obtenue avec de scénarios réalistes pour la modélisation de la mobilité et du canal. Ensuite, nous introduisons un simulateur capable d'exploiter nos mesures pour la conception de protocoles. Grâce à ces outils, nous étudions d’abord l'impact de la mobilité et des variations de canal sur l'estimation de la distance avec le protocole "three way-ranging" (3-WR). Ainsi, nous quantifions et comparons l'erreur avec des modèles statistiques. Dans un second temps, nous analysons différentes algorithmes de gestion de ressources pour réduire l'impact de la mobilité sur l'estimation de position. Ensuite, nous proposons une optimisation avec un filtre de Kalman étendu (EKF) pour réduire l'erreur. Enfin, nous proposons un algorithme coopératif basé sur l'analyse d’estimateurs de qualité de lien (LQEs) pour améliorer la fiabilité. Pour cela, nous évaluons le taux de succès de positionnement en utilisant trois modèles de canaux (empirique, simulé et expérimental) avec un algorithme (basé sur la théorie des jeux) pour le choix des ancres virtuelles
Wireless Body Area Networks (WBAN) refer to the family of "wearable" wireless sensor networks (WSN) used to collect personal data, such as human activity, heart rate, sleep sequences or geographical position. This thesis aims at proposing cooperative algorithms and cross-layer mechanisms with WBAN to perform large-scale individual motion capture and coordinated group navigation applications. For this purpose, we exploit the advantages of jointly cooperative and heterogeneous WBAN under full/half-mesh topologies for localization purposes, from on-body links at the body scale, body-to-body links between mobile users of a group and off-body links with respect to the environment and the infrastructure. The wireless transmission relies on an impulse radio Ultra-Wideband (IR-UWB) radio (based on the IEEE 802.15.6 standard), in order to obtain accurate peer-to-peer ranging measurements based on Time of Arrival (ToA) estimates. Thus, we address the problem of positioning and ranging estimation through the design of cross-layer strategies by considering realistic body mobility and channel variations. Our first contribution consists in the creation of an unprecedented WBAN measurement database obtained with real experimental scenarios for mobility and channel modelling. Then, we introduce a discrete-event (WSNet) and deterministic (PyLayers) co-simulator tool able to exploit our measurement database to help us in the design and validation of cooperative algorithms. Using these tools, we investigate the impact of node mobility and channel variations on the ranging estimation. In particular, we study the "three-way ranging" (3-WR) protocol and we observe that the delays of 3-WR packets have an impact on the estimated distances as a function of the speed of the nodes. Then, we quantify and compare the error with statistical models and we show that the error generated by the channel is larger than the mobility error. We then extend our study to position estimation. Thus, we analyze different strategies at the MAC layer, through scheduling and slot allocation algorithms, to reduce the impact of mobility. Then, we propose to optimize our positioning algorithm with an extended Kalman filter (EKF), by using our scheduling strategies and the statistical models of mobility and channel errors. Finally, we propose a distributed-cooperative algorithm based on the analysis of long-term and short-term link quality estimators (LQEs) to improve the reliability of positioning. To do so, we evaluate the positioning success rate under three different channel models (empirical, simulated and experimental) along with a conditional algorithm (based on game theory) for virtual anchor choice. We show that our algorithm improves the number of estimated positions for the nodes with the worst localization performance.
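Not the 3-WR protocol itself, but a minimal two-way-ranging sketch showing how a time of flight, and hence a distance, is recovered from packet timestamps; the timestamps and turnaround time below are synthetic.

C = 299_792_458.0                     # speed of light, m/s

def twr_distance(t1, t2, t3, t4):
    """t1: A sends, t2: B receives, t3: B replies, t4: A receives the reply."""
    tof = ((t4 - t1) - (t3 - t2)) / 2.0   # clock-offset-free time of flight
    return C * tof

# Example: a true distance of 3 m gives a one-way flight time of about 10 ns.
tof_true = 3.0 / C
t1 = 0.0
t2 = t1 + tof_true
t3 = t2 + 200e-6                      # B's reply turnaround time
t4 = t3 + tof_true
print(twr_distance(t1, t2, t3, t4))   # approximately 3.0 m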
Style APA, Harvard, Vancouver, ISO itp.
46

Gomes, André Yoshizumi. "Família Weibull de razão de chances na presença de covariáveis". Universidade Federal de São Carlos, 2009. https://repositorio.ufscar.br/handle/ufscar/4558.

Pełny tekst źródła
Streszczenie:
Made available in DSpace on 2016-06-02T20:06:06Z (GMT). No. of bitstreams: 1 4331.pdf: 1908865 bytes, checksum: d564b46a6111fdca6f7cc9f4d5596637 (MD5) Previous issue date: 2009-03-18
Universidade Federal de Minas Gerais
The Weibull distribution is a common initial choice for modeling data with monotone hazard rates. However, this distribution fails to provide a reasonable parametric fit when the hazard function is unimodal or bathtub-shaped. In this context, Cooray (2006) proposed a generalization of the Weibull family by considering the distributions of the odds of the Weibull and inverse Weibull families, referred to as the odd Weibull family, which is not only useful for modeling unimodal and bathtub-shaped hazards, but also convenient for testing the goodness-of-fit of the Weibull and inverse Weibull as submodels. In this project we have systematically studied the odd Weibull family along with its properties, showing motivations for its utilization, inserting covariates in the model, pointing out some difficulties associated with maximum likelihood estimation and proposing interval estimation and hypothesis test construction methodologies for the model parameters. We have also compared resampling results with asymptotic ones. The coverage probability of the proposed confidence intervals and the size and power of the considered hypothesis tests were analyzed via Monte Carlo simulation as well. Furthermore, we have proposed a Bayesian estimation methodology for the model parameters based on Markov chain Monte Carlo (MCMC) simulation techniques.
A distribuição Weibull é uma escolha inicial freqüente para modelagem de dados com taxas de risco monótonas. Entretanto, esta distribuição não fornece um ajuste paramétrico razoável quando as funções de risco assumem um formato unimodal ou em forma de banheira. Neste contexto, Cooray (2006) propôs uma generalização da família Weibull considerando a distribuição da razão de chances das famílias Weibull e Weibull inversa, referida como família Weibull de razão de chances. Esta família não é apenas conveniente para modelar taxas de risco unimodal e banheira, mas também é adequada para testar a adequabilidade do ajuste das famílias Weibull e Weibull inversa como submodelos. Neste trabalho, estudamos sistematicamente a família Weibull de razão de chances e suas propriedades, apontando as motivações para o seu uso, inserindo covariáveis no modelo, veri_cando as di_culdades referentes ao problema da estimação de máxima verossimilhança dos parâmetros do modelo e propondo metodologia de estimação intervalar e construção de testes de hipóteses para os parâmetros do modelo. Comparamos os resultados obtidos por meio dos métodos de reamostragem com os resultados obtidos via teoria assintótica. Tanto a probabilidade de cobertura dos intervalos de con_ança propostos quanto o tamanho e poder dos testes de hipóteses considerados foram estudados via simulação de Monte Carlo. Além disso, propusemos uma metodologia Bayesiana de estimação para os parâmetros do modelo baseados em técnicas de simulação de Monte Carlo via Cadeias de Markov.
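A sketch under an assumed parametrization of the odd Weibull CDF, F(x) = 1 - [1 + (exp((x/s)^b) - 1)^g]^(-1) for x > 0, which reduces to the ordinary Weibull when g = 1 (one of the submodel checks mentioned above); the parameter names s, b, g are ours, not the thesis'.

import numpy as np

def odd_weibull_cdf(x, s, b, g):
    # Assumed odd Weibull CDF; g = 1 recovers the Weibull CDF 1 - exp(-(x/s)^b).
    return 1.0 - 1.0 / (1.0 + np.expm1((x / s) ** b) ** g)

x = np.linspace(0.1, 5.0, 5)
weibull_cdf = 1.0 - np.exp(-(x / 1.5) ** 2.0)
print(np.allclose(odd_weibull_cdf(x, 1.5, 2.0, 1.0), weibull_cdf))   # True: Weibull submodel check
print(odd_weibull_cdf(x, 1.5, 2.0, 0.5))                             # a non-Weibull member of the family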
Style APA, Harvard, Vancouver, ISO itp.
47

Vestin, Albin, i Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms". Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Pełny tekst źródła
Streszczenie:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need for extra sensors. We investigate how non-causal algorithms affect the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single object scenarios where ground truth is available and in three multi object scenarios without ground truth. Results from the two single object scenarios show that tracking using only a monocular camera performs poorly since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
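A minimal sketch of the filtering-versus-smoothing idea on a toy problem: a 1-D constant-velocity Kalman filter followed by a Rauch-Tung-Striebel smoother run offline over the recorded sequence; the motion model, noise levels and measurements are invented and much simpler than the camera/LIDAR setup studied.

import numpy as np

rng = np.random.default_rng(4)
dt, T = 0.1, 100
F = np.array([[1.0, dt], [0.0, 1.0]])                       # constant-velocity transition
H = np.array([[1.0, 0.0]])                                  # position-only measurement
Q = 0.01 * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
R = np.array([[0.25]])

# Simulate a target and noisy position measurements.
x_true = np.zeros((T, 2)); x_true[0] = [0.0, 1.0]
for k in range(1, T):
    x_true[k] = F @ x_true[k - 1] + rng.multivariate_normal(np.zeros(2), Q)
z = x_true[:, 0] + rng.normal(0.0, 0.5, size=T)

# Forward (causal) Kalman filter; store predictions for the smoother.
xf = np.zeros((T, 2)); Pf = np.zeros((T, 2, 2))
xp = np.zeros((T, 2)); Pp = np.zeros((T, 2, 2))
x, P = np.array([0.0, 0.0]), np.eye(2)
for k in range(T):
    x, P = F @ x, F @ P @ F.T + Q                           # predict
    xp[k], Pp[k] = x, P
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z[k] - H @ x)).ravel()                    # update
    P = (np.eye(2) - K @ H) @ P
    xf[k], Pf[k] = x, P

# Backward RTS smoother (non-causal, uses the whole recorded sequence).
xs, Ps = xf.copy(), Pf.copy()
for k in range(T - 2, -1, -1):
    G = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
    xs[k] = xf[k] + G @ (xs[k + 1] - xp[k + 1])
    Ps[k] = Pf[k] + G @ (Ps[k + 1] - Pp[k + 1]) @ G.T

print(np.mean((xf[:, 0] - x_true[:, 0])**2),                # filtered position MSE
      np.mean((xs[:, 0] - x_true[:, 0])**2))                # smoothed position MSE (smaller)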
Style APA, Harvard, Vancouver, ISO itp.
48

HUANG, YAU-YI, i 黃耀億. "A Comparison of Multivariate Ratio Estimators". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/19315467521060165397.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Taipei University
Department of Statistics
95
This paper aims to compare multivariate ratio estimators using a Monte Carlo approach. The multivariate ratio estimators explored in this paper are derived from univariate ratio estimators summarized from previous studies. Apart from the traditional and the Hartley & Ross multivariate ratio estimators proposed by Olkin, no other univariate ratio estimators have been extended to the multivariate setting. Therefore, following Olkin's concept of extending a univariate ratio estimator to a multivariate one, the multivariate ratio estimators and their variances are derived from the corresponding univariate ratio estimators summarized from previous studies. Using a Monte Carlo approach, the efficiency of the proposed multivariate ratio estimators is then compared in terms of bias, variance, and MSE. The simulation results show that all the other ratio estimators have smaller bias than the traditional ratio estimator for estimating the population total, in both the univariate and multivariate settings. The simulation results also show that the bias is reduced as the sample size increases and that the variances of the ratio estimators are smaller than the variance of the mean-per-unit estimator for estimating the population total. This implies that we can reduce the variance of the estimator and increase estimation efficiency by increasing the sample size or the number of groups.
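A minimal Monte Carlo sketch in the spirit of the comparison described (univariate case only, with an invented population): the classical ratio estimator of a population total against the mean-per-unit estimator, compared on bias and MSE.

import numpy as np

rng = np.random.default_rng(0)
N, n, reps = 2000, 100, 5000

# Auxiliary variable x known for the whole population; y roughly proportional to x.
x = rng.gamma(shape=3.0, scale=10.0, size=N)
y = 2.5 * x + rng.normal(0.0, 5.0, size=N)
Y_total, X_total = y.sum(), x.sum()

est_ratio, est_mean = [], []
for _ in range(reps):
    idx = rng.choice(N, size=n, replace=False)               # simple random sample
    ys, xs = y[idx], x[idx]
    est_ratio.append(ys.mean() / xs.mean() * X_total)        # classical ratio estimator of the total
    est_mean.append(N * ys.mean())                           # mean-per-unit estimator

for name, est in [("ratio", np.array(est_ratio)), ("mean-per-unit", np.array(est_mean))]:
    bias = est.mean() - Y_total
    mse = np.mean((est - Y_total) ** 2)
    print(f"{name:14s} bias={bias:10.1f}  MSE={mse:14.1f}")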
Style APA, Harvard, Vancouver, ISO itp.
49

曾巧惠. "Comparison of Several Hazard Ratio Estimators with Interval Censored Data". Thesis, 1999. http://ndltd.ncl.edu.tw/handle/39908356288957578203.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
50

Chen, Yi-Chen, i 陳怡真. "Truncated Power Series Estimators for Odds, Odds Ratio, Relative Risk and Log Odds". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/76514546727019299371.

Pełny tekst źródła
Streszczenie:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Applied Mathematics
98
The analysis of binary data in longitudinal studies is an important statistical issue, and the odds, odds ratio, relative risk, and log odds are key summary measures for a binary variable. According to Lehmann (1983), no unbiased estimator of the inverse of a proportion exists, but estimators that are approximately unbiased can be found; we therefore use Conventional Point Estimators (CE) and Truncated Power Series Estimators (TPSE) to estimate the odds, odds ratio, relative risk, and log odds. We then use the statistical software R to simulate them, obtain the bias and MSE of the CE and TPSE for comparison, and judge the pros and cons of the two estimation methods by their MSE. Finally, we conclude that the TPSE of the odds, odds ratio, relative risk, and log odds are the more efficient estimators.
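An illustrative sketch only; the truncated power series estimator itself is not reproduced here. Instead, the plain plug-in odds estimator is compared with the common 0.5-adjusted version on bias and MSE, mirroring the kind of simulation described; n, p and the adjustment are assumptions.

import numpy as np

rng = np.random.default_rng(1)
n, p, reps = 30, 0.3, 200_000
true_odds = p / (1 - p)

y = rng.binomial(n, p, size=reps)
plug_in = np.where(y < n, y / np.maximum(n - y, 1), np.nan)      # undefined when y = n
adjusted = (y + 0.5) / (n - y + 0.5)                             # Haldane-type 0.5 adjustment

for name, est in [("plug-in", plug_in), ("adjusted", adjusted)]:
    bias = np.nanmean(est) - true_odds
    mse = np.nanmean((est - true_odds) ** 2)
    print(f"{name:9s} bias={bias:+.4f}  MSE={mse:.4f}")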
Style APA, Harvard, Vancouver, ISO itp.
