Theses / dissertations on the topic "Data of variable size"

Follow this link to see other types of publications on the topic: Data of variable size.

Create a precise reference in APA, MLA, Chicago, Harvard, and other styles

Consult the 50 best theses / dissertations for your research on the topic "Data of variable size".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, when it is available in the metadata.

Browse theses / dissertations on a wide variety of scientific topics and put together a correct bibliography.

1

Chen, Haiying. "Ranked set sampling for binary and ordered categorical variables with applications in health survey data". Connect to this title online, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1092770729.

Full text of the source
Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xiii, 109 p.; also includes graphics. Includes bibliographical references (p. 99-102). Available online via OhioLINK's ETD Center.
ABNT, Harvard, Vancouver, APA, etc. styles
2

Liv, Per. "Efficient strategies for collecting posture data using observation and direct measurement". Doctoral thesis, Umeå universitet, Yrkes- och miljömedicin, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-59132.

Full text of the source
Abstract:
Relationships between occupational physical exposures and risks of contracting musculoskeletal disorders are still not well understood; exposure-response relationships are scarce in the musculoskeletal epidemiology literature, and many epidemiological studies, including intervention studies, fail to reach conclusive results. Insufficient exposure assessment has been pointed out as a possible explanation for this deficiency. One important aspect of assessing exposure is the selected measurement strategy; this includes issues related to the necessary number of data required to give sufficient information, and to allocation of measurement efforts, both over time and between subjects in order to achieve precise and accurate exposure estimates. These issues have been discussed mainly in the occupational hygiene literature considering chemical exposures, while the corresponding literature on biomechanical exposure is sparse. The overall aim of the present thesis was to increase knowledge on the relationship between data collection design and the resulting precision and accuracy of biomechanical exposure assessments, represented in this thesis by upper arm postures during work, data which have been shown to be relevant to disorder risk. Four papers are included in the thesis. In papers I and II, non-parametric bootstrapping was used to investigate the statistical efficiency of different strategies for distributing upper arm elevation measurements between and within working days into different numbers of measurement periods of differing durations. Paper I compared the different measurement strategies with respect to the eventual precision of estimated mean exposure level. The results showed that it was more efficient to use a higher number of shorter measurement periods spread across a working day than to use a smaller number for longer uninterrupted measurement periods, in particular if the total sample covered only a small part of the working day. Paper II evaluated sampling strategies for the purpose of determining posture variance components with respect to the accuracy and precision of the eventual variance component estimators. The paper showed that variance component estimators may be both biased and imprecise when based on sampling from small parts of working days, and that errors were larger with continuous sampling periods. The results suggest that larger posture samples than are conventionally used in ergonomics research and practice may be needed to achieve trustworthy estimates of variance components. Papers III and IV focused on method development. Paper III examined procedures for estimating statistical power when testing for a group difference in postures assessed by observation. Power determination was based either on a traditional analytical power analysis or on parametric bootstrapping, both of which accounted for methodological variance introduced by the observers to the exposure data. The study showed that repeated observations of the same video recordings may be an efficient way of increasing the power in an observation-based study, and that observations can be distributed between several observers without loss in power, provided that all observers contribute data to both of the compared groups, and that the statistical analysis model acknowledges observer variability. 
Paper IV discussed calibration of an inferior exposure assessment method against a superior “golden standard” method, with a particular emphasis on calibration of observed posture data against postures determined by inclinometry. The paper developed equations for bias correction of results obtained using the inferior instrument through calibration, as well as for determining the additional uncertainty of the eventual exposure value introduced through calibration. In conclusion, the results of the present thesis emphasize the importance of carefully selecting a measurement strategy on the basis of statistically well informed decisions. It is common in the literature that postural exposure is assessed from one continuous measurement collected over only a small part of a working day. In paper I, this was shown to be highly inefficient compared to spreading out the corresponding sample time across the entire working day, and the inefficiency was also obvious when assessing variance components, as shown in paper II. The thesis also shows how a well thought-out strategy for observation-based exposure assessment can reduce the effects of measurement error, both for random methodological variance (paper III) and systematic observation errors (bias) (paper IV).
ABNT, Harvard, Vancouver, APA, etc. styles
3

Högberg, Hans. "Some properties of measures of disagreement and disorder in paired ordinal data". Doctoral thesis, Örebro universitet, Handelshögskolan vid Örebro universitet, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-12350.

Full text of the source
Abstract:
The measures studied in this thesis were a measure of disorder, D, and a measure of the individual part of the disagreement, the measure of relative rank variance, RV, proposed by Svensson in 1993. The measure of disorder is a useful measure of order consistency in paired assessments of scales with a different number of possible values. The measure of relative rank variance is a useful measure in evaluating reliability and for evaluating change in qualitative outcome variables. In Paper I an overview of methods used in the analysis of dependent ordinal data and a comparison of the methods regarding the assumptions, specifications, applicability, and implications for use were made. In Paper II an application, and a comparison of the results of some standard models, tests, and measures to two different research problems were made. The sampling distribution of the measure of disorder was studied both analytically and by a simulation experiment in Paper III. The asymptotic normal distribution was shown by the theory of U-statistics, and the simulation experiments for finite sample sizes and various amounts of disorder showed that the sampling distribution was approximately normal for sample sizes of about 40 to 60 for moderate sizes of D, and for smaller sample sizes for substantial sizes of D. The sampling distribution of the relative rank variance was studied in a simulation experiment in Paper IV. The simulation experiment showed that the sampling distribution was approximately normal for sample sizes of 60-100 for moderate sizes of RV, and for smaller sample sizes for substantial sizes of RV. In Paper V a procedure for inference regarding relative rank variances from two or more samples was proposed. Pair-wise comparison by the jackknife technique for variance estimation, and the use of the normal distribution as an approximation in inference for parameters in independent samples based on the results in Paper IV, were demonstrated. Moreover, applications of the Kruskal-Wallis test for independent samples and Friedman’s test for dependent samples were conducted.
Statistical methods for ordinal data
ABNT, Harvard, Vancouver, APA, etc. styles
4

Fakhouri, Elie Michel. "Variable block-size motion estimation". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ37260.pdf.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
5

Ruengvirayudh, Pornchanok. "A Monte Carlo Study of Parallel Analysis, Minimum Average Partial, Indicator Function, and Modified Average Roots for Determining the Number of Dimensions with Binary Variables in Test Data: Impact of Sample Size and Factor Structure". Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou151516919677091.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
6

Krklec Jerinkić, Nataša. "Line search methods with variable sample size". PhD thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2014. http://dx.doi.org/10.2298/NS20140117KRKLEC.

Full text of the source
Abstract:
The problem under consideration is an unconstrained optimization problem with the objective function in the form of a mathematical expectation. The expectation is with respect to the random variable that represents the uncertainty. Therefore, the objective function is in fact deterministic. However, finding the analytical form of that objective function can be very difficult or even impossible. This is the reason why the sample average approximation is often used. In order to obtain a reasonably good approximation of the objective function, we have to use a relatively large sample size. We assume that the sample is generated at the beginning of the optimization process and therefore we can consider this sample average objective function as a deterministic one. However, applying some deterministic method on that sample average function from the start can be very costly. The number of evaluations of the function under expectation is a common way of measuring the cost of an algorithm. Therefore, methods that vary the sample size throughout the optimization process have been developed. Most of them try to determine the optimal dynamics of increasing the sample size. The main goal of this thesis is to develop a class of methods that can decrease the cost of an algorithm by decreasing the number of function evaluations. The idea is to decrease the sample size whenever it seems reasonable - roughly speaking, we do not want to impose a large precision, i.e. a large sample size, when we are far away from the solution we search for. The detailed description of the new methods is presented in Chapter 4 together with the convergence analysis. It is shown that the approximate solution is of the same quality as the one obtained by dealing with the full sample from the start. Another important characteristic of the methods that are proposed here is the line search technique which is used for obtaining the subsequent iterates. The idea is to find a suitable direction and to search along it until we obtain a sufficient decrease in the function value. The sufficient decrease is determined through the line search rule. In Chapter 4, that rule is supposed to be monotone, i.e. we are imposing a strict decrease of the function value. In order to decrease the cost of the algorithm even more and to enlarge the set of suitable search directions, we use nonmonotone line search rules in Chapter 5. Within that chapter, these rules are modified to fit the variable sample size framework. Moreover, the conditions for the global convergence and the R-linear rate are presented. In Chapter 6, numerical results are presented. The test problems are various - some of them are academic and some of them are real world problems. The academic problems are here to give us more insight into the behavior of the algorithms. On the other hand, data that come from the real world problems are here to test the real applicability of the proposed algorithms. In the first part of that chapter, the focus is on the variable sample size techniques. Different implementations of the proposed algorithm are compared to each other and to other sample schemes as well. The second part is mostly devoted to the comparison of the various line search rules combined with different search directions in the variable sample size framework.
The overall numerical results show that using the variable sample size can improve the performance of the algorithms significantly, especially when the nonmonotone line search rules are used. The first chapter of this thesis provides the background material for the subsequent chapters. In Chapter 2, basics of nonlinear optimization are presented and the focus is on the line search, while Chapter 3 deals with the stochastic framework. These chapters are here to provide a review of the relevant known results, while the rest of the thesis represents the original contribution.
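The core loop this abstract describes, a sample average objective whose sample size can shrink or grow between iterations, combined with a monotone (Armijo-type) line search, can be sketched in a few lines. The code below is only an illustration of that idea, not the thesis' actual algorithm: the test function, the gradient, and the crude sample-size schedule are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
FULL_SAMPLE = rng.normal(size=(1000, 2))     # realizations of the random variable, drawn once up front

def sample_average_loss(x, n):
    """Sample average approximation of E[||x - xi||^2] using the first n realizations."""
    xi = FULL_SAMPLE[:n]
    return np.mean(np.sum((x - xi) ** 2, axis=1))

def gradient(x, n):
    xi = FULL_SAMPLE[:n]
    return 2.0 * np.mean(x - xi, axis=0)

def armijo_step(x, d, n, eta=1e-4, beta=0.5):
    """Backtracking line search imposing a sufficient (monotone) decrease."""
    f0, g0 = sample_average_loss(x, n), gradient(x, n)
    alpha = 1.0
    while sample_average_loss(x + alpha * d, n) > f0 + eta * alpha * g0 @ d:
        alpha *= beta
    return alpha

x, n = np.array([5.0, -3.0]), 50             # start with a small, cheap sample
for it in range(100):
    d = -gradient(x, n)                      # steepest descent direction
    alpha = armijo_step(x, d, n)
    x = x + alpha * d
    # crude, purely illustrative sample-size schedule: demand more precision near a solution
    n = min(len(FULL_SAMPLE), max(50, int(200 / (np.linalg.norm(d) + 1e-3))))
print(x, n)
```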
ABNT, Harvard, Vancouver, APA, etc. styles
7

Hintze, Christopher Jerry. "Modeling correlation in binary count data with application to fragile site identification". Texas A&M University, 2005. http://hdl.handle.net/1969.1/4278.

Full text of the source
Abstract:
Available fragile site identification software packages (FSM and FSM3) assume that all chromosomal breaks occur independently. However, under a Mendelian model of inheritance, homozygosity at fragile loci implies pairwise correlation between homologous sites. We construct correlation models for chromosomal breakage data in situations where either partitioned break count totals (per-site single-break and double-break totals) are known or only overall break count totals are known. We derive a likelihood ratio test and Neyman’s C(α) test for correlation between homologs when partitioned break count totals are known and outline a likelihood ratio test for correlation using only break count totals. Our simulation studies indicate that the C(α) test using partitioned break count totals outperforms the other two tests for correlation in terms of both power and level. These studies further suggest that the power for detecting correlation is low when only break count totals are reported. Results of the C(α) test for correlation applied to chromosomal breakage data from 14 human subjects indicate that detection of correlation between homologous fragile sites is problematic due to sparseness of breakage data. Simulation studies of the FSM and FSM3 algorithms using parameter values typical for fragile site data demonstrate that neither algorithm is significantly affected by fragile site correlation. Comparison of simulated fragile site misclassification rates in the presence of zero-breakage data supports previous studies (Olmsted 1999) that suggested FSM has lower false-negative rates and FSM3 has lower false-positive rates.
ABNT, Harvard, Vancouver, APA, etc. styles
8

Sodagari, Shabnam. "Variable block-size disparity estimation in stereo imagery". Thesis, University of Ottawa (Canada), 2003. http://hdl.handle.net/10393/26399.

Full text of the source
Abstract:
This thesis addresses the problem of developing and implementing in software a variable block size (quadtree splitting) disparity estimation algorithm that is optimized for use in compression of stereo image pairs and studying its performance over a range of rate/distortion values for a variety of images. First the constrained optimization problem is converted to an unconstrained one using the Lagrange multiplier approach. Then by solving the optimization problem using dynamic programming, the optimal variables representing the optimal quadtree structure and the quantizer for each node are determined. The experimental results show the improvements of this method over simple intraframe JPEG coding and over fixed block-size disparity estimation.
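The splitting decision described here, choosing between coding a block whole or splitting it into four quadrants by comparing Lagrangian costs J = D + λR, can be written as a short recursion. The sketch below is a simplified stand-in, not the thesis' coder: the flat-disparity distortion model, the fixed per-block rate, and the split-flag rate are all assumptions made for the example.

```python
import numpy as np

LAMBDA = 10.0          # Lagrange multiplier trading distortion against rate
RATE_PER_BLOCK = 16.0  # assumed bits needed to signal one leaf block

def leaf_cost(block):
    """Distortion of coding the block with a single (mean) value, plus its rate."""
    distortion = np.sum((block - block.mean()) ** 2)
    return distortion + LAMBDA * RATE_PER_BLOCK

def quadtree(block, min_size=2):
    """Return (cost, tree); tree is either a leaf value or a list of four sub-trees."""
    n = block.shape[0]
    no_split = leaf_cost(block)
    if n <= min_size:
        return no_split, block.mean()
    h = n // 2
    quads = [block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:]]
    children = [quadtree(q, min_size) for q in quads]
    split_cost = sum(c for c, _ in children) + LAMBDA   # plus one split flag's rate (assumed)
    if split_cost < no_split:
        return split_cost, [t for _, t in children]
    return no_split, block.mean()

# piecewise-constant toy "disparity map": splitting once is clearly worthwhile here
disparity_map = np.kron((10.0 * np.arange(4)).reshape(2, 2), np.ones((4, 4)))
cost, tree = quadtree(disparity_map)
print(cost)
```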
ABNT, Harvard, Vancouver, APA, etc. styles
9

Dziminski, Martin A. "The evolution of variable offspring provisioning". University of Western Australia, 2005. http://theses.library.uwa.edu.au/adt-WU2005.0134.

Full text of the source
Abstract:
Most theoretical models predict an optimal offspring size that maximises parental fitness. Variation in the quality of the offspring environment can result in multiple offspring size optima and therefore variation of offspring provisioning can evolve. Variation in offspring provisioning is common and found across a variety of taxa. It can be defined as between populations, explained by optimality models, or between and within individuals, neither so easily explained by optimality models. My research focused on the evolution of variable offspring provisioning by testing theoretical models relating to variation in offspring provisioning between and within individuals. Using comparative methods, I found a positive relationship between intraclutch variation in offspring provisioning and variation in the quality of the offspring environment in a suite of pond breeding frogs. This positive relationship provided evidence that patterns of variable offspring provisioning are related to the offspring environment. This study also identified a species (Crinia georgiana) with high variation in offspring provisioning on which to focus further investigations. High variation in offspring provisioning occurred between and within individuals of this species independent of female phenotype, and a trade-off in offspring size and number existed. In laboratory studies, increased yolk per offspring led to increased fitness per offspring. Parental fitness calculations revealed that in high quality conditions the production of smaller, more numerous offspring resulted in higher parental fitness, but in lower quality conditions the production of large offspring resulted in the highest parental fitness. This was confirmed in field experiments under natural conditions using molecular markers to trace offspring to clutches of known provisioning, allowing me to measure exact parental fitness. The strategy of high variation in offspring size within clutches can be of benefit when the future of the offspring environment is not known to the parents: as a form of bet-hedging. Further study of the offspring environment revealed that conditions such as density dependent fitness loss, spatial variation in habitat quality, and non-random offspring dispersal can combine to create the conditions predicted by theoretical models to maintain a strategy of variable offspring provisioning in the population. My research provides a comprehensive empirical test of the theory of variable offspring provisioning.
ABNT, Harvard, Vancouver, APA, etc. styles
10

Acuna, Stamp Annabelen. "Design Study for Variable Data Printing". University of Cincinnati / OhioLINK, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=ucin962378632.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
11

Shomper, Keith A. "Visualizing program variable data for debugging /". The Ohio State University, 1993. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487848531364488.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
12

Wien, Mathias [Verfasser]. "Variable Block-Size Transforms for Hybrid Video Coding / Mathias Wien". Aachen : Shaker, 2004. http://d-nb.info/1172614245/34.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
13

Dominicus, Annica. "Latent variable models for longitudinal twin data". Doctoral thesis, Stockholm : Mathematical statistics, Dept. of mathematics, Stockholm university, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-848.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
14

Albanese, Maria Teresinha. "Latent variable models for binary response data". Thesis, London School of Economics and Political Science (University of London), 1990. http://etheses.lse.ac.uk/1220/.

Full text of the source
Abstract:
Most of the results in this thesis are obtained for the logit/probit model for binary response data given by Bartholomew (1980), which is sometimes called the two-parameter logistic model. In most cases the results also hold for other common binary response models. By profiling and an approximation, we investigate the behaviour of the likelihood function, to see if it is suitable for ML estimation. Particular attention is given to the shape of the likelihood around the maximum point in order to see whether the information matrix will give a good guide to the variability of the estimates. The adequacy of the asymptotic variance-covariance matrix is investigated through jackknife and bootstrap techniques. We obtain the marginal ML estimators for the Rasch model and compare them with those obtained from conditional ML estimation. We also test the fit of the Rasch model against a logit/probit model with a likelihood ratio test, and investigate the behaviour of the likelihood function for the Rasch model and its bootstrap estimates together with approximate methods. For both fixed and decreasing sample size, we investigate the stability of the discrimination parameter estimates a_{i,1} when the number of items is reduced. We study the conditions which give rise to large discrimination parameter estimates. This leads to a method for the generation of a (p+1)th item with any fixed a_{p+1,1} and a_{p+1,0}. In practice it is important to measure the latent variable and this is usually done by using the posterior mean or the component scores. We give some theoretical and applied results for the relation between the linearity of the plot of the posterior mean latent variable values, the component scores and the normality of those posterior distributions.
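For reference, the logit version of the model referred to above (the two-parameter logistic) is usually written as follows; the notation a_{i0}, a_{i1} is taken to match the intercept and discrimination parameters mentioned in the abstract, with z the latent variable.

```latex
P\{x_i = 1 \mid z\} \;=\; \pi_i(z) \;=\;
\frac{\exp(a_{i0} + a_{i1} z)}{1 + \exp(a_{i0} + a_{i1} z)},
\qquad z \sim N(0,1), \quad i = 1, \dots, p .
```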
ABNT, Harvard, Vancouver, APA, etc. styles
15

Yacoub, Francois MacGregor John Frederick. "Learning from data using latent variable methods". *McMaster only, 2006.

Find the full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
16

McClelland, Robyn L. "Regression based variable clustering for data reduction /". Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/9611.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
17

Huang, Shiping. "Exploratory visualization of data with variable quality". Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-01115-225546/.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
18

PERRA, SILVIA. "Objective bayesian variable selection for censored data". Doctoral thesis, Università degli Studi di Cagliari, 2013. http://hdl.handle.net/11584/266108.

Full text of the source
Abstract:
In this thesis we study the problem of selecting a set of regressors when the response variable follows a parametric model (such as Weibull or lognormal) and observations are right censored. Under a Bayesian approach, the most widely used tools are the Bayes Factors (BFs) which are, however, undefined when using improper priors. Some commonly used tools in literature, which solve the problem of indeterminacy in model selection, are the Intrinsic Bayes factor (IBF) and the Fractional Bayes factor (FBF). The two proposals are not actual Bayes factors but it can be shown that they asymptotically tend to actual BFs calculated over particular priors called intrinsic and fractional priors, respectively. Each of them depends on the size of a minimal training sample (MTS) and, in particular, the IBF also depends on the MTSs used. When working with censored data, it is not immediate to define a suitable MTS because the sample space of response variables must be fully explored when drawing MTSs, but only uncensored data are actually relevant to train the improper prior into a proper posterior. In fact, an unweighted MTS consisting only of uncensored data may produce a serious bias in model selection. In order to overcome this problem, a sequential MTS (SMTS) is used, leading to an increase in the number of possible MTSs as each one has random size. This prevents the use of the IBF for exploring large model spaces. In order to decrease the computational cost, while maintaining a behavior comparable to that of the IBF, we provide a suitable definition of the FBF that gives results similar to the ones of the IBF calculated over the SMTSs. We first define the conditional FBF on a fraction proportional to the MTS size and, then, we show that the marginal FBF (mFBF), obtained by averaging the conditional FBFs with respect to the probability distribution of the fraction, is consistent and provides also good results. Next, we recall the definition of intrinsic prior for the case of the IBF and the definition of the fractional prior for the FBF and we calculate them in the case of the exponential model for right censored data. In general, when the censoring mechanism is unknown, it is not possible to obtain these priors. Also another approach to the choice of the MTS, which consists in weighting the MTS by a suitable set of weights, is presented. In fact, we define the Kaplan-Meier minimal training sample (KMMTS) which depends on the Kaplan-Meier estimator of the survival function and which contains only suitable weighted uncensored observations. This new proposal could be useful when the censoring percentage is not very high, and it allows faster computations when the predictive distributions, calculated only over uncensored observations, can be obtained in closed-form. The new methodologies are validated by means of simulation studies and applications to real data.
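As background for the fractional Bayes factor discussed above: O'Hagan's FBF raises the likelihood to a fraction b (playing the role of a minimal-training-sample proportion) so that improper priors cancel. A standard statement, with L_j the likelihood and π_j the (possibly improper) prior under model M_j, is:

```latex
B^{F}_{10}(b) \;=\; \frac{q_1(b,\mathbf{x})}{q_0(b,\mathbf{x})},
\qquad
q_j(b,\mathbf{x}) \;=\;
\frac{\int \pi_j(\theta_j)\, L_j(\theta_j;\mathbf{x})\, d\theta_j}
     {\int \pi_j(\theta_j)\, L_j(\theta_j;\mathbf{x})^{\,b}\, d\theta_j},
\qquad j = 0, 1 .
```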
ABNT, Harvard, Vancouver, APA, etc. styles
19

Moreno, Carlos 1965. "Variable frame size for vector quantization and application to speech coding". Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99001.

Full text of the source
Abstract:
Vector Quantization (VQ) is a lossy data compression technique that is often applied in the field of speech communications. In VQ, a group of values or vector is replaced by the closest vector from a list of possible choices, the codebook. Compression is achieved by providing the index corresponding to the closest vector in the codebook, which in general can be represented with less data than the original vector.
In the case of VQ applied to speech signals, the input signal is divided into frames of a given length. Depending on the particular technique being used, the system either extracts a vector representation of the whole frame (usually some form of spectral representation), or applies some processing to the signal and uses the processed frame itself as the vector to be quantized. The two techniques are often combined, and the system uses VQ for the spectral representation of the frame and also for the processed frame.
A typical assumption in this scheme is the fact that the frame size is fixed. This simplifies the scheme and thus reduces the computing-power requirements for a practical implementation.
In this study, we present a modification to this technique that allows for variable size frames, providing an additional degree of freedom for the optimization of the Data Compression process.
The quantization error is minimized by choosing the closest point in the codebook for the given frame. We now minimize this by choosing the frame size that yields the lowest quantization error. Notice that the quantization error is a function of the given frame and the codebook; by considering different frame sizes, we get different actual frames that yield different quantization errors, allowing us to choose the optimal size, effectively providing a second level of optimization.
This idea has two caveats; we require additional data to represent the frame, since we have to indicate the size that was used. Also, the complexity of the system increases, since we have to try different frame sizes, requiring more computing-power for a practical implementation of the scheme.
The results of this study show that this technique effectively improves the quality of the compressed signal at a given compression ratio, even if the improvement is not dramatic. Whether or not the increase in complexity is worth the quality improvement for a given application depends entirely on the design constraints for that particular application.
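A minimal sketch of the selection step described above: for each candidate frame size, the next frame is quantized against a codebook (nearest codeword) and the size giving the lowest per-sample error is kept. The random codebooks and the candidate sizes are invented for the example; as the abstract notes, a real coder would also have to spend bits signalling the chosen size.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(frame, codebook):
    """Return (index, error) of the nearest codeword for a frame of matching length."""
    errors = np.sum((codebook - frame) ** 2, axis=1)
    idx = int(np.argmin(errors))
    return idx, float(errors[idx])

signal = rng.normal(size=4000)                      # stand-in for a speech signal
candidate_sizes = [40, 80, 160]                     # illustrative frame sizes, in samples
codebooks = {n: rng.normal(size=(256, n)) for n in candidate_sizes}

pos, stream = 0, []
while pos + max(candidate_sizes) <= len(signal):
    best = None
    for n in candidate_sizes:
        idx, err = quantize(signal[pos:pos + n], codebooks[n])
        per_sample = err / n                        # compare sizes on a per-sample basis
        if best is None or per_sample < best[0]:
            best = (per_sample, n, idx)
    _, n, idx = best
    stream.append((n, idx))                         # the chosen size must be transmitted too
    pos += n
print(len(stream), "frames encoded")
```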
ABNT, Harvard, Vancouver, APA, etc. styles
20

Leek, Jeffrey Tullis. "Surrogate variable analysis /". Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/9586.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
21

Broc, Camilo. "Variable selection for data aggregated from different sources with group of variable structure". Thesis, Pau, 2019. http://www.theses.fr/2019PAUU3048.

Full text of the source
Abstract:
During the last decades, the amount of available genetic data on populations has grown drastically. On one side, a refinement of chemical technologies has made possible the extraction of the human genome of individuals at an accessible cost. On the other side, consortia of institutions and laboratories around the world have permitted the collection of data on a variety of individuals and populations. This amount of data raised hope on our ability to understand the deepest mechanisms involved in the functioning of our cells. Notably, genetic epidemiology is a field that studies the relation between genetic features and the onset of a disease. Specific statistical methods have been necessary for those analyses, especially due to the dimensions of available data: in genetics, information is contained in a high number of variables compared to the number of observations. In this dissertation, two contributions are presented. The first project, called PIGE (Pathway-Interaction Gene Environment), deals with gene-environment interaction assessments. The second one aims at developing variable selection methods for data which have group structures in both the variables and the observations. The document is divided into six chapters. The first chapter sets the background of this work, where both biological and mathematical notations and concepts are presented, and gives a history of the motivation behind genetics and genetic epidemiology. The second chapter presents an overview of the statistical methods currently in use for genetic epidemiology. The third chapter deals with the identification of gene-environment interactions; it includes a presentation of existing approaches for this problem and a contribution of the thesis. The fourth chapter addresses the problem of meta-analysis: a definition of the problem and an overview of the existing approaches are presented, and then a new approach is introduced. The fifth chapter explains pleiotropy studies and how the method presented in the previous chapter is suited to this kind of analysis. The last chapter compiles conclusions and research lines for the future.
ABNT, Harvard, Vancouver, APA, etc. styles
22

Ahn, Jeongyoun Marron James Stephen. "High dimension, low sample size data analysis". Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2006. http://dc.lib.unc.edu/u?/etd,375.

Full text of the source
Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2006.
Title from electronic title page (viewed Oct. 10, 2007). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Statistics and Operations Research." Discipline: Statistics and Operations Research; Department/School: Statistics and Operations Research.
ABNT, Harvard, Vancouver, APA, etc. styles
23

Faletra, Melissa Kathleen. "Segregation of Particles of Variable Size and Density in Falling Suspension Droplets". ScholarWorks @ UVM, 2014. http://scholarworks.uvm.edu/graddis/265.

Full text of the source
Abstract:
The problem of a suspension droplet falling under gravity was examined for cases where the droplet contains particles with different densities and different sizes. Cases examined include droplets composed of uniform-size particles with two different densities, of uniform-density particles of two different sizes, and of a distribution of particles of different densities. The study was conducted using both simulations based on Oseenlet particle interactions and laboratory experiments. It is observed that when the particles in the suspension droplet have different sizes and densities, an interesting segregation phenomenon occurs in which lighter/smaller particles are transported downward with the droplet and preferentially leave the droplet by entering into the droplet tail, whereas heavier/larger particles remain for longer periods of time in the droplet. When computations are performed with two particle densities or two particle sizes, a point is eventually reached where all of the lighter/smaller particles have been ejected from the droplet, and the droplet continues to fall with only the heavier/larger particles. A simple model explaining three stages of this segregation process is presented.
ABNT, Harvard, Vancouver, APA, etc. styles
24

Dunlap, Mickey Paul. "Using the bootstrap to analyze variable stars data". Diss., Texas A&M University, 2004. http://hdl.handle.net/1969.1/1398.

Full text of the source
Abstract:
Often in statistics it is of interest to investigate whether or not a trend is significant. Methods for testing such a trend depend on the assumptions about the error terms, such as whether the distribution is known and also whether the error terms are independent. Likelihood ratio tests may be used if the distribution is known, but in some instances one may not want to make such assumptions. In a time series, these errors will not always be independent. In this case, the error terms are often modelled by an autoregressive or moving average process. There are resampling techniques for testing the hypothesis of interest when the error terms are dependent, such as model-based bootstrapping and the wild bootstrap, but the error terms need to be whitened. In this dissertation, a bootstrap procedure is used to test the hypothesis of no trend for variable stars when the error structure assumes a particular form. In some cases, the bootstrap to be implemented is preferred over large sample tests in terms of the level of the test. The bootstrap procedure is able to correctly identify the underlying distribution, which may not be χ2.
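One concrete version of the procedure sketched above is a model-based (residual) bootstrap for a linear trend with AR(1) errors: fit the trend, whiten the residuals with an estimated AR(1) coefficient, resample the innovations, rebuild bootstrap series under the no-trend null, and compare the observed slope with the bootstrap distribution. The AR(1) error model, the slope statistic, and the toy series below are assumptions for the illustration, not the dissertation's exact setup.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_slope(t, y):
    """OLS slope of y on time t (with intercept)."""
    X = np.column_stack([np.ones_like(t), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def bootstrap_trend_pvalue(t, y, n_boot=999):
    slope_obs = fit_slope(t, y)
    resid = y - (y.mean() + slope_obs * (t - t.mean()))
    phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]       # crude AR(1) estimate
    innov = resid[1:] - phi * resid[:-1]                 # whitened innovations
    innov = innov - innov.mean()
    count = 0
    for _ in range(n_boot):
        e = rng.choice(innov, size=len(y), replace=True)
        eps = np.empty(len(y))
        eps[0] = e[0]
        for i in range(1, len(y)):                       # re-colour the resampled errors
            eps[i] = phi * eps[i - 1] + e[i]
        y_star = y.mean() + eps                          # bootstrap series under the no-trend null
        if abs(fit_slope(t, y_star)) >= abs(slope_obs):
            count += 1
    return (count + 1) / (n_boot + 1)

t = np.arange(120, dtype=float)
y = 0.01 * t + np.sin(t / 5) + rng.normal(size=t.size)   # toy "variable star" magnitudes
print(bootstrap_trend_pvalue(t, y))
```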
ABNT, Harvard, Vancouver, APA, etc. styles
25

Guo, Lei. "Bayesian Biclustering on Discrete Data: Variable Selection Methods". Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:11201.

Full text of the source
Abstract:
Biclustering is a technique for clustering rows and columns of a data matrix simultaneously. Over the past few years, we have seen its applications in biology-related fields, as well as in many data mining projects. As opposed to classical clustering methods, biclustering groups objects that are similar only on a subset of variables. Many biclustering algorithms on continuous data have emerged over the last decade. In this dissertation, we will focus on two Bayesian biclustering algorithms we developed for discrete data, more specifically categorical data and ordinal data.
Statistics
ABNT, Harvard, Vancouver, APA, etc. styles
26

Harrison, Wendy Jane. "Latent variable modelling for complex observational health data". Thesis, University of Leeds, 2016. http://etheses.whiterose.ac.uk/16384/.

Full text of the source
Abstract:
Observational health data are a rich resource that present modelling challenges due to data complexity. If inappropriate analytical methods are used to make comparisons amongst either patients or healthcare providers, inaccurate results may generate misleading interpretations that may affect patient care. Traditional approaches cannot fully accommodate the complexity of the data; untenable assumptions may be made, bias may be introduced, or modelling techniques may be crude and lack generality. Latent variable methodologies are proposed to address the data challenges, while answering a range of research questions within a single, overarching framework. Precise model configurations and parameterisations are constructed for each question, and features are utilised that may minimise bias and ensure that covariate relationships are appropriately modelled for correct inference. Fundamental to the approach is the ability to exploit the heterogeneity of the data by partitioning modelling approaches across a hierarchy, thus separating modelling for causal inference and for prediction. In research question (1), data are modelled to determine the association between a health exposure and outcome at the patient level. The latent variable approach provides a better interpretation of the data, while appropriately modelling complex covariate relationships at the patient level. In research questions (2) and (3), data are modelled in order to permit performance comparison at the provider level. Differences in patient characteristics are constrained to be balanced across provider-level latent classes, thus accommodating the ‘casemix’ of patients and ensuring that any differences in patient outcome are instead due to organisational factors that may influence provider performance. Latent variable techniques are thus successfully applied, and can be extended to incorporate patient pathways through the healthcare system, although observational health datasets may not be the most appropriate context within which to develop these methods.
ABNT, Harvard, Vancouver, APA, etc. styles
27

Williams, Andrea E. Gilbert Juan E. "Usability size N". Auburn, Ala., 2007. http://hdl.handle.net/10415/1386.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
28

Walder, Alistair Neil. "Statistics of shape and size for landmark data". Thesis, University of Leeds, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.303425.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
29

Sajja, Abhilash. "Forensic Reconstruction of Fragmented Variable Bitrate MP3 files". ScholarWorks@UNO, 2010. http://scholarworks.uno.edu/td/1258.

Full text of the source
Abstract:
File carving is a technique used to recover data from a digital device without the help of file system metadata. The current file carvers use techniques such as using a list of header and footer values and key word searching to retrieve the information specific to a file type. These techniques tend to fail when the files to be recovered are fragmented. Recovering the fragmented files is one of the primary challenges faced by file carving. In this research we focus on Variable Bit Rate (VBR) MP3 files. MP3 is one of the most widely used file formats for storing audio data. We develop a technique which uses the MP3 file structure information to improve the performance of file carvers in reconstructing fragmented MP3 data. This technique uses a large number of MP3 files and performs statistical analysis on the bitrates of these files.
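The structural information the abstract relies on is the MP3 frame header: each frame starts with a sync pattern and encodes its own bitrate and sampling rate, from which the frame length follows, so a carver can walk from frame to frame and notice where a fragment breaks off. The sketch below handles MPEG-1 Layer III headers only and is a simplification of full carving logic, not the thesis' tool.

```python
# Bitrate (kbps) and sample-rate tables for MPEG-1 Layer III.
BITRATES = [None, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320, None]
SAMPLE_RATES = [44100, 48000, 32000, None]

def parse_frame(data: bytes, pos: int):
    """Return the frame length at pos, or None if no valid MPEG-1 Layer III header is there."""
    if pos + 4 > len(data):
        return None
    b0, b1, b2, _ = data[pos:pos + 4]
    if b0 != 0xFF or (b1 & 0xFE) != 0xFA:        # sync bits + MPEG-1 + Layer III
        return None
    bitrate = BITRATES[b2 >> 4]
    sample_rate = SAMPLE_RATES[(b2 >> 2) & 0x03]
    padding = (b2 >> 1) & 0x01
    if bitrate is None or sample_rate is None:
        return None
    return 144 * bitrate * 1000 // sample_rate + padding

def walk_frames(data: bytes, start: int = 0):
    """Follow consecutive frames; the position where walking fails hints at a fragmentation point."""
    pos, bitrates = start, []
    while True:
        length = parse_frame(data, pos)
        if length is None:
            return pos, bitrates
        bitrates.append(BITRATES[data[pos + 2] >> 4])
        pos += length

header = bytes([0xFF, 0xFB, 0x90, 0x00])              # 128 kbps, 44.1 kHz, no padding
fake = (header + bytes(417 - 4)) * 3 + b"not-an-mp3-frame"
print(walk_frames(fake))                              # (1251, [128, 128, 128])
```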
ABNT, Harvard, Vancouver, APA, etc. styles
30

Cheng, Yafeng. "Functional regression analysis and variable selection for motion data". Thesis, University of Newcastle upon Tyne, 2016. http://hdl.handle.net/10443/3150.

Full text of the source
Abstract:
Modern technology offers us highly evolved data collection devices. They allow us to observe data densely over continua such as time, distance, space and so on. The observations are normally assumed to follow certain continuous and smooth underlying functions of the continua. Thus the analysis must consider two important properties of functional data: infinite dimension and smoothness. Traditional multivariate data analysis normally works with low-dimensional and independent data. Therefore, we need to develop new methodology to conduct functional data analysis. In this thesis, we first study the linear relationship between a scalar variable and a group of functional variables using three different discrete methods. We combine this linear relationship with the idea from least angle regression to propose a new variable selection method, named functional LARS. It is designed for functional linear regression with a scalar response and a group of mixed functional and scalar variables. We also propose two new stopping rules for the algorithm, since the conventional stopping rules may fail for functional data. The algorithm can be used when there are more variables than samples. The performance of the algorithm and the stopping rules is compared with existing algorithms by comprehensive simulation studies. The proposed algorithm is applied to analyse motion data including a scalar response, more than 200 scalar covariates and 500 functional covariates. Models with or without functional variables are compared. We have achieved very accurate results for this complex data, particularly with the models including functional covariates. Research in functional variable selection is limited due to its complexity and onerous computational burdens. We have demonstrated that the proposed functional LARS is a very efficient method and can cope with functional data of very large dimension. The methodology and the idea have the potential to be used to address other challenging problems in functional data analysis.
ABNT, Harvard, Vancouver, APA, etc. styles
31

Lan, Lan. "Variable Selection in Linear Mixed Model for Longitudinal Data". NCSU, 2006. http://www.lib.ncsu.edu/theses/available/etd-05172006-211924/.

Full text of the source
Abstract:
Fan and Li (JASA, 2001) proposed a family of variable selection procedures for certain parametric models via a nonconcave penalized likelihood approach, where significant variable selection and parameter estimation were done simultaneously, and the procedures were shown to have the oracle property. In this presentation, we extend the nonconcave penalized likelihood approach to linear mixed models for longitudinal data. Two new approaches are proposed to select significant covariates and estimate fixed effect parameters and variance components. In particular, we show the new approaches also possess the oracle property when the tuning parameter is chosen appropriately. We assess the performance of the proposed approaches via simulation and apply the procedures to data from the Multicenter AIDS Cohort Study.
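The nonconcave penalty of Fan and Li (2001) that this extension builds on is usually the SCAD penalty, most conveniently specified through its derivative; a standard statement (with a > 2, commonly a = 3.7) is:

```latex
p'_{\lambda}(\theta) \;=\; \lambda \left\{ I(\theta \le \lambda)
  \;+\; \frac{(a\lambda - \theta)_{+}}{(a-1)\lambda}\, I(\theta > \lambda) \right\},
\qquad \theta > 0, \quad p_{\lambda}(0) = 0 .
```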
ABNT, Harvard, Vancouver, APA, etc. styles
32

Mainguy, Yves. "A robust variable order facet model for image data". Thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-10222009-124949/.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
33

Lin, Cheng-Han, and 林承翰. "A variable block size pattern run-length coding for test data compression". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/5f3443.

Full text of the source
Abstract:
Master's thesis
Yuan Ze University
Department of Computer Science and Engineering
Academic year 105
Test data compression is one of the popular topics in the VLSI testing field and is key to coping with the huge volume of test data. A built-in self-test (BIST) architecture, in which the circuit tests itself, can generate the data inside the circuit under test without any external source, reducing the test data volume. In this thesis, we propose a code-based test data compression approach that achieves a better test data compression ratio by using variable block size pattern run-length coding. Experimental results on the ISCAS'89 benchmark circuits show that the proposed approach can achieve a test data compression ratio of up to 70.42%.
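As a toy illustration of pattern run-length coding, the general family this approach belongs to (not the thesis' exact variable-block-size codeword format), the encoder below scans a test-data bit string, takes a fixed-size pattern, counts how many times it repeats consecutively, and emits (pattern, run length) pairs.

```python
def pattern_run_length_encode(bits: str, pattern_size: int = 4):
    """Encode a bit string as (pattern, run) pairs, where run counts consecutive repetitions."""
    codewords, i = [], 0
    while i < len(bits):
        pattern = bits[i:i + pattern_size]
        run = 1
        while bits[i + run * pattern_size: i + (run + 1) * pattern_size] == pattern:
            run += 1
        codewords.append((pattern, run))
        i += run * pattern_size
    return codewords

def decode(codewords):
    return "".join(pattern * run for pattern, run in codewords)

test_cube = "0000" * 6 + "1010" + "1111" * 3
encoded = pattern_run_length_encode(test_cube)
assert decode(encoded) == test_cube
print(encoded)   # [('0000', 6), ('1010', 1), ('1111', 3)]
```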
ABNT, Harvard, Vancouver, APA, etc. styles
34

Yang, Hsien-Yi, and 楊顯奕. "Variable Pattern Size Pattern Run-length Coding Based on Fibonacci Number for Test data Compression". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/nu7cfb.

Full text of the source
Abstract:
Master's thesis
Yuan Ze University
Department of Computer Science and Engineering
Academic year 107
While technology keeps advancing, the feature size of integrated circuits keeps shrinking and circuits become more and more dense. Nowadays ICs are developed in 3D form, which is much more complex than a 2D plane, and the growth in test data volume cannot be ignored. Besides the computing speed of software and hardware, data size is also a very important factor in VLSI (very large scale integration) testing. Data compression makes the data much smaller and allows us to process much more data in the same period of time. The topic of this thesis is to compare two different kinds of data compression and find out which gives the better compression ratio. The first compresses data into a variable codeword using factors such as a variable block-size field, a variable pattern length, a data-inverse flag, and a repeat record. The second fixes the block-size field at 3 bits and represents the pattern length by the Fibonacci sequence (1, 2, 3, 5, 8, 13, 21, 34) instead of 1 to 8. Since fewer block-size bits can then represent larger pattern lengths, this trade-off is worth examining. Based on tests with six circuits from the ISCAS'89 benchmark, we obtain the compression ratios of the two methods and find that the Fibonacci-based compression has a chance to achieve a better compression ratio.
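The second scheme compared above maps a fixed 3-bit field to run lengths taken from the Fibonacci sequence instead of 1..8. The sketch below shows just that mapping; how runs longer than 34 are handled is not specified in the abstract, so the greedy splitting used here is an assumption made for the example.

```python
FIB_LENGTHS = [1, 2, 3, 5, 8, 13, 21, 34]       # run lengths addressable by a 3-bit field

def encode_run(run: int):
    """Greedily split an arbitrary run into chunks the 3-bit field can express."""
    chunks = []
    while run > 0:
        length = max(l for l in FIB_LENGTHS if l <= run)
        chunks.append(FIB_LENGTHS.index(length))   # the 3-bit index actually stored
        run -= length
    return chunks

print(encode_run(40))   # [7, 3, 0] -> run lengths 34 + 5 + 1
```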
ABNT, Harvard, Vancouver, APA, etc. styles
35

WANG, YU-TZU, and 王愉慈. "A Data Hiding Method Based on Partition Variable Block Size with Exclusive-OR Operation on Binary Image". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/28999648088974917843.

Full text of the source
Abstract:
Master's thesis
Hsuan Chuang University
Master's Program, Department of Information Management
Academic year 104
In this paper, we propose a high-capacity data hiding method for binary images. Since a binary image has only two colors, black or white, it is hard to hide data imperceptibly; capacity and imperceptibility are always a trade-off. Before embedding, we shuffle the secret data with a pseudo-random number generator for additional security. We divide the M by N host image into as many non-overlapping (2k+1)×(2k+1) sub-blocks as possible, where k=1,2,3,…, or min(M,N). We then partition each sub-block into four overlapping (k+1)×(k+1) sub-blocks. Sub-blocks that are entirely black or entirely white are skipped. For each of the four (k+1)×(k+1) sub-blocks we check the XOR between its non-overlapping part and the center pixel of the (2k+1)×(2k+1) sub-block, embedding k^2 bits per (k+1)×(k+1) sub-block, i.e. 4×k^2 bits in total. The entire host image can thus embed 4×k^2×M/(2k+1)×N/(2k+1) bits. Extraction simply tests the XOR between the center pixel and the non-overlapping part of each sub-block. All embedded bits are collected and shuffled back to their original order. "Adaptive" means that the choice of sub-block partition affects the capacity and imperceptibility we can select. The experimental results show that the method provides a large embedding capacity, keeps the embedding imperceptible, and recovers the host image losslessly.
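A drastically simplified sketch of the XOR idea underlying this scheme (and the two related ones below): one secret bit per block, carried by the XOR of the block's centre pixel and one neighbour, flipping that neighbour when needed. The (2k+1)×(2k+1) partition, the four overlapping sub-blocks, the k^2-bit capacity per sub-block and the pseudo-random shuffling are not reproduced; block size, function names and the skip rule here are assumptions for the illustration.

```python
import numpy as np

def embed(host: np.ndarray, bits, block: int = 3):
    """Hide one bit per block: force centre XOR top-left neighbour == secret bit."""
    stego, it = host.copy(), iter(bits)
    c = block // 2
    for r in range(0, host.shape[0] - block + 1, block):
        for s in range(0, host.shape[1] - block + 1, block):
            sub = stego[r:r + block, s:s + block]
            rest = sub.flatten()[1:]             # everything except the carrier pixel [0, 0]
            if rest.min() == rest.max():         # skip (near-)uniform blocks, in the paper's spirit
                continue
            try:
                bit = next(it)
            except StopIteration:
                return stego
            if (sub[c, c] ^ sub[0, 0]) != bit:
                sub[0, 0] ^= 1                   # flip one pixel to encode the bit
    return stego

def extract(stego: np.ndarray, block: int = 3):
    c, bits = block // 2, []
    for r in range(0, stego.shape[0] - block + 1, block):
        for s in range(0, stego.shape[1] - block + 1, block):
            sub = stego[r:r + block, s:s + block]
            rest = sub.flatten()[1:]
            if rest.min() == rest.max():
                continue
            bits.append(int(sub[c, c] ^ sub[0, 0]))
    return bits

rng = np.random.default_rng(3)
host = (rng.random((12, 12)) > 0.5).astype(np.uint8)
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(host, secret)
print(extract(stego)[:len(secret)] == secret)
```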
ABNT, Harvard, Vancouver, APA, etc. styles
36

CHEN, JI-MING, and 陳紀銘. "An Optimal Data Hiding Method Based on Partition Variable Block Size with Exclusive-OR Operation on Binary Image". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/2ta8pc.

Full text of the source
Abstract:
Master's thesis
Hsuan Chuang University
Master's Program, Department of Information Management
Academic year 105
In this thesis, we propose a high-capacity data hiding method for binary images. We divide the host image into as many non-overlapping blocks as possible and partition each block into four overlapping sub-blocks. Blocks that are entirely black or entirely white are skipped. For each of the four sub-blocks we check the XOR between its non-overlapping part and the center pixel of the block. The entire host image can embed 4×m×n×M/(2m+1)×N/(2n+1) bits. Extraction simply tests the XOR between the center pixel and the non-overlapping part of each sub-block. All embedded bits are collected and shuffled back to their original order. "Optimal" means that the choice of sub-block partition affects the capacity and imperceptibility, so the best trade-off can be reached. The experimental results show that the method provides a large embedding capacity, keeps the embedding imperceptible, and recovers the host image losslessly.
ABNT, Harvard, Vancouver, APA, etc. styles
37

SHIH, CHENG-FU, and 施承甫. "A Reversible Data Hiding Method Based on Partition Variable Block Size and Exclusive-OR Operation with Two Host Images for Binary Image". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/yhm8ny.

Full text of the source
Abstract:
Master's thesis
Hsuan Chuang University
Master's Program, Department of Information Management
Academic year 106
In this paper, we propose a high-capacity data hiding method for binary images. Since a binary image has only two colours, black or white, it is hard to hide data imperceptibly; capacity and imperceptibility are always a trade-off. Before embedding, we shuffle the secret data with a pseudo-random number generator for additional security. We divide the M by N host images C and R into as many non-overlapping (2m+1)×(2n+1) sub-blocks as possible, where m=1,2,3,…, n=1,2,3,…, or min(M,N). We then partition each sub-block into four overlapping (m+1)×(n+1) sub-blocks. Sub-blocks that are entirely black or entirely white are skipped. For each of the four (m+1)×(n+1) sub-blocks we check the XOR between its non-overlapping part and the centre pixel of the (2m+1)×(2n+1) sub-block, embedding m×n bits per (m+1)×(n+1) sub-block, i.e. 4×m×n bits in total. When a candidate pixel of C is changed by embedding a secret bit, the corresponding pixel of R is marked 1. The entire host image can embed 4×m×n×M/(2m+1)×N/(2n+1) bits. Extraction simply tests the XOR between the centre pixel and the non-overlapping part of each sub-block. All embedded bits are collected and shuffled back to their original order. The adaptive aspect means that the choice of sub-block partition affects the capacity and imperceptibility we want to select. The experimental results show that the method provides a large embedding capacity, keeps the embedding imperceptible, reveals the host image losslessly, and uses the R host image to restore the original host image completely.
ABNT, Harvard, Vancouver, APA, etc. styles
38

Pham, Tung Huy. "Some problems in high dimensional data analysis". 2010. http://repository.unimelb.edu.au/10187/8399.

Full text of the source
Abstract:
The boom in economics and technology has had an enormous impact on society. Along with these developments, human activities nowadays produce massive amounts of data that can be collected easily and at relatively low cost with the aid of new technologies. Many examples can be mentioned here, including web term-document data, sensor arrays, gene expression, finance data, imaging and hyperspectral analysis. Because of the enormous amount of data from various new and different sources, more and more challenging scientific problems appear, and these have changed the types of problems on which mathematical scientists work.
In traditional statistics, the dimension of the data, p say, is low, with many observations, n say. In this case, classical results such as the Central Limit Theorem are often applied to obtain some understanding of the data. A new challenge for statisticians today is a different setting, in which the data dimension is very large and the number of observations is small. The mathematical assumption now could be p > n, or even p tending to infinity with n fixed, as happens, for example, when there are few patients but many genes. In these cases, classical methods fail to produce a good understanding of the nature of the problem. Hence, new methods need to be found to solve these problems, together with mathematical explanations that generalize these cases.
The research presented in this thesis covers two problems, variable selection and classification, in the case where the dimension is very large. The work on variable selection, in particular the Adaptive Lasso, was completed by June 2007; the research on classification was carried out throughout 2008 and 2009; and the research on the Dantzig selector and the Lasso was finished in July 2009. The thesis is therefore divided into two parts. In the first part we study the Adaptive Lasso, the Lasso and the Dantzig selector: Chapter 2 presents some results for the Adaptive Lasso, and Chapter 3 provides two examples showing that neither the Dantzig selector nor the Lasso is definitively better than the other. The second part is organized as follows. In Chapter 5 we construct the model setting. In Chapter 6 we summarize, and prove some results on, the scaled centroid-based classifier. Because there are similarities between the Support Vector Machine (SVM) and Distance Weighted Discrimination (DWD) classifiers, Chapter 8 introduces a class of distance-based classifiers that can be considered a generalization of the SVM and DWD classifiers. Chapters 9 and 10 are about the SVM and DWD classifiers, and Chapter 11 demonstrates the performance of these classifiers on simulated data sets and some cancer data sets.
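As a small, hedged illustration of the p ≫ n variable-selection setting treated in the first part of that thesis, the following Python sketch runs the Lasso and a simple Adaptive Lasso on simulated data; the pilot-weighting scheme shown is only one common construction, not the thesis's specific estimator.

```python
import numpy as np
from sklearn.linear_model import Lasso

# n = 50 observations, p = 200 candidate variables, only the first 3 active.
rng = np.random.default_rng(0)
n, p = 50, 200
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + 0.5 * rng.standard_normal(n)

# Ordinary Lasso
lasso = Lasso(alpha=0.1).fit(X, y)
print("Lasso selects variables", np.flatnonzero(lasso.coef_))

# A simple Adaptive Lasso: penalise each variable in inverse proportion to a
# pilot estimate of its coefficient (here the ordinary Lasso fit is reused as
# the pilot), implemented by rescaling the columns of X before refitting.
w = 1.0 / (np.abs(lasso.coef_) + 1e-6)      # per-variable penalty weights
ada = Lasso(alpha=0.1).fit(X / w, y)
print("Adaptive Lasso selects variables", np.flatnonzero(ada.coef_))
```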
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Muthulaxmi, S. "Emulating Variable Block Size Caches". Thesis, 1998. https://etd.iisc.ac.in/handle/2005/2184.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Muthulaxmi, S. "Emulating Variable Block Size Caches". Thesis, 1998. http://etd.iisc.ernet.in/handle/2005/2184.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Chen, Wei-Da, e 陳威達. "Variable Block Size Reversible Image Watermarking Approach". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/63429040511853512471.

Texto completo da fonte
Resumo:
Master's
Hsuan Chuang University
Master's Program, Department of Information Science
97
A reversible watermarking approach recovers the original image from a watermarked image after extracting the embedded watermarks. This paper presents a variable block size reversible image watermarking approach. The proposed method first segments an image into 8×8, 4×4 or 2×2 blocks according to their structures. Then, the differences between the central pixel and the other pixels in each block are enlarged. Finally, watermarks are embedded into the LSBs of these differences. Experimental results show that the proposed variable block size method has a higher capacity than the conventional fixed block size method.
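Although the abstract does not spell out the exact expansion rule, the core idea of enlarging centre differences and hiding bits in their LSBs can be sketched as follows; this is a toy illustration under those assumptions, with overflow control and the 8×8/4×4/2×2 block-size selection left out.

```python
import numpy as np

def embed_block(block, bits):
    """Hedged sketch: each pixel's difference from the block's central pixel
    is doubled ("enlarged") and one watermark bit is written into the LSB of
    the enlarged difference."""
    h, w = block.shape
    centre = int(block[h // 2, w // 2])
    out = block.astype(np.int32)
    k = 0
    for i in range(h):
        for j in range(w):
            if (i, j) == (h // 2, w // 2) or k >= len(bits):
                continue
            d = int(block[i, j]) - centre
            out[i, j] = centre + 2 * d + bits[k]    # enlarged difference carries one bit
            k += 1
    return out

def extract_block(marked):
    """Invert the step: bit = LSB of the difference, original difference = half."""
    h, w = marked.shape
    centre = int(marked[h // 2, w // 2])
    bits, restored = [], marked.copy()
    for i in range(h):
        for j in range(w):
            if (i, j) == (h // 2, w // 2):
                continue
            d = int(marked[i, j]) - centre
            bits.append(d & 1)
            restored[i, j] = centre + (d >> 1)      # floor halving undoes 2*d + bit
    return bits, restored

# Round trip on a toy 4x4 block (the centre is taken at (h//2, w//2) by convention)
blk = np.array([[10, 12, 11, 13], [9, 10, 10, 12],
                [11, 10, 9, 10], [12, 13, 11, 10]], dtype=np.int32)
payload = [1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]   # one bit per non-centre pixel
bits, restored = extract_block(embed_block(blk, payload))
assert bits == payload and np.array_equal(restored, blk)
```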
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Huang, Zheng-Bin, e 黃正斌. "Variable block size true motion estimation algorithm". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/60609432991123926256.

Texto completo da fonte
Resumo:
Master's
National Cheng Kung University
Institute of Computer and Communication Engineering
94
The quality of a motion-vector-based interpolated frame in frame rate up-conversion depends predominantly on the accuracy of the motion vectors. In this thesis, we propose a robust true motion estimation algorithm to enhance the accuracy of the motion vector field. Several techniques used in general video coding systems are introduced into this algorithm. First, a multi-pass motion search refines the motion vector field more accurately, since the spatial motion correlation grows with each pass. Second, variable block size motion estimation applies a block size suited to the shape of the moving objects. Third, convergence propagation and a small set of candidate search points efficiently reduce the time consumed by the multi-pass motion search. Finally, differing from the traditional SAD measurement, a new distortion criterion is proposed to enhance resistance to noise and shadow.
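For reference, a bare-bones full-search block matcher using the conventional SAD criterion is sketched below in Python; the thesis's contributions (multi-pass refinement, variable block sizes, convergence propagation, candidate pruning, and the noise/shadow-robust distortion measure) would replace or extend the pieces noted in the comments.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences, the baseline criterion that the thesis
    replaces with a more noise/shadow-robust measure."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def block_motion_search(cur, ref, y, x, bs=16, sr=8):
    """Minimal full-search motion estimation for one bs x bs block at (y, x)
    over a +/-sr search window; multi-pass refinement, variable block sizes
    and candidate pruning are omitted here."""
    block = cur[y:y + bs, x:x + bs]
    best, best_mv = None, (0, 0)
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + bs > ref.shape[0] or xx + bs > ref.shape[1]:
                continue                      # candidate falls outside the frame
            cost = sad(block, ref[yy:yy + bs, xx:xx + bs])
            if best is None or cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv, best
```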
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Jian, Jhih-Wei, e 簡智韋. "Variable Block size Wavelet-Transform Medical Image Coding". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/38954297042128467624.

Texto completo da fonte
Resumo:
Master's
National Taiwan Ocean University
Department of Communications and Navigation Engineering
98
The wavelet transform is widely used in image compression because it provides a multiresolution representation of images, and the transformed coefficients can be compressed further with vector quantization. In this study, we propose a quadtree segmentation method for medical image preprocessing. The quadtree algorithm divides a given MRI medical image so that regions with detail are segmented into blocks of smaller size, while the background of the image is assigned larger blocks. After the wavelet transform, a vector-quantization codebook of proper size is chosen, and bits are allocated according to the variance of each sub-band image block. For this proposed medical image compression scheme, simulation results show acceptable visual quality and a good compression ratio simultaneously. Furthermore, because the codebook size is reduced, computational time is saved. A system performance analysis is also presented in this thesis. Keywords: Quadtree Segmentation, Wavelet Transform
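A minimal version of the variance-driven quadtree split described above might look like the following Python sketch; the split criterion, threshold and block sizes are assumptions chosen for illustration, and the wavelet transform and vector-quantization stages are not shown.

```python
import numpy as np

def quadtree_blocks(img, y, x, size, min_size=4, thr=100.0):
    """Hedged sketch of quadtree pre-segmentation: a block whose pixel
    variance exceeds a threshold is split into four quadrants, recursively
    down to min_size, so detailed regions end up with small blocks and the
    smooth background keeps large ones."""
    region = img[y:y + size, x:x + size]
    if size <= min_size or region.var() <= thr:
        return [(y, x, size)]                       # keep as one leaf block
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_blocks(img, y + dy, x + dx, half, min_size, thr)
    return leaves

# Tile a 128x128 image into 32x32 root blocks and segment each one
rng = np.random.default_rng(1)
img = rng.integers(0, 256, (128, 128)).astype(np.float64)
leaves = [b for ty in range(0, 128, 32) for tx in range(0, 128, 32)
          for b in quadtree_blocks(img, ty, tx, 32)]
print(len(leaves), "leaf blocks")
```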
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Chen, Jing Jhih, e 陳景智. "Design and Implementation of H.264 Variable Block Size Motion Estimation". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/65097939673894177646.

Texto completo da fonte
Resumo:
Master's
Ching Yun University of Science and Technology
Graduate Institute of Electronic Engineering
94
Block matching plays an important role among the algorithms used in video coding systems, because it is the simplest and most effective way to remove temporal redundancy from video information. This thesis adopts an efficient hardware architecture to perform H.264 variable block size matching. The hardware circuit is verified through simulation, and the H.264 variable block size matching is run in practice on Xilinx FPGAs. Three kinds of Xilinx development boards and chip types are examined, the chip most suitable for the hardware architecture is selected, and the variable block size circuit is developed on it.
Estilos ABNT, Harvard, Vancouver, APA, etc.
45

Hwang, Chien-Hsin, e 黃謙信. "The Transient Analysis of the Variable Step Size and the Variable Regularization NLMS Algorithm". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/343q8n.

Texto completo da fonte
Resumo:
Master's
Yuan Ze University
Department of Electrical Engineering
105
In this paper, we carry out the transient analysis of two proposed adaptive filtering algorithms: a particular variable step size NLMS (VSS-NLMS) algorithm and a particular variable regularization NLMS (VR-NLMS) algorithm. We follow the procedure of the NLMS transient analysis in the reference and use some approximating assumptions to help derive the transient analysis of the two algorithms. Finally, we assess the goodness of fit using computer simulations.
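For orientation, the sketch below shows an NLMS filter with one common variable-step-size rule (the step is scaled by a smoothed estimate of the error power); the specific VSS-NLMS and VR-NLMS updates analysed in the thesis may differ, so this only illustrates the general structure being analysed.

```python
import numpy as np

def vss_nlms(x, d, M=8, mu_max=1.0, alpha=0.95, gamma=1e-3, eps=1e-6):
    """Variable-step-size NLMS sketch: mu shrinks as the smoothed error
    power drops, trading fast initial convergence for low misadjustment."""
    w = np.zeros(M)                         # adaptive filter weights
    p = 0.0                                 # smoothed error power
    errors = np.zeros(len(d))
    for k in range(M - 1, len(d)):
        u = x[k - M + 1:k + 1][::-1]        # [x[k], x[k-1], ..., x[k-M+1]]
        e = d[k] - w @ u
        p = alpha * p + (1 - alpha) * e * e
        mu = mu_max * p / (p + gamma)       # step size shrinks as error power drops
        w += mu * e * u / (u @ u + eps)     # normalised LMS update
        errors[k] = e
    return w, errors

# Identify an unknown 8-tap FIR system from noisy observations
rng = np.random.default_rng(0)
h_true = rng.standard_normal(8)
x = rng.standard_normal(5000)
d = np.convolve(x, h_true, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat, _ = vss_nlms(x, d, M=8)
print("max tap error:", float(np.max(np.abs(w_hat - h_true))))
```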
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Huang, Guo-Tai, e 黃國泰. "A Study of Control Charts with Variable Sample Size". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/51993886110283599484.

Texto completo da fonte
Resumo:
Master's
National Sun Yat-sen University
Department of Applied Mathematics
92
Shewhart X-bar control charts with estimated control limits are widely used in practice. When the sample size is not fixed, we propose seven statistics to estimate the standard deviation sigma. These estimators are applied to estimate the control limits of the Shewhart X-bar control chart. The estimated results obtained through simulation are given and discussed. Finally, we investigate the performance of the Shewhart X-bar control charts based on the seven estimators of sigma via their simulated average run length (ARL).
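To make the variable-sample-size setting concrete, here is a small simulation sketch in Python: Phase I samples of varying size are pooled into one sigma estimate (just one possible estimator, not necessarily any of the seven studied in the thesis), and the in-control ARL of the resulting X-bar chart is then estimated by simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pooled_sigma(samples):
    """Pooled-variance estimate of sigma from samples of unequal size."""
    num = sum((len(s) - 1) * np.var(s, ddof=1) for s in samples)
    den = sum(len(s) - 1 for s in samples)
    return np.sqrt(num / den)

def run_length(mu0, sigma_hat, sizes, rng, shift=0.0):
    """Count samples until the first X-bar falls outside mu0 +/- 3*sigma/sqrt(n),
    where n is redrawn for every sample to mimic a variable sample size."""
    for t in range(1, 100_000):
        n = rng.choice(sizes)
        xbar = rng.normal(mu0 + shift, 1.0, n).mean()
        if abs(xbar - mu0) > 3 * sigma_hat / np.sqrt(n):
            return t
    return 100_000

# Phase I: 30 reference samples with sizes drawn from {3, 5, 7}
sizes = [3, 5, 7]
phase1 = [rng.normal(0.0, 1.0, rng.choice(sizes)) for _ in range(30)]
sigma_hat = pooled_sigma(phase1)

# In-control ARL (near 370 when exact 3-sigma limits are used)
arl = np.mean([run_length(0.0, sigma_hat, sizes, rng) for _ in range(500)])
print("estimated in-control ARL:", arl)
```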
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Kern, Ludwig August. "Prototype particle size analyzer incorporating variable focal length optics". Thesis, 1987. http://hdl.handle.net/10945/22443.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

"Variable block size motion estimation hardware for video encoders". 2007. http://library.cuhk.edu.hk/record=b5893113.

Texto completo da fonte
Resumo:
Li, Man Ho.
Thesis submitted in: November 2006.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2007.
Includes bibliographical references (leaves 137-143).
Abstracts in English and Chinese.
Abstract --- p.i
Acknowledgement --- p.iv
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Motivation --- p.3
Chapter 1.2 --- The objectives of this thesis --- p.4
Chapter 1.3 --- Contributions --- p.5
Chapter 1.4 --- Thesis structure --- p.6
Chapter 2 --- Digital video compression --- p.8
Chapter 2.1 --- Introduction --- p.8
Chapter 2.2 --- Fundamentals of lossy video compression --- p.9
Chapter 2.2.1 --- Video compression and human visual systems --- p.10
Chapter 2.2.2 --- Representation of color --- p.10
Chapter 2.2.3 --- Sampling methods - frames and fields --- p.11
Chapter 2.2.4 --- Compression methods --- p.11
Chapter 2.2.5 --- Motion estimation --- p.12
Chapter 2.2.6 --- Motion compensation --- p.13
Chapter 2.2.7 --- Transform --- p.13
Chapter 2.2.8 --- Quantization --- p.14
Chapter 2.2.9 --- Entropy Encoding --- p.14
Chapter 2.2.10 --- Intra-prediction unit --- p.14
Chapter 2.2.11 --- Deblocking filter --- p.15
Chapter 2.2.12 --- Complexity analysis of different compression stages --- p.16
Chapter 2.3 --- Motion estimation process --- p.16
Chapter 2.3.1 --- Block-based matching method --- p.16
Chapter 2.3.2 --- Motion estimation procedure --- p.18
Chapter 2.3.3 --- Matching Criteria --- p.19
Chapter 2.3.4 --- Motion vectors --- p.21
Chapter 2.3.5 --- Quality judgment --- p.22
Chapter 2.4 --- Block-based matching algorithms for motion estimation --- p.23
Chapter 2.4.1 --- Full search (FS) --- p.23
Chapter 2.4.2 --- Three-step search (TSS) --- p.24
Chapter 2.4.3 --- Two-dimensional Logarithmic Search Algorithm (2D-log search) --- p.25
Chapter 2.4.4 --- Diamond Search (DS) --- p.25
Chapter 2.4.5 --- Fast full search (FFS) --- p.26
Chapter 2.5 --- Complexity analysis of motion estimation --- p.27
Chapter 2.5.1 --- Different searching algorithms --- p.28
Chapter 2.5.2 --- Fixed-block size motion estimation --- p.28
Chapter 2.5.3 --- Variable block size motion estimation --- p.29
Chapter 2.5.4 --- Sub-pixel motion estimation --- p.30
Chapter 2.5.5 --- Multi-reference frame motion estimation --- p.30
Chapter 2.6 --- Picture quality analysis --- p.31
Chapter 2.7 --- Summary --- p.32
Chapter 3 --- Arithmetic for video encoding --- p.33
Chapter 3.1 --- Introduction --- p.33
Chapter 3.2 --- Number systems --- p.34
Chapter 3.2.1 --- Non-redundant Number System --- p.34
Chapter 3.2.2 --- Redundant number system --- p.36
Chapter 3.3 --- Addition/subtraction algorithm --- p.38
Chapter 3.3.1 --- Non-redundant number addition --- p.39
Chapter 3.3.2 --- Carry-save number addition --- p.39
Chapter 3.3.3 --- Signed-digit number addition --- p.40
Chapter 3.4 --- Bit-serial algorithms --- p.42
Chapter 3.4.1 --- Least-significant-bit (LSB) first mode --- p.42
Chapter 3.4.2 --- Most-significant-bit (MSB) first mode --- p.43
Chapter 3.5 --- Absolute difference algorithm --- p.44
Chapter 3.5.1 --- Non-redundant algorithm for absolute difference --- p.44
Chapter 3.5.2 --- Redundant algorithm for absolute difference --- p.45
Chapter 3.6 --- Multi-operand addition algorithm --- p.47
Chapter 3.6.1 --- Bit-parallel non-redundant adder tree implementation --- p.47
Chapter 3.6.2 --- Bit-parallel carry-save adder tree implementation --- p.49
Chapter 3.6.3 --- Bit serial signed digit adder tree implementation --- p.49
Chapter 3.7 --- Comparison algorithms --- p.50
Chapter 3.7.1 --- Non-redundant comparison algorithm --- p.51
Chapter 3.7.2 --- Signed-digit comparison algorithm --- p.52
Chapter 3.8 --- Summary --- p.53
Chapter 4 --- VLSI architectures for video encoding --- p.54
Chapter 4.1 --- Introduction --- p.54
Chapter 4.2 --- Implementation platform - (FPGA) --- p.55
Chapter 4.2.1 --- Basic FPGA architecture --- p.55
Chapter 4.2.2 --- DSP blocks in FPGA device --- p.56
Chapter 4.2.3 --- Advantages employing FPGA --- p.57
Chapter 4.2.4 --- Commercial FPGA Device --- p.58
Chapter 4.3 --- Top level architecture of motion estimation processor --- p.59
Chapter 4.4 --- Bit-parallel architectures for motion estimation --- p.60
Chapter 4.4.1 --- Systolic arrays --- p.60
Chapter 4.4.2 --- Mapping of a motion estimation algorithm onto systolic array --- p.61
Chapter 4.4.3 --- 1-D systolic array architecture (LA-1D) --- p.63
Chapter 4.4.4 --- 2-D systolic array architecture (LA-2D) --- p.64
Chapter 4.4.5 --- 1-D Tree architecture (GA-1D) --- p.64
Chapter 4.4.6 --- 2-D Tree architecture (GA-2D) --- p.65
Chapter 4.4.7 --- Variable block size support in bit-parallel architectures --- p.66
Chapter 4.5 --- Bit-serial motion estimation architecture --- p.68
Chapter 4.5.1 --- Data Processing Direction --- p.68
Chapter 4.5.2 --- Algorithm mapping and dataflow design --- p.68
Chapter 4.5.3 --- Early termination scheme --- p.69
Chapter 4.5.4 --- Top-level architecture --- p.70
Chapter 4.5.5 --- Non redundant positive number to signed digit conversion --- p.71
Chapter 4.5.6 --- Signed-digit adder tree --- p.73
Chapter 4.5.7 --- SAD merger --- p.74
Chapter 4.5.8 --- Signed-digit comparator --- p.75
Chapter 4.5.9 --- Early termination controller --- p.76
Chapter 4.5.10 --- Data scheduling and timeline --- p.80
Chapter 4.6 --- Decision metric in different architectural types --- p.80
Chapter 4.6.1 --- Throughput --- p.81
Chapter 4.6.2 --- Memory bandwidth --- p.83
Chapter 4.6.3 --- Silicon area occupied and power consumption --- p.83
Chapter 4.7 --- Architecture selection for different applications --- p.84
Chapter 4.7.1 --- CIF and QCIF resolution --- p.84
Chapter 4.7.2 --- SDTV resolution --- p.85
Chapter 4.7.3 --- HDTV resolution --- p.85
Chapter 4.8 --- Summary --- p.86
Chapter 5 --- Results and comparison --- p.87
Chapter 5.1 --- Introduction --- p.87
Chapter 5.2 --- Implementation details --- p.87
Chapter 5.2.1 --- Bit-parallel 1-D systolic array --- p.88
Chapter 5.2.2 --- Bit-parallel 2-D systolic array --- p.89
Chapter 5.2.3 --- Bit-parallel Tree architecture --- p.90
Chapter 5.2.4 --- MSB-first bit-serial design --- p.91
Chapter 5.3 --- Comparison between motion estimation architectures --- p.93
Chapter 5.3.1 --- Throughput and latency --- p.93
Chapter 5.3.2 --- Occupied resources --- p.94
Chapter 5.3.3 --- Memory bandwidth --- p.95
Chapter 5.3.4 --- Motion estimation algorithm --- p.95
Chapter 5.3.5 --- Power consumption --- p.97
Chapter 5.4 --- Comparison to ASIC and FPGA architectures in past literature --- p.99
Chapter 5.5 --- Summary --- p.101
Chapter 6 --- Conclusion --- p.102
Chapter 6.1 --- Summary --- p.102
Chapter 6.1.1 --- Algorithmic optimizations --- p.102
Chapter 6.1.2 --- Architecture and arithmetic optimizations --- p.103
Chapter 6.1.3 --- Implementation on an FPGA platform --- p.104
Chapter 6.2 --- Future work --- p.106
Chapter A --- VHDL Sources --- p.108
Chapter A.1 --- Online Full Adder --- p.108
Chapter A.2 --- Online Signed Digit Full Adder --- p.109
Chapter A.3 --- Online Full Adder Tree --- p.110
Chapter A.4 --- SAD merger --- p.112
Chapter A.5 --- Signed digit adder tree stage (top) --- p.116
Chapter A.6 --- Absolute element --- p.118
Chapter A.7 --- Absolute stage (top) --- p.119
Chapter A.8 --- Online comparator element --- p.120
Chapter A.9 --- Comparator stage (top) --- p.122
Chapter A.10 --- MSB-first motion estimation processor --- p.134
Bibliography --- p.137
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Yang, Hau-Yu, e 楊濠宇. "The study of Variable Sample Size Cpm Control Chart". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/4ksa47.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Liu, Han-Sheng, e 劉瀚升. "Parallel VLSI Architectures for Variable Block Size Motion Estimation". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/08983155936817630201.

Texto completo da fonte
Resumo:
Master's
National Dong Hwa University
Department of Computer Science and Information Engineering
98
The H.264/AVC video coding standard was developed by the Joint Video Team (JVT), consisting of experts from the international study groups Video Coding Experts Group (VCEG) and Moving Picture Experts Group (MPEG), and it significantly improves video coding. Motion estimation is one of the core components of H.264/AVC video coding. Variable block size motion estimation (VBSME) is a video coding technique that reduces distortion, provides more accurate predictions, reduces the amount of coded data, and increases the utilization of network bandwidth. This thesis proposes parallel VLSI architectures for VBSME applied to the full search block matching algorithm (FSBMA). The proposed architecture uses a pipelined design to balance the execution time of each stage and thereby increase performance, and it employs parallel structures to improve throughput and lower computation time. Within the pipeline, the processing elements use hierarchical structures to calculate seven kinds of blocks (4×4, 8×4, 4×8, 8×8, 16×8, 8×16, and 16×16), which leads to relatively simple circuits and relatively low computational complexity. We use a cell-based design flow with TSMC 0.18 μm CMOS technology to implement the hardware, and the proposed architecture is realized with a physical design flow to show its feasibility. Experimental results show that our parallel architectures increase performance and reduce computational complexity compared to other designs.
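The SAD reuse that such hierarchical processing elements exploit can be illustrated in software: compute the sixteen 4×4 SADs of a macroblock once per candidate motion vector, then sum them to obtain every larger partition. The Python sketch below shows that data flow only; it is not the thesis's hardware architecture.

```python
import numpy as np

def vbsme_sads(cur_mb, ref_mb):
    """Sum the sixteen 4x4 SADs of a 16x16 macroblock hierarchically to get
    the SADs of all seven H.264 partition sizes (width x height:
    4x4, 8x4, 4x8, 8x8, 16x8, 8x16, 16x16) for one candidate motion vector."""
    diff = np.abs(cur_mb.astype(np.int32) - ref_mb.astype(np.int32))
    # 4x4 grid of 4x4-block SADs, indexed [row_block, col_block]
    s44 = diff.reshape(4, 4, 4, 4).sum(axis=(1, 3))
    sads = {"4x4": s44}
    sads["8x4"] = s44[:, 0::2] + s44[:, 1::2]                     # horizontal pairs -> 4x2
    sads["4x8"] = s44[0::2, :] + s44[1::2, :]                     # vertical pairs   -> 2x4
    sads["8x8"] = sads["8x4"][0::2, :] + sads["8x4"][1::2, :]     # 2x2 grid of 8x8 SADs
    sads["16x8"] = sads["8x8"].sum(axis=1)                        # top / bottom halves
    sads["8x16"] = sads["8x8"].sum(axis=0)                        # left / right halves
    sads["16x16"] = sads["8x8"].sum()
    return sads
```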
Estilos ABNT, Harvard, Vancouver, APA, etc.