Dissertations / Theses on the topic 'Data of variable size'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Data of variable size.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Chen, Haiying. "Ranked set sampling for binary and ordered categorical variables with applications in health survey data." Connect to this title online, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1092770729.
Title from first page of PDF file. Document formatted into pages; contains xiii, 109 p.; also includes graphics. Includes bibliographical references (p. 99-102). Available online via OhioLINK's ETD Center.
Liv, Per. "Efficient strategies for collecting posture data using observation and direct measurement." Doctoral thesis, Umeå universitet, Yrkes- och miljömedicin, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-59132.
Högberg, Hans. "Some properties of measures of disagreement and disorder in paired ordinal data." Doctoral thesis, Örebro universitet, Handelshögskolan vid Örebro universitet, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-12350.
Statistical methods for ordinal data
Fakhouri, Elie Michel. "Variable block-size motion estimation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ37260.pdf.
Ruengvirayudh, Pornchanok. "A Monte Carlo Study of Parallel Analysis, Minimum Average Partial, Indicator Function, and Modified Average Roots for Determining the Number of Dimensions with Binary Variables in Test Data: Impact of Sample Size and Factor Structure." Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou151516919677091.
Full textNataša, Krklec Jerinkić. "Line search methods with variable sample size." Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2014. http://dx.doi.org/10.2298/NS20140117KRKLEC.
Full textU okviru ove teze posmatra se problem optimizacije bez ograničenja pri čcemu je funkcija cilja u formi matematičkog očekivanja. Očekivanje se odnosi na slučajnu promenljivu koja predstavlja neizvesnost. Zbog toga je funkcija cilja, u stvari, deterministička veličina. Ipak, odredjivanje analitičkog oblika te funkcije cilja može biti vrlo komplikovano pa čak i nemoguće. Zbog toga se za aproksimaciju često koristi uzoračko očcekivanje. Da bi se postigla dobra aproksimacija, obično je neophodan obiman uzorak. Ako pretpostavimo da se uzorak realizuje pre početka procesa optimizacije, možemo posmatrati uzoračko očekivanje kao determinističku funkciju. Medjutim, primena nekog od determinističkih metoda direktno na tu funkciju moze biti veoma skupa jer evaluacija funkcije pod ocekivanjem često predstavlja veliki trošak i uobičajeno je da se ukupan trošak optimizacije meri po broju izračcunavanja funkcije pod očekivanjem. Zbog toga su razvijeni metodi sa promenljivom veličinom uzorka. Većcina njih je bazirana na odredjivanju optimalne dinamike uvećanja uzorka.Glavni cilj ove teze je razvoj algoritma koji, kroz smanjenje broja izračcunavanja funkcije, smanjuje ukupne trošskove optimizacije. Ideja je da se veličina uzorka smanji kad god je to moguće. Grubo rečeno, izbegava se koriscenje velike preciznosti (velikog uzorka) kada smo daleko od rešsenja. U čcetvrtom poglavlju ove teze opisana je nova klasa metoda i predstavljena je analiza konvergencije. Dokazano je da je aproksimacija rešenja koju dobijamo bar toliko dobra koliko i za metod koji radi sa celim uzorkom sve vreme.Još jedna bitna karakteristika metoda koji su ovde razmatrani je primena linijskog pretražzivanja u cilju odredjivanja naredne iteracije. Osnovna ideja je da se nadje odgovarajući pravac i da se duž njega vršsi pretraga za dužzinom koraka koja će dovoljno smanjiti vrednost funkcije. Dovoljno smanjenje je odredjeno pravilom linijskog pretraživanja. U čcetvrtom poglavlju to pravilo je monotono što znači da zahtevamo striktno smanjenje vrednosti funkcije. U cilju jos većeg smanjenja troškova optimizacije kao i proširenja skupa pogodnih pravaca, u petom poglavlju koristimo nemonotona pravila linijskog pretraživanja koja su modifikovana zbog promenljive velicine uzorka. Takodje, razmatrani su uslovi za globalnu konvergenciju i R-linearnu brzinu konvergencije.Numerički rezultati su predstavljeni u šestom poglavlju. Test problemi su razliciti - neki od njih su akademski, a neki su realni. Akademski problemi su tu da nam daju bolji uvid u ponašanje algoritama. Sa druge strane, podaci koji poticu od stvarnih problema služe kao pravi test za primenljivost pomenutih algoritama. U prvom delu tog poglavlja akcenat je na načinu ažuriranja veličine uzorka. Različite varijante metoda koji su ovde predloženi porede se medjusobno kao i sa drugim šemama za ažuriranje veličine uzorka. Drugi deo poglavlja pretežno je posvećen poredjenju različitih pravila linijskog pretraživanja sa različitim pravcima pretraživanja u okviru promenljive veličine uzorka. Uzimajuci sve postignute rezultate u obzir dolazi se do zaključcka da variranje veličine uzorka može značajno popraviti učinak algoritma, posebno ako se koriste nemonotone metode linijskog pretraživanja.U prvom poglavlju ove teze opisana je motivacija kao i osnovni pojmovi potrebni za praćenje preostalih poglavlja. 
U drugom poglavlju je iznet pregled osnova nelinearne optimizacije sa akcentom na metode linijskog pretraživanja, dok su u trećem poglavlju predstavljene osnove stohastičke optimizacije. Pomenuta poglavlja su tu radi pregleda dosadašnjih relevantnih rezultata dok je originalni doprinos ove teze predstavljen u poglavljima 4-6.
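The core loop sketched in this abstract (work with a small sample far from the solution, enlarge it as precision becomes the bottleneck, and take Armijo line search steps in between) can be illustrated with a short Python sketch. This is a toy under assumed rules: the doubling schedule, the 1/sqrt(n) precision test, and the quadratic objective are inventions for illustration, not the algorithm analyzed in the thesis.

```python
import numpy as np

def f_and_grad(w, xi):
    """Sample average of the toy stochastic objective F(w, xi) = (w - xi)^2
    over the current subsample, with its gradient; the true expectation
    is minimized at w = E[xi]."""
    r = w - xi
    return np.mean(r ** 2), 2 * np.mean(r)

def vss_line_search(xi, n0=10, max_iter=200):
    """Gradient descent with monotone Armijo backtracking in which the
    sample size grows only when the sample-average gradient falls below
    the sampling accuracy 1/sqrt(n): a toy variable sample size scheme."""
    w, n = 0.0, n0
    for _ in range(max_iter):
        sub = xi[:n]
        f, g = f_and_grad(w, sub)
        if abs(g) < 1.0 / np.sqrt(n):        # precision matched to sample size
            if n == len(xi):
                break                        # full sample reached: stop
            n = min(2 * n, len(xi))          # demand a larger sample
            continue
        t = 1.0
        while f_and_grad(w - t * g, sub)[0] > f - 1e-4 * t * g * g:
            t /= 2                           # backtrack until sufficient decrease
        w -= t * g
    return w

rng = np.random.default_rng(0)
xi = rng.normal(1.0, 1.0, 10000)
print(vss_line_search(xi))                   # close to E[xi] = 1.0
```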
Hintze, Christopher Jerry. "Modeling correlation in binary count data with application to fragile site identification." Texas A&M University, 2005. http://hdl.handle.net/1969.1/4278.
Sodagari, Shabnam. "Variable block-size disparity estimation in stereo imagery." Thesis, University of Ottawa (Canada), 2003. http://hdl.handle.net/10393/26399.
Dziminski, Martin A. "The evolution of variable offspring provisioning." University of Western Australia, 2005. http://theses.library.uwa.edu.au/adt-WU2005.0134.
Acuna Stamp, Annabelen. "Design Study for Variable Data Printing." University of Cincinnati / OhioLINK, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=ucin962378632.
Shomper, Keith A. "Visualizing program variable data for debugging /." The Ohio State University, 1993. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487848531364488.
Wien, Mathias [Verfasser]. "Variable Block-Size Transforms for Hybrid Video Coding / Mathias Wien." Aachen : Shaker, 2004. http://d-nb.info/1172614245/34.
Dominicus, Annica. "Latent variable models for longitudinal twin data." Doctoral thesis, Stockholm : Mathematical statistics, Dept. of mathematics, Stockholm university, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-848.
Albanese, Maria Teresinha. "Latent variable models for binary response data." Thesis, London School of Economics and Political Science (University of London), 1990. http://etheses.lse.ac.uk/1220/.
Yacoub, Francois [advisor: MacGregor, John Frederick]. "Learning from data using latent variable methods." *McMaster only, 2006.
McClelland, Robyn L. "Regression based variable clustering for data reduction /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/9611.
Huang, Shiping. "Exploratory visualization of data with variable quality." Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-01115-225546/.
Perra, Silvia. "Objective Bayesian variable selection for censored data." Doctoral thesis, Università degli Studi di Cagliari, 2013. http://hdl.handle.net/11584/266108.
Full textMoreno, Carlos 1965. "Variable frame size for vector quantization and application to speech coding." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99001.
In the case of VQ applied to speech signals, the input signal is divided into frames of a given length. Depending on the particular technique being used, the system either extracts a vector representation of the whole frame (usually some form of spectral representation), or applies some processing to the signal and uses the processed frame itself as the vector to be quantized. The two techniques are often combined, and the system uses VQ for the spectral representation of the frame and also for the processed frame.
A typical assumption in this scheme is that the frame size is fixed. This simplifies the scheme and thus reduces the computing-power requirements for a practical implementation.
In this study, we present a modification to this technique that allows for variable-size frames, providing an additional degree of freedom for the optimization of the data compression process.
The quantization error is minimized by choosing the closest point in the codebook for the given frame. We now minimize further by also choosing the frame size that yields the lowest quantization error. Notice that the quantization error is a function of the given frame and the codebook; considering different frame sizes gives different actual frames and hence different quantization errors, allowing us to choose the optimal size and effectively providing a second level of optimization.
This idea has two caveats: we require additional data to represent the frame, since we have to indicate the size that was used, and the complexity of the system increases, since trying different frame sizes requires more computing power in a practical implementation of the scheme.
The results of this study show that this technique effectively improves the quality of the compressed signal at a given compression ratio, even if the improvement is not dramatic. Whether or not the increase in complexity is worth the quality improvement for a given application depends entirely on the design constraints for that particular application.
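The second level of optimization described above (searching over frame sizes as well as codebook entries) can be sketched in a few lines of Python. The per-sample error normalization, the greedy left-to-right scan, and the random codebooks are assumptions of this illustration, not Moreno's actual design.

```python
import numpy as np

def quantize(frame, codebook):
    """Return (index, squared error) of the codebook vector closest to frame."""
    errors = ((codebook - frame) ** 2).sum(axis=1)
    best = int(np.argmin(errors))
    return best, float(errors[best])

def encode_next(signal, pos, codebooks):
    """Pick the frame size whose VQ error per sample is lowest at `pos`.

    codebooks maps frame size -> (K, size) array. The decoder, holding
    the same codebooks, reads (size, index) and advances by size."""
    best = None
    for size, cb in codebooks.items():
        if pos + size <= len(signal):
            idx, err = quantize(signal[pos:pos + size], cb)
            score = err / size               # normalize so sizes compare fairly
            if best is None or score < best[0]:
                best = (score, size, idx)
    return best[1], best[2]

# toy usage: random codebooks for frame sizes 4 and 8
rng = np.random.default_rng(0)
codebooks = {s: rng.normal(size=(16, s)) for s in (4, 8)}
signal = rng.normal(size=64)
pos, stream = 0, []
while len(signal) - pos >= min(codebooks):
    size, idx = encode_next(signal, pos, codebooks)
    stream.append((size, idx))               # side information + VQ index
    pos += size
print(stream)
```

The (size, index) pairs in the output stream make the first caveat above concrete: the chosen size must be transmitted alongside each codebook index.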
Leek, Jeffrey Tullis. "Surrogate variable analysis /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/9586.
Full textBroc, Camilo. "Variable selection for data aggregated from different sources with group of variable structure." Thesis, Pau, 2019. http://www.theses.fr/2019PAUU3048.
During the last decades, the amount of available genetic data on populations has grown drastically. On one side, refinements of chemical technologies have made it possible to extract an individual's genome at an accessible cost. On the other side, consortia of institutions and laboratories around the world have permitted the collection of data on a variety of individuals and populations. This amount of data has raised hopes about our ability to understand the deepest mechanisms involved in the functioning of our cells. Notably, genetic epidemiology is a field that studies the relation between genetic features and the onset of a disease. Specific statistical methods have been necessary for these analyses, especially because of the dimensions of the available data: in genetics, information is contained in a high number of variables compared to the number of observations. In this dissertation, two contributions are presented. The first project, called PIGE (Pathway-Interaction Gene Environment), deals with gene-environment interaction assessments. The second aims at developing variable selection methods for data with group structures in both the variables and the observations. The document is divided into six chapters. The first chapter sets the background of this work, presenting the biological and mathematical notation and concepts and giving a history of the motivation behind genetics and genetic epidemiology. The second chapter presents an overview of the statistical methods currently in use for genetic epidemiology. The third chapter deals with the identification of gene-environment interactions; it includes a presentation of existing approaches for this problem and a contribution of the thesis. The fourth chapter takes up the problem of meta-analysis: a definition of the problem and an overview of existing approaches are presented, and then a new approach is introduced. The fifth chapter explains pleiotropy studies and how the method presented in the previous chapter is suited to this kind of analysis. The last chapter compiles conclusions and lines of future research.
Ahn, Jeongyoun [advisor: Marron, James Stephen]. "High dimension, low sample size data analysis." Chapel Hill, N.C.: University of North Carolina at Chapel Hill, 2006. http://dc.lib.unc.edu/u?/etd,375.
Title from electronic title page (viewed Oct. 10, 2007). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Statistics and Operations Research." Discipline: Statistics and Operations Research; Department/School: Statistics and Operations Research.
Faletra, Melissa Kathleen. "Segregation of Particles of Variable Size and Density in Falling Suspension Droplets." ScholarWorks @ UVM, 2014. http://scholarworks.uvm.edu/graddis/265.
Dunlap, Mickey Paul. "Using the bootstrap to analyze variable stars data." Diss., Texas A&M University, 2004. http://hdl.handle.net/1969.1/1398.
Guo, Lei. "Bayesian Biclustering on Discrete Data: Variable Selection Methods." Thesis, Harvard University, 2013. http://dissertations.umi.com/gsas.harvard:11201.
Statistics
Harrison, Wendy Jane. "Latent variable modelling for complex observational health data." Thesis, University of Leeds, 2016. http://etheses.whiterose.ac.uk/16384/.
Williams, Andrea E. [advisor: Gilbert, Juan E.]. "Usability size N." Auburn, Ala., 2007. http://hdl.handle.net/10415/1386.
Walder, Alistair Neil. "Statistics of shape and size for landmark data." Thesis, University of Leeds, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.303425.
Sajja, Abhilash. "Forensic Reconstruction of Fragmented Variable Bitrate MP3 files." ScholarWorks@UNO, 2010. http://scholarworks.uno.edu/td/1258.
Cheng, Yafeng. "Functional regression analysis and variable selection for motion data." Thesis, University of Newcastle upon Tyne, 2016. http://hdl.handle.net/10443/3150.
Lan, Lan. "Variable Selection in Linear Mixed Model for Longitudinal Data." NCSU, 2006. http://www.lib.ncsu.edu/theses/available/etd-05172006-211924/.
Mainguy, Yves. "A robust variable order facet model for image data." Thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-10222009-124949/.
Full textLin, Cheng-Han, and 林承翰. "A variable block size pattern run-length codingfor test data compression." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/5f3443.
元智大學
資訊工程學系
105
Test data compression is one of the popular topics in the VLSI testing field, and it is key to handling the huge volume of test data. A built-in self-test (BIST) architecture, which lets a circuit test itself, can generate the test data inside the circuit under test without external input, reducing the test data volume. In this thesis, we propose a code-based test data compression approach that achieves a better compression ratio by using variable block size pattern run-length coding. Experiments on the ISCAS'89 benchmark circuits show that the proposed approach achieves a test data compression ratio of up to 70.42%.
Yang, Hsien-Yi, and 楊顯奕. "Variable Pattern Size Pattern Run-length Coding Based on Fibonacci Number for Test data Compression." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/nu7cfb.
元智大學
資訊工程學系
107
As technology keeps advancing, integrated circuits keep shrinking and become more and more dense. Nowadays ICs are developed in 3D forms that are much more complex than 2D planar ones, and the amount of test data keeps growing substantially. Besides the speed of software and hardware computation, data size is a very important factor in VLSI (very large scale integration) testing. Data compression makes the data much smaller and allows much more data to be processed in the same period of time. The topic of this thesis is to compare two different kinds of data compression and find out which gives the better compression ratio. The first compresses data into a variable codeword using factors such as a variable block size field, a variable pattern length, a data-inverse flag, and a repeat record. The second fixes the block size field at 3 bits and represents the pattern length by the Fibonacci sequence (1, 2, 3, 5, 8, 13, 21, 34) instead of 1 to 8, so that fewer block size bits can represent longer pattern lengths. Based on tests on six ISCAS'89 benchmark circuits, we obtained the compression ratios of the two methods and found that the Fibonacci coding has a chance to achieve a better compression ratio.
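To see why a Fibonacci length table can pay off, here is a minimal Python sketch of just the 3-bit length field; the other codeword fields described in the abstract (block size bits, inverse flag, repeat record) are omitted, and a greedy split is assumed for runs longer than one table entry.

```python
# Run lengths representable by one 3-bit code under the two schemes
LINEAR = [1, 2, 3, 4, 5, 6, 7, 8]
FIB = [1, 2, 3, 5, 8, 13, 21, 34]

def encode_run(run, table):
    """Split a run of `run` repeated patterns into 3-bit length codes,
    greedily emitting the largest table entry that still fits."""
    codes = []
    while run > 0:
        idx = max(i for i, n in enumerate(table) if n <= run)
        codes.append(idx)                    # each entry costs 3 bits
        run -= table[idx]
    return codes

# a run of 34 identical patterns: 5 codewords linearly, 1 with Fibonacci
print(len(encode_run(34, LINEAR)), len(encode_run(34, FIB)))
```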
WANG, YU-TZU, and 王愉慈. "A Data Hiding Method Based on Partition Variable Block Size with Exclusive-OR Operation on Binary Image." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/28999648088974917843.
玄奘大學
資訊管理學系碩士班
104
In this paper, we propose a high-capacity data hiding method for binary images. Since a binary image has only two colors, black and white, it is hard to hide data imperceptibly; capacity and imperceptibility are always a trade-off. Before embedding, we shuffle the secret data with a pseudo-random number generator for better security. We divide the M by N host image into as many non-overlapping (2k+1)×(2k+1) sub-blocks as possible, where k = 1, 2, 3, ..., up to min(M, N), and then partition each sub-block into four overlapping (k+1)×(k+1) sub-blocks. All-black or all-white (2k+1)×(2k+1) sub-blocks are skipped. For each of the four (k+1)×(k+1) sub-blocks we check the XOR between the non-overlapping part and the center pixel of the (2k+1)×(2k+1) sub-block, embedding k^2 bits per (k+1)×(k+1) sub-block, or 4×k^2 in total. The entire host image can embed 4×k^2×(M/(2k+1))×(N/(2k+1)) bits. Extraction simply tests the XOR between the center pixel and the non-overlapping part of each sub-block. All embedded bits are collected and shuffled back to their original order. "Adaptive" means that the chosen sub-block partition affects the capacity and imperceptibility we obtain. The experimental results show that the method provides large embedding capacity, remains imperceptible, and recovers the host image losslessly.
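The XOR embedding and extraction rule can be sketched for the smallest case, k = 1 (3×3 blocks, one bit per block rather than the full 4k² payload). The usability test below, which ignores the candidate corner so that embedder and extractor stay synchronized after a flip, is a simplification invented for this sketch; the pseudo-random shuffling and the four overlapping sub-blocks are omitted.

```python
import numpy as np

def usable(block):
    """A block carries a bit only if its pixels, excluding the candidate
    corner (0,0), are not all equal. That predicate never changes when
    the corner is flipped, so embedder and extractor agree on which
    blocks carry data, and uniform (all-black/all-white) blocks are
    skipped as in the thesis."""
    rest = block.reshape(-1)[1:]
    return rest.min() != rest.max()

def embed_bits(img, bits):
    """Simplified XOR embedding for k=1 on a 0/1 binary image: one bit
    per 3x3 block, stored as corner XOR center; flip the corner if needed."""
    img = img.copy()
    it = iter(bits)
    H, W = img.shape
    for r in range(0, H - 2, 3):
        for c in range(0, W - 2, 3):
            block = img[r:r + 3, c:c + 3]
            if not usable(block):
                continue
            try:
                b = next(it)
            except StopIteration:
                return img
            if int(block[0, 0]) ^ int(block[1, 1]) != b:
                img[r, c] ^= 1               # adjust the candidate pixel
    return img

def extract_bits(img):
    """Read corner XOR center from every usable 3x3 block."""
    bits = []
    H, W = img.shape
    for r in range(0, H - 2, 3):
        for c in range(0, W - 2, 3):
            block = img[r:r + 3, c:c + 3]
            if usable(block):
                bits.append(int(block[0, 0]) ^ int(block[1, 1]))
    return bits

rng = np.random.default_rng(5)
host = (rng.random((30, 30)) < 0.5).astype(np.uint8)   # toy binary image
msg = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_bits(host, msg)
print(extract_bits(marked)[:len(msg)] == msg)          # True
```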
CHEN, JI-MING, and 陳紀銘. "An Optimal Data Hiding Method Based on Partition Variable Block Size with Exclusive-OR Operation on Binary Image." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/2ta8pc.
玄奘大學
資訊管理學系碩士班
105
In this thesis, we propose a high-capacity data hiding method for binary images. We divide the host image into as many non-overlapping blocks as possible, and then partition each block into four overlapping sub-blocks. All-black or all-white blocks are skipped. For each of the four sub-blocks we check the XOR between the non-overlapping part and the center pixel of the block. The entire host image can embed 4×m×n×(M/(2m+1))×(N/(2n+1)) bits. Extraction simply tests the XOR between the center pixel and the non-overlapping part of each sub-block. All embedded bits are collected and shuffled back to their original order. "Optimal" means that we select the sub-block partition whose effect on capacity and imperceptibility is best. The experimental results show that the method provides large embedding capacity, remains imperceptible, and recovers the host image losslessly.
SHIH, CHENG-FU, and 施承甫. "A Reversible Data Hiding Method Based on Partition Variable Block Size and Exclusive-OR Operation with Two Host Images for Binary Image." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/yhm8ny.
玄奘大學
資訊管理學系碩士班
106
In this paper, we propose a high-capacity data hiding method for binary images. Since a binary image has only two colors, black and white, it is hard to hide data imperceptibly; capacity and imperceptibility are always a trade-off. Before embedding, we shuffle the secret data with a pseudo-random number generator for better security. We divide the M by N host images C and R into as many non-overlapping (2m+1)×(2n+1) sub-blocks as possible, where m = 1, 2, 3, ... and n = 1, 2, 3, ..., up to min(M, N), and then partition each sub-block into four overlapping (m+1)×(n+1) sub-blocks. All-black or all-white (2m+1)×(2n+1) sub-blocks are skipped. For each of the four (m+1)×(n+1) sub-blocks we check the XOR between the non-overlapping part and the center pixel of the (2m+1)×(2n+1) sub-block, embedding m×n bits per (m+1)×(n+1) sub-block, or 4×m×n in total. When a candidate pixel of C is changed by embedding a secret bit, the corresponding pixel of R is marked 1. The entire host image can embed 4×m×n×(M/(2m+1))×(N/(2n+1)) bits. Extraction simply tests the XOR between the center pixel and the non-overlapping part of each sub-block. All embedded bits are collected and shuffled back to their original order. "Adaptive" means that the chosen sub-block partition affects the capacity and imperceptibility we obtain. The experimental results show that the method provides large embedding capacity, remains imperceptible, recovers the hidden data losslessly, and uses the second host image R to reverse the original host image completely.
Pham, Tung Huy. "Some problems in high dimensional data analysis." 2010. http://repository.unimelb.edu.au/10187/8399.
In traditional statistics, the dimension of the data, p say, is low, with many observations, n say. In this case, classical rules such as the Central Limit Theorem are often applied to obtain some understanding from data. A new challenge to statisticians today is dealing with a different setting, when the data dimension is very large and the number of observations is small. The mathematical assumption now could be p > n, or even p going to infinity with n fixed, as in many cases where there are few patients with many genes. In these cases, classical methods fail to produce a good understanding of the nature of the problem. Hence, new methods need to be found to solve these problems, and mathematical explanations are needed to generalize these cases.
The research presented in this thesis includes two problems, variable selection and classification, in the case where the dimension is very large. The work on variable selection, in particular the Adaptive Lasso, was completed by June 2007, and the research on classification was carried out throughout 2008 and 2009. The research on the Dantzig selector and the Lasso was finished in July 2009. Therefore, this thesis is divided into two parts. In the first part of the thesis we study the Adaptive Lasso, the Lasso and the Dantzig selector. In particular, in Chapter 2 we present some results for the Adaptive Lasso, and Chapter 3 provides two examples showing that neither the Dantzig selector nor the Lasso is definitively better than the other. The second part of the thesis is organized as follows. In Chapter 5, we construct the model setting. In Chapter 6, we summarize, and prove some results on, the scaled centroid-based classifier. Because there are similarities between the Support Vector Machine (SVM) and Distance Weighted Discrimination (DWD) classifiers, Chapter 8 introduces a class of distance-based classifiers that can be considered a generalization of the SVM and DWD classifiers. Chapters 9 and 10 are about the SVM and DWD classifiers. Chapter 11 demonstrates the performance of these classifiers on simulated data sets and some cancer data sets.
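For orientation, the plain centroid rule that the thesis's scaled centroid-based classifier builds on looks like this in Python; the HDLSS scaling correction studied in the thesis is not reproduced here.

```python
import numpy as np

def fit_centroids(X, y):
    """Class centroids for a HDLSS problem (p >> n)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(X, centroids):
    """Assign each row of X to the class with the nearest centroid."""
    classes = list(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# toy HDLSS data: p = 1000 features, n = 20 samples
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 1000))
X[10:] += 0.3                        # mean shift for class 1
y = np.repeat([0, 1], 10)
cents = fit_centroids(X, y)
print((predict(X, cents) == y).mean())   # training accuracy on the toy data
```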
Muthulaxmi, S. "Emulating Variable Block Size Caches." Thesis, 1998. https://etd.iisc.ac.in/handle/2005/2184. (Also listed at http://etd.iisc.ernet.in/handle/2005/2184.)
Full textChen, Wei-Da, and 陳威達. "Variable Block Size Reversible Image Watermarking Approach." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/63429040511853512471.
玄奘大學
資訊科學學系碩士班
97
A reversible watermarking approach recovers the original image from a watermarked image after extracting the embedded watermarks. This thesis presents a variable block size reversible image watermarking approach. The proposed method first segments an image into 8×8, 4×4 or 2×2 blocks according to their block structures. Then, the differences between the central pixel and the other pixels in each block are enlarged. At last, watermarks are embedded into the LSBs of the enlarged differences. Experimental results show that the proposed variable block size method has higher capacity than the conventional fixed block size method.
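The enlarge-then-embed step can be illustrated with a single-block difference expansion sketch in Python: differences from the central pixel are doubled, a watermark bit occupies the freed LSB, and the block is exactly recoverable. Overflow handling and the 8×8/4×4/2×2 structure-based segmentation are omitted, so this shows only the reversibility idea, not the thesis's full method.

```python
import numpy as np

def embed_block(block, bits):
    """Difference expansion within one block: each non-central pixel's
    difference d from the central pixel becomes d' = 2*d + w, hiding
    watermark bit w in the freed LSB. The central pixel is kept intact
    as the anchor for exact recovery."""
    h, w = block.shape
    cy, cx = h // 2, w // 2
    center = int(block[cy, cx])
    out = block.astype(np.int64) - center        # differences d
    k = 0
    for y in range(h):
        for x in range(w):
            if (y, x) != (cy, cx):
                out[y, x] = 2 * out[y, x] + bits[k]   # d' = 2d + w
                k += 1
    return out + center

def extract_block(marked):
    """Recover the watermark bits (LSBs) and the original block (d = d' >> 1)."""
    h, w = marked.shape
    cy, cx = h // 2, w // 2
    center = int(marked[cy, cx])
    d = marked.astype(np.int64) - center
    bits, orig = [], d.copy()
    for y in range(h):
        for x in range(w):
            if (y, x) != (cy, cx):
                bits.append(int(d[y, x]) & 1)
                orig[y, x] = d[y, x] >> 1            # undo the expansion
    return orig + center, bits

rng = np.random.default_rng(6)
block = rng.integers(100, 156, (4, 4))
bits = rng.integers(0, 2, 15).tolist()               # 4*4 - 1 payload bits
recovered, out_bits = extract_block(embed_block(block, bits))
print(np.array_equal(recovered, block), out_bits == bits)   # True True
```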
Huang, Zheng-Bin, and 黃正斌. "Variable block size true motion estimation algorithm." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/60609432991123926256.
國立成功大學
電腦與通信工程研究所
94
The quality of the motion-vector-based interpolated frame in frame rate up-conversion depends predominantly on the accuracy of the motion vectors. In this thesis, we propose a robust true motion estimation algorithm to enhance the accuracy of the motion vector field. Several techniques used in general video coding systems are introduced into this algorithm. First, a multi-pass motion search refines the motion vector field more accurately, as spatial motion correlation grows with each pass. Second, according to the shape of moving objects, variable block size selection applies a suitable block size to motion estimation. Third, convergence propagation and a small set of candidate search points efficiently reduce the time consumed by the multi-pass motion search. Finally, differing from the traditional SAD measurement, a new distortion criterion is proposed to enhance resistance to noise and shadow.
Jian, Jhih-Wei, and 簡智韋. "Variable Block size Wavelet-Transform Medical Image Coding." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/38954297042128467624.
國立臺灣海洋大學
通訊與導航工程系
98
The wavelet transform technique is widely used in image compression because it provides a multiresolution representation of images. With a vector quantization algorithm, the wavelet-transformed coefficients can be compressed further. In this study, we propose a quadtree segmentation method for medical image preprocessing. The quadtree segmentation algorithm divides a given MRI medical image so that regions with image detail are segmented into blocks of smaller size, while the background of the image is assigned larger blocks. Choosing a proper size of vector quantization codebook after the wavelet transform, we apply bit allocation according to the variance of each subband image block. For this proposed medical image compression scheme, simulation results show acceptable visual quality and a good compression ratio simultaneously. Furthermore, because the codebook size is reduced, we save computational time. A system performance analysis is also presented in this thesis. Keywords: Quadtree Segmentation, Wavelet Transform
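A minimal sketch of the quadtree step: a square region is split into four quadrants while its pixel variance exceeds a threshold, so detailed regions end up as small blocks and the smooth background as large ones. The variance criterion and threshold values are illustrative assumptions.

```python
import numpy as np

def quadtree(img, y, x, size, thresh, min_size, blocks):
    """Recursively split img[y:y+size, x:x+size] into four quadrants
    while its variance exceeds `thresh`; leaves are appended to `blocks`
    as (y, x, size) tuples."""
    region = img[y:y + size, x:x + size]
    if size > min_size and region.var() > thresh:
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                quadtree(img, y + dy, x + dx, half, thresh, min_size, blocks)
    else:
        blocks.append((y, x, size))

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:32, 16:32] = rng.normal(size=(16, 16))   # one detailed patch
blocks = []
quadtree(img, 0, 0, 64, thresh=1e-3, min_size=4, blocks=blocks)
print(len(blocks), "blocks")    # few large background blocks, many small ones
```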
Chen, Jing Jhih, and 陳景智. "Design and Implementation of H.264 Variable Block Size Motion Estimation." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/65097939673894177646.
清雲科技大學
電子工程研究所
94
Block matching occupies an important position among the algorithms in dynamic image coding systems, because it is the simplest and most effective way to remove temporal redundancy in video coding. This thesis adopts an efficient hardware structure to accomplish H.264 variable block matching. The hardware circuit is verified by simulation, and the H.264 variable block matching is run on a Xilinx FPGA. Three kinds of Xilinx FPGA development boards and chip types are evaluated, and the chip most suitable for the hardware structure is selected to implement the variable block circuit.
Hwang, Chien-Hsin, and 黃謙信. "The Transient Analysis of the Variable Step Size and the Variable Regularization NLMS Algorithm." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/343q8n.
元智大學
電機工程學系
105
In this thesis, we present the transient analysis of two proposed adaptive algorithms for digital filters. One is a particular variable step size NLMS (VSS-NLMS) algorithm; the other is a particular variable regularization NLMS (VR-NLMS) algorithm. We follow the process of the NLMS transient analysis in the references and use some approximating assumptions to help derive the transient analysis of the two algorithms. Finally, we evaluate the goodness of fit using computer simulation.
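As context for what a variable step size buys, below is a generic VSS-NLMS sketch in Python. The error-driven step size rule used here (mu updated as a*mu + g*e^2 and clipped) is one common choice from the VSS literature, standing in for, not reproducing, the particular VSS-NLMS and VR-NLMS updates analyzed in the thesis.

```python
import numpy as np

def vss_nlms(x, d, M=8, eps=1e-6, mu_min=0.05, mu_max=1.0, a=0.97, g=0.01):
    """NLMS adaptive filter with an error-driven variable step size:
    large errors push mu up for fast tracking, small errors let it
    decay for low steady-state misadjustment."""
    w = np.zeros(M)
    mu = mu_max
    e = np.zeros(len(x))
    for n in range(M - 1, len(x)):
        u = x[n - M + 1:n + 1][::-1]           # regressor [x(n),...,x(n-M+1)]
        e[n] = d[n] - w @ u
        w = w + mu * e[n] * u / (eps + u @ u)  # normalized LMS update
        mu = np.clip(a * mu + g * e[n] ** 2, mu_min, mu_max)
    return w, e

# identify an unknown 8-tap FIR system from noisy observations
rng = np.random.default_rng(1)
h = rng.normal(size=8)
x = rng.normal(size=5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(size=len(x))
w, e = vss_nlms(x, d)
print(np.round(w - h, 3))                      # residual misalignment near 0
```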
Huang, Guo-Tai, and 黃國泰. "A Study of Control Charts with Variable Sample Size." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/51993886110283599484.
國立中山大學
應用數學系研究所
92
Shewhart X-bar control charts with estimated control limits are widely used in practice. When the sample size is not fixed, we propose seven statistics to estimate the standard deviation sigma. These estimators are applied to estimate the control limits of the Shewhart X-bar control chart. The estimated results obtained through simulation are given and discussed. Finally, we investigate the performance of the Shewhart X-bar control charts based on the seven estimators of sigma via the simulated average run length (ARL).
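A small simulation in the spirit of the abstract: X-bar limits whose width adapts to each subgroup's size, plus a Monte Carlo estimate of the average run length. For simplicity this sketch plugs in the true mu and sigma; the thesis's object of study, replacing sigma with one of seven estimators, is not reproduced here.

```python
import numpy as np

def xbar_limits(mu, sigma, n):
    """3-sigma Shewhart X-bar limits for a subgroup of size n: the chart
    width scales with 1/sqrt(n), so each subgroup gets its own limits."""
    half = 3 * sigma / np.sqrt(n)
    return mu - half, mu + half

def simulated_arl(mu, sigma, sizes, shift=0.0, reps=2000, seed=0):
    """Average run length: mean number of subgroups (with sizes drawn
    from `sizes`) until a subgroup mean, shifted by `shift` sigma,
    falls outside its own limits."""
    rng = np.random.default_rng(seed)
    runs = []
    for _ in range(reps):
        t = 0
        while True:
            n = int(rng.choice(sizes))               # variable sample size
            xbar = rng.normal(mu + shift * sigma, sigma / np.sqrt(n))
            t += 1
            lo, hi = xbar_limits(mu, sigma, n)
            if xbar < lo or xbar > hi or t > 10000:  # signal (or safety cap)
                runs.append(t)
                break
    return float(np.mean(runs))

print(simulated_arl(0, 1, sizes=[3, 5, 7]))          # in-control ARL near 370
```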
Kern, Ludwig August. "Prototype particle size analyzer incorporating variable focal length optics." Thesis, 1987. http://hdl.handle.net/10945/22443.
Full text"Variable block size motion estimation hardware for video encoders." 2007. http://library.cuhk.edu.hk/record=b5893113.
Thesis submitted in: November 2006.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2007.
Includes bibliographical references (leaves 137-143).
Abstracts in English and Chinese.
Abstract
Acknowledgement
Chapter 1: Introduction
1.1 Motivation
1.2 The objectives of this thesis
1.3 Contributions
1.4 Thesis structure
Chapter 2: Digital video compression
2.1 Introduction
2.2 Fundamentals of lossy video compression
2.2.1 Video compression and human visual systems
2.2.2 Representation of color
2.2.3 Sampling methods - frames and fields
2.2.4 Compression methods
2.2.5 Motion estimation
2.2.6 Motion compensation
2.2.7 Transform
2.2.8 Quantization
2.2.9 Entropy encoding
2.2.10 Intra-prediction unit
2.2.11 Deblocking filter
2.2.12 Complexity analysis of different compression stages
2.3 Motion estimation process
2.3.1 Block-based matching method
2.3.2 Motion estimation procedure
2.3.3 Matching criteria
2.3.4 Motion vectors
2.3.5 Quality judgment
2.4 Block-based matching algorithms for motion estimation
2.4.1 Full search (FS)
2.4.2 Three-step search (TSS)
2.4.3 Two-dimensional logarithmic search algorithm (2D-log search)
2.4.4 Diamond search (DS)
2.4.5 Fast full search (FFS)
2.5 Complexity analysis of motion estimation
2.5.1 Different searching algorithms
2.5.2 Fixed block size motion estimation
2.5.3 Variable block size motion estimation
2.5.4 Sub-pixel motion estimation
2.5.5 Multi-reference frame motion estimation
2.6 Picture quality analysis
2.7 Summary
Chapter 3: Arithmetic for video encoding
3.1 Introduction
3.2 Number systems
3.2.1 Non-redundant number system
3.2.2 Redundant number system
3.3 Addition/subtraction algorithm
3.3.1 Non-redundant number addition
3.3.2 Carry-save number addition
3.3.3 Signed-digit number addition
3.4 Bit-serial algorithms
3.4.1 Least-significant-bit (LSB) first mode
3.4.2 Most-significant-bit (MSB) first mode
3.5 Absolute difference algorithm
3.5.1 Non-redundant algorithm for absolute difference
3.5.2 Redundant algorithm for absolute difference
3.6 Multi-operand addition algorithm
3.6.1 Bit-parallel non-redundant adder tree implementation
3.6.2 Bit-parallel carry-save adder tree implementation
3.6.3 Bit-serial signed-digit adder tree implementation
3.7 Comparison algorithms
3.7.1 Non-redundant comparison algorithm
3.7.2 Signed-digit comparison algorithm
3.8 Summary
Chapter 4: VLSI architectures for video encoding
4.1 Introduction
4.2 Implementation platform (FPGA)
4.2.1 Basic FPGA architecture
4.2.2 DSP blocks in FPGA devices
4.2.3 Advantages of employing FPGAs
4.2.4 Commercial FPGA devices
4.3 Top-level architecture of the motion estimation processor
4.4 Bit-parallel architectures for motion estimation
4.4.1 Systolic arrays
4.4.2 Mapping of a motion estimation algorithm onto a systolic array
4.4.3 1-D systolic array architecture (LA-1D)
4.4.4 2-D systolic array architecture (LA-2D)
4.4.5 1-D tree architecture (GA-1D)
4.4.6 2-D tree architecture (GA-2D)
4.4.7 Variable block size support in bit-parallel architectures
4.5 Bit-serial motion estimation architecture
4.5.1 Data processing direction
4.5.2 Algorithm mapping and dataflow design
4.5.3 Early termination scheme
4.5.4 Top-level architecture
4.5.5 Non-redundant positive number to signed-digit conversion
4.5.6 Signed-digit adder tree
4.5.7 SAD merger
4.5.8 Signed-digit comparator
4.5.9 Early termination controller
4.5.10 Data scheduling and timeline
4.6 Decision metrics in different architectural types
4.6.1 Throughput
4.6.2 Memory bandwidth
4.6.3 Silicon area occupied and power consumption
4.7 Architecture selection for different applications
4.7.1 CIF and QCIF resolution
4.7.2 SDTV resolution
4.7.3 HDTV resolution
4.8 Summary
Chapter 5: Results and comparison
5.1 Introduction
5.2 Implementation details
5.2.1 Bit-parallel 1-D systolic array
5.2.2 Bit-parallel 2-D systolic array
5.2.3 Bit-parallel tree architecture
5.2.4 MSB-first bit-serial design
5.3 Comparison between motion estimation architectures
5.3.1 Throughput and latency
5.3.2 Occupied resources
5.3.3 Memory bandwidth
5.3.4 Motion estimation algorithm
5.3.5 Power consumption
5.4 Comparison to ASIC and FPGA architectures in past literature
5.5 Summary
Chapter 6: Conclusion
6.1 Summary
6.1.1 Algorithmic optimizations
6.1.2 Architecture and arithmetic optimizations
6.1.3 Implementation on an FPGA platform
6.2 Future work
Appendix A: VHDL sources
A.1 Online full adder
A.2 Online signed-digit full adder
A.3 Online full adder tree
A.4 SAD merger
A.5 Signed-digit adder tree stage (top)
A.6 Absolute element
A.7 Absolute stage (top)
A.8 Online comparator element
A.9 Comparator stage (top)
A.10 MSB-first motion estimation processor
Bibliography
Yang, Hau-Yu, and 楊濠宇. "The study of Variable Sample Size Cpm Control Chart." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/4ksa47.
Liu, Han-Sheng, and 劉瀚升. "Parallel VLSI Architectures for Variable Block Size Motion Estimation." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/08983155936817630201.
國立東華大學
資訊工程學系
98
The H.264/AVC video coding standard was developed by the Joint Video Team (JVT), consisting of experts from the international study groups Video Coding Experts Group (VCEG) and Moving Picture Experts Group (MPEG), and it significantly improves video coding. Motion estimation is one of the core components of H.264/AVC video coding. Variable block size motion estimation (VBSME) is a video coding technique which reduces video distortion, provides more accurate predictions, reduces the volume of coded video data, and increases the utilization of network bandwidth. This thesis proposes parallel VLSI architectures for VBSME that apply the full search block matching algorithm (FSBMA). Our proposed architecture uses a pipelined design to balance the execution time of each stage in order to increase performance. Furthermore, our design employs parallel architectures to improve throughput and lower computation time. Within the pipelined design, the processing elements use hierarchical structures to calculate the SADs of seven kinds of blocks (4×4, 8×4, 4×8, 8×8, 16×8, 8×16, and 16×16) with relatively simple circuits and relatively low computational complexity. We use a cell-based design flow with TSMC 0.18 μm CMOS technology to implement our hardware, and we realize the proposed architecture through a physical design flow to show its feasibility. Experimental results show that our proposed parallel architectures increase performance and reduce computational complexity compared to other designs.
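The hierarchical SAD computation that such processing elements implement can be modeled behaviorally in a few lines of NumPy: sixteen 4×4 SADs are computed once for a candidate position and merged into all seven H.264 block shapes. This is a toy software model of the dataflow, not the proposed RTL.

```python
import numpy as np

def vbs_sads(cur, ref_window):
    """SADs of one 16x16 macroblock against one candidate position.

    The sixteen 4x4 SADs are computed once and merged hierarchically
    into all seven H.264 block shapes (width x height naming), which
    is exactly the reuse that makes VBSME hardware-friendly."""
    diff = np.abs(cur.astype(np.int32) - ref_window.astype(np.int32))
    s4 = diff.reshape(4, 4, 4, 4).sum(axis=(1, 3))  # [block_row, block_col]
    out = {'4x4': s4}
    out['8x4'] = s4[:, 0::2] + s4[:, 1::2]    # merge two 4x4 side by side
    out['4x8'] = s4[0::2, :] + s4[1::2, :]    # merge two 4x4 stacked
    s8 = out['8x4'][0::2, :] + out['8x4'][1::2, :]
    out['8x8'] = s8
    out['16x8'] = s8[:, 0::2] + s8[:, 1::2]   # merge two 8x8 side by side
    out['8x16'] = s8[0::2, :] + s8[1::2, :]   # merge two 8x8 stacked
    out['16x16'] = s8.sum()
    return out

rng = np.random.default_rng(3)
cur = rng.integers(0, 256, (16, 16))
ref = rng.integers(0, 256, (16, 16))
print(vbs_sads(cur, ref)['16x16'])   # total SAD for this candidate position
```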