Theses on the topic "Estimation scalable de l'incertitude"
Create a precise citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 37 theses for your research on the topic "Estimation scalable de l'incertitude".
Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Candela, Rosa. "Robust and scalable probabilistic machine learning methods with applications to the airline industry". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS078.
In the airline industry, price prediction plays a significant role both for customers and travel companies. The former are interested in knowing the price evolution to get the cheapest ticket, while the latter want to offer attractive tour packages and maximize their revenue margin. In this work we introduce some practical approaches to help travelers deal with uncertainty in ticket price evolution, and we propose a data-driven framework to monitor the performance of time-series forecasting models. Stochastic Gradient Descent (SGD) is the workhorse optimization method in machine learning, and this is true also for distributed systems, which in recent years have been increasingly used for complex models trained on massive datasets. In asynchronous systems, workers can use stale versions of the parameters, which slows SGD convergence. In this thesis we fill the gap in the literature and study sparsification methods in asynchronous settings. We provide a concise convergence rate analysis when the joint effects of sparsification and asynchrony are taken into account, and show that sparsified SGD converges at the same rate as standard SGD. Recently, SGD has also played an important role as a way to perform approximate Bayesian inference. Stochastic gradient MCMC algorithms indeed use SGD with a constant learning rate to obtain samples from the posterior distribution. Despite some promising results restricted to simple models, most existing works fall short of easily dealing with the complexity of the loss landscape of deep models. In this thesis we introduce a practical approach to posterior sampling, which requires weaker assumptions than existing algorithms.
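The sparsified SGD studied in this thesis can be illustrated with a minimal top-k gradient sparsification step, a common sparsification scheme; the function names and the toy quadratic objective below are illustrative, not taken from the thesis:

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep the k largest-magnitude entries of grad, zero out the rest."""
    out = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    out[idx] = grad[idx]
    return out

def sparsified_sgd_step(w, grad, lr, k):
    """One SGD step using a top-k sparsified gradient."""
    return w - lr * topk_sparsify(grad, k)

# toy objective: minimize 0.5 * ||w||^2, whose gradient is w itself;
# even with only 2 of 4 coordinates transmitted per step, w still converges
w = np.array([4.0, -0.1, 3.0, 0.05])
for _ in range(100):
    w = sparsified_sgd_step(w, w, lr=0.1, k=2)
```

In a distributed setting each worker would transmit only the k surviving entries, reducing communication while, per the convergence analysis summarized above, preserving the rate of standard SGD.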
Rossi, Simone. "Improving Scalability and Inference in Probabilistic Deep Models". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS042.
Throughout the last decade, deep learning has reached a sufficient level of maturity to become the preferred choice for solving machine learning problems and for aiding decision-making processes. At the same time, deep learning is generally not equipped with the ability to accurately quantify the uncertainty of its predictions, making these models less suitable for risk-critical applications. A possible solution to this problem is to employ a Bayesian formulation; however, while this offers an elegant treatment, it is analytically intractable and requires approximations. Despite the huge advancements of the last few years, there is still a long way to go before these approaches become widely applicable. In this thesis, we address some of the challenges of modern Bayesian deep learning by proposing and studying solutions to improve the scalability and inference of these models. The first part of the thesis is dedicated to deep models where inference is carried out using variational inference (VI). Specifically, we study the role of the initialization of the variational parameters and show how careful initialization strategies can make VI deliver good performance even in large-scale models. In this part of the thesis we also study the over-regularization effect of the variational objective on over-parametrized models. To tackle this problem, we propose a novel parameterization based on the Walsh-Hadamard transform; not only does this solve the over-regularization effect of VI, but it also allows us to model non-factorized posteriors while keeping time and space complexity under control. The second part of the thesis is dedicated to a study of the role of priors. While an essential building block of Bayes' rule, picking good priors for deep learning models is generally hard. For this reason, we propose two different strategies based (i) on the functional interpretation of neural networks and (ii) on a scalable procedure to perform model selection on the prior
hyper-parameters, akin to maximization of the marginal likelihood. To conclude this part, we analyze a different kind of Bayesian model (the Gaussian process) and study the effect of placing a prior on all the hyper-parameters of these models, including the additional variables required by inducing-point approximations. We also show how it is possible to infer free-form posteriors on these variables, which would otherwise conventionally be point-estimated.
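The variational-inference setting this thesis works in can be illustrated, far from its deep models, on a toy conjugate Gaussian model where a mean-field Gaussian q(w) = N(mu, sigma^2) is fitted by gradient ascent on the ELBO and can be checked against the exact posterior; the closed-form gradients are specific to this toy model:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, size=50)   # data: y_i ~ N(w, 1), with prior w ~ N(0, 1)
n = len(y)

def elbo_grads(mu, log_sigma):
    """ELBO gradients for q(w) = N(mu, sigma^2) in this conjugate model."""
    sigma2 = np.exp(2.0 * log_sigma)
    d_mu = y.sum() - (n + 1) * mu          # likelihood + prior quadratic terms
    d_log_sigma = 1.0 - (n + 1) * sigma2   # entropy term vs. quadratic terms
    return d_mu, d_log_sigma

mu, log_sigma = 0.0, 0.0                   # initialization (its role is studied above)
for _ in range(5000):
    g_mu, g_ls = elbo_grads(mu, log_sigma)
    mu += 1e-3 * g_mu
    log_sigma += 1e-3 * g_ls

# the exact posterior is N(sum(y) / (n + 1), 1 / (n + 1)); VI should recover it
```

In this conjugate case VI recovers the exact posterior regardless of initialization; the thesis's point is that in large non-conjugate models the initialization of (mu, log_sigma)-like parameters materially affects the optimum reached.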
Pinson, Pierre. "Estimation de l'incertitude des prédictions de production éolienne". PhD thesis, École Nationale Supérieure des Mines de Paris, 2006. http://pastel.archives-ouvertes.fr/pastel-00002187.
Lu, Ruijin. "Scalable Estimation and Testing for Complex, High-Dimensional Data". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/93223.
Doctor of Philosophy
With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, and biological measurements. These data provide a rich source of information on disease development, engineering systems, and many other scientific phenomena. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex biological and engineering data, powerful testing of high-dimensional functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a computation-based statistical approach that achieves efficient parameter estimation scalable to high-dimensional functional data. The second part focuses on developing a powerful testing method for functional data that can be used to detect important regions, and we show that this approach has attractive properties. Its effectiveness is demonstrated in two applications: the detection of regions of the spectrum that are related to pre-cancer using fluorescence spectroscopy data, and the detection of disease-related regions using brain image data. The third part focuses on analyzing brain cortical thickness data, measured on the cortical surfaces of monkeys' brains during early development. Subjects are measured at misaligned time-markers. Using the functional data estimation and testing approach, we are able to (1) identify regions that are asymmetric between the right and left brains across time, and (2) identify spatial regions on the cortical surface that reflect increases or decreases in cortical measurements over time.
Blier, Mylène. "Estimation temporelle avec interruption: les effets de localisation et de durée d'interruptions sont-ils sensibles à l'incertitude ?" Thesis, Université Laval, 2009. http://www.theses.ulaval.ca/2009/26367/26367.pdf.
Blier, Mylène. "Estimation temporelle avec interruption : les effets de localisation et de durée d'interruption sont-ils sensibles à l'incertitude ?" Doctoral thesis, Université Laval, 2009. http://hdl.handle.net/20.500.11794/21201.
Kang, Seong-Ryong. "Performance analysis and network path characterization for scalable internet streaming". Texas A&M University, 2008. http://hdl.handle.net/1969.1/85912.
Texto completoRahmani, Mahmood. "Urban Travel Time Estimation from Sparse GPS Data : An Efficient and Scalable Approach". Doctoral thesis, KTH, Transportplanering, ekonomi och teknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-167798.
Shriram, Alok Kaur Jasleen. "Efficient techniques for end-to-end bandwidth estimation: performance evaluations and scalable deployment". Chapel Hill, N.C.: University of North Carolina at Chapel Hill, 2009. http://dc.lib.unc.edu/u?/etd,2248.
Title from electronic title page (viewed Jun. 26, 2009). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Computer Science." Discipline: Computer Science; Department/School: Computer Science.
Simsa, Jiri. "Systematic and Scalable Testing of Concurrent Programs". Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/285.
Mallet, Vivien. "Estimation de l'incertitude et prévision d'ensemble avec un modèle de chimie transport - Application à la simulation numérique de la qualité de l'air". PhD thesis, Ecole des Ponts ParisTech, 2005. http://pastel.archives-ouvertes.fr/pastel-00001654.
Alrammal, Muath. "Algorithms for XML stream processing : massive data, external memory and scalable performance". PhD thesis, Université Paris-Est, 2011. http://tel.archives-ouvertes.fr/tel-00779309.
Schmidt, Aurora C. "Scalable Sensor Network Field Reconstruction with Robust Basis Pursuit". Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/240.
Soh, Jeremy. "A scalable, portable, FPGA-based implementation of the Unscented Kalman Filter". Thesis, The University of Sydney, 2017. http://hdl.handle.net/2123/17286.
Raghavan, Venkatesh. "VAMANA -- A high performance, scalable and cost driven XPath engine". Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0505104-185545/.
Werner, Stéphane. "Optimisation des cadastres d'émissions: estimation des incertitudes, détermination des facteurs d'émissions du "black carbon" issus du trafic routier et estimation de l'influence de l'incertitude des cadastres d'émissions sur la modélisation : application aux cadastres Escompte et Nord-Pas-de-Calais". Strasbourg, 2009. https://publication-theses.unistra.fr/public/theses_doctorat/2009/WERNER_Stephane_2009.pdf.
Emission inventories play a fundamental role in controlling air pollution, both directly by identifying emissions and as input data for air pollution models. The main objective of this PhD study is to optimize existing emission inventories, including one from the ESCOMPTE program ("Experiments on Site to Constrain Models of Atmospheric Pollution and Transport of Emissions"). For that inventory, two separate issues were addressed: the first designed to better assess emission uncertainties, and the second to insert a new compound of interest into the inventory: black carbon (BC). Within the first issue, an additional study was conducted on the Nord-Pas-de-Calais emissions inventory to test the methodology of uncertainty calculation. The calculated emission uncertainties were used to assess their influence on air quality modeling (the CHIMERE model). The second part of the research was dedicated to complementing the existing inventory of carbon particulate emissions from the road traffic sector by introducing an additional class of compounds: BC. BC is the raw carbonaceous atmospheric particulate matter that absorbs light. Its main source is the incomplete combustion of carbonaceous fuels and compounds. It can be regarded as a key atmospheric compound given its impact on climate and on health because of its chemical reactivity.
Brunner, Manuela. "Hydrogrammes synthétiques par bassin et types d'événements. Estimation, caractérisation, régionalisation et incertitude". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAU003/document.
Design flood estimates are needed in hydraulic design for the construction of dams and retention basins, and in flood management for drawing hazard maps or modeling inundation areas. Traditionally, such design floods have been expressed in terms of peak discharge, estimated in a univariate flood frequency analysis. However, design and flood management tasks involving storage require, in addition to peak discharge, information on hydrograph volume, duration, and shape. A bivariate flood frequency analysis allows the joint estimation of peak discharge and hydrograph volume and the consideration of their dependence. While such bivariate design quantiles describe the magnitude of a design flood, they lack information on its shape. An attractive way of modeling the whole shape of a design flood is to express a representative normalized hydrograph shape as a probability density function. The combination of such a probability density function with bivariate design quantiles allows the construction of a synthetic design hydrograph for a certain return period, which describes the magnitude of a flood along with its shape. Such synthetic design hydrographs have the potential to be a useful and simple tool in design flood estimation. However, they currently have some limitations. First, they rely on the definition of a bivariate return period, which is not uniquely defined. Second, they usually describe the specific behavior of a catchment and do not express the process variability represented by different flood types. Third, they are neither available for ungauged catchments nor usually provided together with an uncertainty estimate. This thesis therefore explores possibilities for the construction of synthetic design hydrographs in gauged and ungauged catchments, and ways of representing process variability in design flood construction.
It proposes tools for both catchment- and flood-type-specific design hydrograph construction and regionalization, and for the assessment of their uncertainty. The thesis shows that synthetic design hydrographs are a flexible tool allowing for the consideration of different flood or event types in design flood estimation. A comparison of different regionalization methods, including spatial, similarity, and proximity based approaches, showed that catchment-specific design hydrographs can best be regionalized to ungauged catchments using linear and nonlinear regression methods. It was further shown that event-type-specific design hydrograph sets can be regionalized using a bivariate index flood approach. In such a setting, a functional representation of hydrograph shapes was found to be a useful tool for the delineation of regions with similar flood reactivities. An uncertainty assessment showed that the record length and the choice of the sampling strategy are major sources of uncertainty in the construction of synthetic design hydrographs, and that this uncertainty propagates through the regionalization process. This thesis highlights that an ensemble-based design flood approach allows for the consideration of different flood types and runoff processes. This is a step from flood frequency statistics to flood frequency hydrology, which allows better-informed decision making.
Biletska, Krystyna. "Estimation en temps réel des flux origines-destinations dans un carrefour à feux par fusion de données multicapteurs". Compiègne, 2010. http://www.theses.fr/2010COMP1893.
The quality of the information about the origins and destinations (OD) of vehicles in a junction influences the performance of many road transport systems, and the period of its update determines the temporal scale at which these systems operate. We are interested in the problem of reconstituting the OD of vehicles crossing a junction, at each traffic light cycle, using the traffic light states and traffic measurements from video sensors. The traffic measurements, provided every second, are the vehicle counts made at each entrance and exit of the junction and the number of vehicles stopped at each inner section of the junction. These real data are subject to imperfections. The only existing method capable of solving this problem, named ORIDI, does not take data imperfection into account. We propose a new method that models data imprecision using the theory of fuzzy subsets. It can be applied to any type of junction and is independent of the type of traffic light strategy. The method estimates OD flows from the vehicle conservation law, represented by an underdetermined system of equations constructed dynamically at each traffic light cycle using fuzzy a-timed Petri nets. A unique solution is found using eight different methods, which produce estimates in the form of a point, an interval, or a fuzzy set. Our study shows that the crisp methods are as accurate as ORIDI, but more robust when one of the video sensors fails. The interval and fuzzy methods, while less accurate than ORIDI, try to guarantee that the solution includes the true value.
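The conservation-law idea can be made concrete, in its crisp (non-fuzzy) form, on a toy two-entrance, two-exit junction: the OD flows form an underdetermined linear system, and a unique point estimate can be picked as the minimum-norm least-squares solution. The counts and layout below are invented for the example:

```python
import numpy as np

# toy junction: entrances A, B; exits 1, 2; unknown OD flows over one cycle
# x = [A->1, A->2, B->1, B->2]
A_in, B_in = 10, 6        # vehicles counted entering
x1_out, x2_out = 9, 7     # vehicles counted exiting

# vehicle conservation equations (the 4th, for exit 2, is redundant:
# total in must equal total out)
M = np.array([
    [1, 1, 0, 0],   # A->1 + A->2 = A_in
    [0, 0, 1, 1],   # B->1 + B->2 = B_in
    [1, 0, 1, 0],   # A->1 + B->1 = x1_out
])
b = np.array([A_in, B_in, x1_out], dtype=float)

# 3 independent equations, 4 unknowns: lstsq returns the minimum-norm solution
x, *_ = np.linalg.lstsq(M, b, rcond=None)
```

The thesis's contribution is precisely what this sketch omits: handling imprecise counts (via fuzzy subsets) and building the system dynamically from the traffic-light cycle.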
Fissore, Giancarlo. "Generative modeling : statistical physics of Restricted Boltzmann Machines, learning with missing information and scalable training of Linear Flows". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG028.
Neural network models able to approximate and sample high-dimensional probability distributions are known as generative models. In recent years this class of models has received tremendous attention due to their potential for automatically learning meaningful representations of the vast amount of data that we produce and consume daily. This thesis presents theoretical and algorithmic results pertaining to generative models and is divided into two parts. In the first part, we focus our attention on the Restricted Boltzmann Machine (RBM) and its statistical physics formulation. Historically, statistical physics has played a central role in studying the theoretical foundations of, and providing inspiration for, neural network models. The first neural implementation of an associative memory (Hopfield, 1982) is a seminal work in this context. The RBM can be regarded as a development of the Hopfield model, and it is of particular interest due to its role at the forefront of the deep learning revolution (Hinton et al., 2006). Exploiting its statistical physics formulation, we derive a mean-field theory of the RBM that lets us characterize both its functioning as a generative model and the dynamics of its training procedure. This analysis proves useful in deriving a robust mean-field imputation strategy that makes it possible to use the RBM to learn empirical distributions in the challenging case in which the dataset to model is only partially observed and presents high percentages of missing information. In the second part we consider a class of generative models known as Normalizing Flows (NF), whose distinguishing feature is the ability to model complex high-dimensional distributions by employing invertible transformations of a simple, tractable distribution.
The invertibility of the transformation allows the probability density to be expressed through a change of variables whose optimization by Maximum Likelihood (ML) is rather straightforward but computationally expensive. The common practice is to impose architectural constraints on the class of transformations used for NF in order to make ML optimization efficient. Proceeding from geometrical considerations, we propose a stochastic gradient descent optimization algorithm that exploits the matrix structure of fully connected neural networks without imposing any constraints on their structure other than the fixed dimensionality required by invertibility. This algorithm is computationally efficient and can scale to very high-dimensional datasets. We demonstrate its effectiveness in training a multilayer nonlinear architecture employing fully connected layers.
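The change-of-variables computation at the heart of normalizing flows can be sketched for the simplest case, a single linear flow with a standard normal base distribution; this is a generic illustration of the density formula, not the thesis's optimization algorithm:

```python
import numpy as np

def linear_flow_logpdf(x, W):
    """log density of x under the flow x -> z = W @ x with a N(0, I) base:
       log p(x) = log N(W x; 0, I) + log |det W|   (change of variables)."""
    z = W @ x
    d = len(x)
    log_base = -0.5 * (z @ z) - 0.5 * d * np.log(2.0 * np.pi)
    sign, logdet = np.linalg.slogdet(W)   # log |det W|, the Jacobian term
    return log_base + logdet

W = np.array([[2.0, 0.0],
              [0.0, 0.5]])               # invertible; det = 1 here
x = np.array([0.3, -1.2])
lp = linear_flow_logpdf(x, W)
```

The `slogdet` term is what makes ML training expensive for unconstrained matrices, which is why the thesis's gradient algorithm that avoids architectural constraints is of interest.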
Tamssaouet, Ferhat. "Towards system-level prognostics : modeling, uncertainty propagation and system remaining useful life prediction". Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0079.
Prognostics is the process of predicting the remaining useful life (RUL) of components, subsystems, or systems. Until now, however, prognostics has often been approached from a component view, without considering interactions between components and effects of the environment, leading to mispredictions of complex systems' failure times. In this work, a system-level prognostics approach is proposed. It is based on a new modeling framework, the inoperability input-output model (IIM), which makes it possible to tackle the interactions between components and the effects of the mission profile, and which can be applied to heterogeneous systems. Then, a new methodology for online joint system RUL (SRUL) prediction and model parameter estimation is developed, based on particle filtering (PF) and gradient descent (GD). In detail, the state of health of the system components is estimated and predicted in a probabilistic manner using PF. In the case of a consecutive discrepancy between the prior and posterior estimates of the system health state, the proposed estimation method is used to correct and adapt the IIM parameters. Finally, the developed methodology is verified on a realistic industrial system, the Tennessee Eastman Process. The obtained results highlight its effectiveness in predicting the SRUL in reasonable computing time.
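The particle-filtering ingredient of such an approach can be sketched with a bootstrap filter tracking a toy linear degradation (health index) signal; the drift, noise levels, and variable names are invented for the example, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(obs, n_particles=1000):
    """Bootstrap particle filter for x_t = x_{t-1} + 0.1 + process noise,
       y_t = x_t + measurement noise (a stand-in for a component health index)."""
    particles = rng.normal(0.0, 0.1, n_particles)
    estimates = []
    for y in obs:
        particles = particles + 0.1 + rng.normal(0, 0.05, n_particles)   # predict
        weights = np.exp(-0.5 * ((y - particles) / 0.2) ** 2)            # update
        weights /= weights.sum()
        idx = rng.choice(n_particles, n_particles, p=weights)            # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

true_x = 0.1 * np.arange(1, 31)                  # linear degradation trajectory
obs = true_x + rng.normal(0, 0.2, 30)            # noisy health measurements
est = particle_filter(obs)
```

In a system-level scheme like the one above, such per-component health estimates would feed the IIM, and a persistent prior/posterior mismatch would trigger the gradient-descent correction of its parameters.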
Zhang, Hongwei. "Dependable messaging in wireless sensor networks". Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1155607973.
Friedman, Timur. "Scalable estimation of multicast session characteristics". 2002. https://scholarworks.umass.edu/dissertations/AAI3056225.
Texto completoChou, Kao-Peng y 周高鵬. "Disintegrated Channel Estimation in Scalable Filter-and-Forward Relay Networks". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/52ub2d.
National Central University
Department of Communication Engineering
105
Cooperative communication, which has attracted the attention of researchers in recent years, enables the efficient use of resources in mobile communication systems. Research on cooperative communication began with relay-generated multi-link transmission. From the simplest amplify-and-forward to the most complicated decode-and-forward, the relay serves to extend the coverage of wireless signals in a practical manner. Deploying a single relay or series-connected relays is popular because of its simplicity. Conversely, employing parallel relays with space-time coding, referred to as distributed space-time coding (D-STC), can obtain the advantage of spatial diversity. In this research, a disintegrated channel estimation technique is proposed to accomplish the spatial diversity that is supported by cooperative relays. The relaying strategy considered in this research is a filter-and-forward (FF) relaying method with superimposed training sequences, used to estimate the backhaul and access channels separately. To reduce inter-relay interference, a generalized filtering technique is proposed and investigated. Unlike the interference suppression method commonly employed in conventional FF relay networks, a generalized filter multiplexes the superimposed training sequences from different relays to the destination by time-division multiplexing (TDM), frequency-division multiplexing (FDM), and code-division multiplexing (CDM). The theoretical mean square errors (MSEs) of the disintegrated channel estimation are derived and match the simulation results. Bayesian Cramer-Rao lower bounds (BCRBs) are derived as the estimation performance benchmark. The improvements offered by the proposed technique are verified by comprehensive computer simulations in conjunction with calculations of the derived BCRBs and MSEs.
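The code-division multiplexing of superimposed training sequences can be sketched with two orthogonal codes over a toy flat-fading channel: correlating the received superposition with each code isolates a least-squares estimate of each relay's channel. The codes, channel values, and noise level are illustrative assumptions, not the thesis's design:

```python
import numpy as np

# two relays superimpose orthogonal training codes (CDM); the destination
# separates them by correlation, estimating each flat channel independently
c1 = np.array([1., 1., 1., 1., 1., 1., 1., 1.])
c2 = np.array([1., -1., 1., -1., 1., -1., 1., -1.])   # orthogonal to c1
h1, h2 = 0.8 - 0.3j, -0.5 + 0.6j                      # unknown relay channels

rng = np.random.default_rng(0)
noise = 0.01 * (rng.normal(size=8) + 1j * rng.normal(size=8))
r = h1 * c1 + h2 * c2 + noise                         # received superposition

h1_hat = (r @ c1) / (c1 @ c1)   # correlation nulls the c2 component (c1 ⟂ c2)
h2_hat = (r @ c2) / (c2 @ c2)
```

The same separation idea carries over to TDM and FDM multiplexing of the training sequences, with orthogonality in time slots or subcarriers instead of codes.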
Hsu, Mei-Yun and 許美雲. "Scalable Module-Based Architecture for MPEG-4 BMA Motion Estimation". Thesis, 2000. http://ndltd.ncl.edu.tw/handle/28766556141470208247.
National Taiwan University
Graduate Institute of Electrical Engineering
88
In this paper, we present a scalable module-based architecture for the block-matching motion estimation algorithm of MPEG-4. The basic module is one set of processing elements based on a one-dimensional systolic array architecture. To support various applications, different modules of processing elements can be configured to form a processing element array that meets requirements such as variable block size, search range, and computation power. The proposed architecture also has the advantage of a low I/O port count. By eliminating unnecessary signal transitions in the processing element, the power dissipation of the datapath can be reduced by about half without decreasing picture quality. In addition, for data-dominated video applications like motion estimation, power consumption is also influenced significantly by the memory architecture. In order to reduce the power consumed by accesses to external memory, we propose four schemes of memory hierarchy according to different levels of data reuse. The evaluations of all schemes are parameterized, and designers can easily derive a better scheme under reasonable hardware resources and power consumption. Considering system integration, the influence of the I/O bandwidth between the motion estimation unit and the system bus is also discussed.
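The block-matching algorithm (BMA) that such architectures accelerate can be stated in a few lines of exhaustive full search over a displacement window, minimizing the sum of absolute differences (SAD) per block; block size, search range, and the synthetic frames below are illustrative:

```python
import numpy as np

def full_search_bma(ref, cur, block=8, search=4):
    """Exhaustive block-matching motion estimation: for each block of the
       current frame, find the displacement into the reference frame that
       minimizes the sum of absolute differences (SAD)."""
    h, w = cur.shape
    motion = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cb = cur[by:by + block, bx:bx + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        sad = np.abs(ref[y:y + block, x:x + block] - cb).sum()
                        if sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            motion[(by, bx)] = best
    return motion

# shift a random frame by (2, 3); the estimator should recover the motion
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (16, 16)).astype(float)
cur = np.roll(ref, (2, 3), axis=(0, 1))
mv = full_search_bma(ref, cur, block=8, search=4)
```

The nested SAD loops are exactly the data-dominated workload that the systolic array and the memory-reuse hierarchy in this thesis are designed to exploit.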
Lee, Chien-tai and 李建泰. "Utilize bandwidth estimation and scalable video coding to improve IPTV performance". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/19772935081286056642.
National Taiwan University of Science and Technology
Department of Electrical Engineering
100
With the advance of multimedia technologies and the prevalence of the Internet, multimedia has become one of the major information communication tools. To provide networked multimedia services, Internet Protocol TV (IPTV) and Peer-to-Peer (P2P) network control are among the most important technologies. Internet multimedia has evolved from voice communications to high-definition (HD) video communications. The Open IPTV Forum (OIPF) aims to propose an IPTV system standard, which proclaims the potential of IPTV-related technologies. In this thesis, we propose to adjust and refine the IPTV system, which comprises media codec systems, streaming control, and H.264 encoder and rate control units, to satisfy the application requirements of Internet multimedia. Content Delivery Networks (CDN) and P2P networks are integrated to provide the P2P-IPTV service. For live video streaming, we propose to utilize both scalable video rate control (SVRC) and a network traffic monitor (NTM) for better service, in which the latter provides feedback control that takes peer bandwidth capacity, network connection information, and delay parameters to dynamically adjust the bit-rate of the video encoder. The bandwidth estimation method is developed to solve the bottleneck of insufficient bandwidth in a shared network environment. To improve the reliability of video transmission quality, the SVRC module and NTM method are designed to operate at the best bandwidth utilization of the IPTV system. Compared to previous research, our experiments show that the proposed bandwidth estimation, used as feedback control for IPTV, can effectively reduce transmission delay and improve the stability of transmitted video quality.
Li, Yao and 李曜. "A Quality Scalable H.264/AVC Fractional Motion Estimation IP Design and Implementation". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/89804086435993035420.
National Chung Cheng University
Institute of Computer Science and Information Engineering
96
This thesis presents a quality-scalable fractional motion estimation (QS-FME) IP design for H.264/AVC video coding applications. The proposed design is based on the QS-FME algorithm, which supports 3 modes with different computational complexity: full mode, reduced mode, and single mode. Compared to full mode, single mode reduces computational complexity by 90% and suits portable devices. Full mode achieves an average PSNR drop of 0.007 dB; in some cases, it even has better compression quality than the algorithm in JM9.3. The reduced mode can achieve real-time encoding of HD720 sequences. In order to enhance the application to portable devices, we also developed a cost-reduced customized QS-FME, named Light QS-FME. In CCU 0.13um CMOS technology, the proposed QS-FME and Light QS-FME designs cost 180232 and 59394 gates, as well as 27.264 Kbits/87.936 Kbits of local memory, for search ranges [-16, +15] and [-40, +39], respectively. The maximum operating frequency is 150 MHz in both cases, achieving real-time motion estimation on QCIF, CIF, SDTV (720 x 480), and HD720 (1280 x 720) video sequences.
Chien, Wen-Hsien and 錢文賢. "Design and Implementation of Scalable VLSI Architecture for Variable Block Size Motion Estimation". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/11917898479588814560.
Texto completoTsao, Ko-Chia y 曹克嘉. "Motion estimation design for H.264/MPEG4-AVC video coding and its scalable extension". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/15411133728236860335.
National Chiao Tung University
Institute of Electronics
100
Motion estimation (ME) is the most complex part and the bottleneck of a real-time video encoder. The adoption of inter-layer prediction (IL prediction) in the H.264/AVC SVC extension further increases the computing time and memory bandwidth of ME. Thus, we adopted a previous data-efficient inter-layer prediction algorithm [4] to save memory bandwidth. In this thesis, we propose a corresponding hardware architecture for inter-layer prediction that can process INTER mode and the different inter-layer prediction modes in parallel to save computing time and memory bandwidth. Furthermore, in order to reduce the high complexity and computation of FME, we adopt Single-Pass Fractional Motion Estimation (SPFME) as the fast FME algorithm in our FME process. We then propose a corresponding FME hardware architecture for SPFME based on a previous FME design [3]; compared with that architecture, ours can run up to four times faster. There are many prediction modes due to the adoption of inter-layer prediction and different block types. Thus, to further reduce the complexity and computing time of FME, we adopt Li's pre-selection algorithm to eliminate some prediction modes from the FME process. The Parallel Multi-Resolution Motion Estimation (PMRME) algorithm [1] is adopted in our IME process; hence, we further propose a multi-level mode filtering scheme to select 3 prediction modes from 3 different search levels. Finally, we integrate the adopted IL prediction, mode filtering, and the SPFME algorithm. The simulation results show that the proposed function flow with mode filtering achieves an average bit-rate increase of 3.542% and a PSNR degradation of 0.106 dB for CIF sequences with 2 spatial layers. The implementation results for the whole ME architecture are also shown: it can support CIF+480p+1080p video at 60 fps at 135 MHz.
Wang, Te-Heng and 王特亨. "Fast Priority-Based Mode Decision and Activity-Based Motion Estimation for Scalable Video Coding". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/98022885907305615825.
National Dong Hwa University
Institute of Electronic Engineering
98
In H.264/AVC scalable video coding (SVC), the multi-layer motion estimation between different layers achieves spatial scalability but comes with high coding complexity. To accelerate encoding in SVC, a priority-based mode decision and an activity-based motion estimation are proposed in this thesis. In the proposed mode decision, the rate-distortion costs of the base layer are used to decide the priority of the modes in the enhancement layer. The activity-based motion estimation employs the motion vector difference in the base layer to decrease the search range. We also propose a computation-scalable algorithm to provide quality scalability according to the allocated computation power. Through the proposed algorithms, the computational complexity coming from the enhancement layer can be efficiently reduced. Compared with JSVM, the experimental results demonstrate that 72% to 81% time saving is achieved with a negligible 0.05 dB PSNR decrease and only a 1.9% bit-rate increase.
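The activity-based idea, shrinking the enhancement-layer search window when the base layer shows little motion, can be sketched as a simple rule; the thresholds and window sizes below are illustrative assumptions, not the thesis's actual parameters:

```python
def adaptive_search_range(base_mvd, full_range=16, min_range=2):
    """Choose an enhancement-layer search range from the base-layer motion
    vector difference (MVD): calm motion -> small window, busy motion -> full."""
    activity = max(abs(base_mvd[0]), abs(base_mvd[1]))
    if activity <= 1:
        return min_range          # nearly stationary: tiny refinement window
    if activity <= 4:
        return full_range // 2    # moderate motion: half window
    return full_range             # high activity: fall back to the full search

# example: a small base-layer MVD lets the encoder check far fewer candidates
saved = (2 * 16 + 1) ** 2 - (2 * adaptive_search_range((0, 1)) + 1) ** 2
```

Since full search evaluates (2R+1)^2 candidate positions for range R, even halving the range cuts the candidate count roughly fourfold, which is where the reported time savings come from.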
Wang, Xiaoming. "Robust and Scalable Sampling Algorithms for Network Measurement". 2009. http://hdl.handle.net/1969.1/ETD-TAMU-2009-08-2391.
Full text
Mir arabbaygi, Siavash. "Novel scalable approaches for multiple sequence alignment and phylogenomic reconstruction". Thesis, 2015. http://hdl.handle.net/2152/31377.
Full text
Syu, Jhe-wei and 許哲維. "Fast Inter-Layer Motion Estimation Algorithm on Spatial Scalability in H.264/AVC Scalable Extension". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/52s9c2.
Full text
國立中央大學 (National Central University)
通訊工程研究所 (Institute of Communication Engineering)
97
With the improvements in video coding technology, network infrastructure, storage capacity, and CPU computing capability, multimedia applications have become more widespread and popular. Therefore, how to efficiently provide video sequences to users under different constraints is very important, and scalable video coding is one of the best solutions to this problem. The H.264 scalable extension (SVC), built on top of H.264/AVC, is the most recent scalable video coding standard. SVC utilizes inter-layer prediction to substantially improve coding efficiency compared with prior scalable video coding standards. Nevertheless, this technique incurs extremely large computational complexity, which hinders its practical use. In particular, under spatial scalability, the enhancement-layer motion estimation accounts for over 90% of the total complexity. The main objective of this work is to reduce the computational complexity while maintaining both the video quality and the bit-rate. This thesis proposes a fast inter-layer motion estimation algorithm for temporal and spatial scalabilities in SVC. We utilize the relation between the two motion vector predictors, from the base layer and the enhancement layer respectively, together with the correlation among the modes, to reduce the number of search points. Simulation results show that the proposed algorithm saves up to 67.4% of the computational complexity compared to JSVM 9.12, with less than 0.0476 dB of video quality degradation.
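For dyadic spatial scalability, a common way to exploit the base-layer motion vector predictor is to upscale the co-located base-layer MV to enhancement-layer resolution and search only a small refinement window around it. The sketch below illustrates that general idea; the window size, scaling ratio, and function names are assumptions, not the thesis's specific algorithm:

```python
def upscale_base_mv(mv_base, ratio=2):
    """Scale a co-located base-layer MV to enhancement-layer resolution
    (dyadic spatial scalability doubles both picture dimensions)."""
    return (mv_base[0] * ratio, mv_base[1] * ratio)

def refinement_window(mv_base, refine=2, ratio=2):
    """Candidate enhancement-layer MVs in a small window around the upscaled
    base-layer MV, replacing a full-range enhancement-layer search."""
    cx, cy = upscale_base_mv(mv_base, ratio)
    return [(cx + dx, cy + dy)
            for dy in range(-refine, refine + 1)
            for dx in range(-refine, refine + 1)]
```

With a ±2 refinement window only 25 candidates are tested per block, regardless of how large the original search range was, which is where most of the enhancement-layer complexity saving comes from.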
Veerapandian, Lakshmi. "A spatial scalable video coding with selective data transmission using wavelet decomposition". Thesis, 2010. http://hdl.handle.net/2440/61956.
Full text
Thesis (M.Eng.Sc.) -- University of Adelaide, School of Electrical and Electronic Engineering, 2010
Kao, Hsiang-Chieh and 高祥桔. "Fast Motion Estimation Algorithm and Its Architecture Analysis for H.264/AVC Scalable Extension IP Design". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/77242030713406273263.
Full text
雲林科技大學 (National Yunlin University of Science and Technology)
電子與資訊工程研究所 (Institute of Electronic and Information Engineering)
98
In the past few years, wireless communications with varying bandwidth have spurred innovation in video compression technology and driven the development of a standard for scalable video coding (SVC). The H.264/AVC scalable extension extends H.264/AVC for SVC. Compared to H.264/AVC, the scalable extension adds three features providing temporal, spatial, and signal-to-noise ratio (SNR) scalabilities. These allow more flexibility than H.264/AVC but also cause huge encoding complexity, especially for the motion vector (MV) search. The proposed motion estimation (ME) process is designed for spatial scalability. It utilizes the property of the up-sampled residual in the base layer (BL) to select either normal motion estimation (NME) or inter-layer residual motion estimation (ILRME) in the enhancement layer (EL). Since the proposed ME process applies only NME or ILRME, half of the software computation or hardware operation is saved. It is combined with a low-memory-bandwidth, low-complexity, high-quality integer motion estimation (IME) algorithm based on group of macroblocks (GOMB) and adaptive search range (ASR). Compared to the design published by the National Taiwan University Electrical Engineering Institute (NTU-EE) in 2005, the proposed architecture saves between 35% and 40% of external memory bandwidth and reduces internal memory by up to 85%; as for video quality, the proposed algorithms lose 0.1 dB of PSNR on average for HD (1280×720) encoding.
Maharaju, Rajkumar. "Development of Mean and Median Based Adaptive Search Algorithm for Motion Estimation in SNR Scalable Video Coding". Thesis, 2015. http://ethesis.nitrkl.ac.in/7725/1/2015_MT_Development_Rajkumar_Maharaju.pdf.
Texto completoLi, Gwo-Long y 李國龍. "The Study of Bandwidth Efficient Motion Estimation for H.264/MPEG4-AVC Video Coding and Its Scalable Extension". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/74663865189997103364.
Full text
國立交通大學 (National Chiao Tung University)
電子研究所 (Institute of Electronics)
100
In the video coding system, the overall system performance is dominated by the motion estimation module due to its high computational complexity and memory-bandwidth-intensive data accesses. Furthermore, with the increasing demand for high-definition TV, the system performance drop caused by intensive data bandwidth access becomes even more significant. In addition, the Inter-layer prediction modes adopted in scalable video coding also significantly increase the data access bandwidth overhead and computational complexity. To solve the problems of high computational complexity and intensive data bandwidth access, this dissertation proposes several bandwidth and complexity reduction algorithms for both integer and fractional motion estimation. First, this dissertation proposes a rate-distortion bandwidth-efficient motion estimation algorithm to reduce the data bandwidth requirements of integer motion estimation. In this algorithm, a mathematical model is proposed to describe the relationship between rate-distortion cost and data bandwidth. A data-bandwidth-efficient motion estimation algorithm is then derived from the modeling results. In addition, a bandwidth-aware motion estimation algorithm based on the same model is proposed to efficiently allocate data bandwidth for motion estimation under the available bandwidth constraint. Simulation results show that the proposed algorithm achieves a 78.82% data bandwidth saving. In the scalable video coding standard, the additional Inter-layer prediction modes significantly deteriorate the coding system performance since much more data must be accessed for prediction. Therefore, this dissertation proposes several data-efficient Inter-layer prediction algorithms to relieve the intensive data bandwidth requirements of scalable video coding.
By observing the relationship between spatial layers, several data-reusing algorithms are proposed, achieving a further reduction of the data bandwidth requirement. Simulation results demonstrate that the proposed algorithm achieves at least a 50.55% data bandwidth reduction. In addition to the performance degradation caused by intensive data bandwidth accesses, the high computational complexity of fractional motion estimation also noticeably degrades system performance in scalable video coding. Therefore, this dissertation proposes a mode pre-selection algorithm for fractional motion estimation in scalable video coding. In the proposed algorithm, the rate-distortion cost relationships between different prediction modes are first observed and analyzed. Based on these analytical results, several mode pre-selection rules are proposed to filter out potentially skippable prediction modes. Simulation results show that the proposed mode pre-selection algorithm can remove 65.97% of the prediction modes with negligible rate-distortion performance degradation. Finally, to address the coding performance drop caused by skipping the fractional motion estimation process for hardware implementation reasons, this dissertation proposes a search range adjustment algorithm: the search range is adjusted so that the newly decided range covers as much as possible of the reference data that fractional motion estimation would otherwise miss. By mathematically modeling the relationship between the motion vector predictor and the size of the non-overlapping area, the new search range can be derived. In addition, a search range aspect-ratio adjustment algorithm is also proposed by solving the corresponding mathematical equations. With the proposed search range adjustment algorithm, up to 90.56% of the bitrate increase can be avoided compared to the mechanism that skips fractional motion estimation.
Furthermore, the proposed search range aspect-ratio adjustment algorithm achieves better rate-distortion performance than the exhaustive search method under the same search-area constraint. In summary, the algorithms proposed in this dissertation reduce both the data access bandwidth and the computational complexity of integer and fractional motion estimation, and thus significantly improve the overall performance of the video coding system.
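The search-range recentering idea, shifting the reference window toward the motion vector predictor and clamping it so the data needed by fractional refinement stays inside the fetched area, can be sketched roughly as follows. The function, the interpolation margin, and the clamping policy are illustrative assumptions, not the dissertation's derived model:

```python
def adjust_search_window(mvp, sr, frame_w, frame_h, bx, by, bsize=16, margin=4):
    """Recenter the search window of block (bx, by) on the MV predictor `mvp`
    and clamp it to the frame, keeping `margin` pels of slack so the half-pel
    interpolation filter used by fractional ME stays inside the fetched area.

    Returns the fetched reference region as (x0, y0, x1, y1).
    """
    x0 = bx + mvp[0] - sr
    y0 = by + mvp[1] - sr
    # Clamp so the whole (bsize + 2*sr)-wide window plus margin fits in the frame.
    x0 = max(margin, min(x0, frame_w - bsize - 2 * sr - margin))
    y0 = max(margin, min(y0, frame_h - bsize - 2 * sr - margin))
    return x0, y0, x0 + bsize + 2 * sr, y0 + bsize + 2 * sr
```

The point of the recentering is that when the predictor points far from the block, a window centered on the block position would not contain the reference pixels fractional refinement needs, so the fetch region follows the predictor instead.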
Isaac, Tobin Gregory. "Scalable, adaptive methods for forward and inverse problems in continental-scale ice sheet modeling". Thesis, 2015. http://hdl.handle.net/2152/31372.
Full text