
Theses on the topic "Estimation scalable de l'incertitude"



Consult the 37 best theses for your research on the topic "Estimation scalable de l'incertitude".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Candela, Rosa. "Robust and scalable probabilistic machine learning methods with applications to the airline industry". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS078.

Full text
Abstract
In the airline industry, price prediction plays a significant role both for customers and travel companies. The former are interested in knowing the price evolution to get the cheapest ticket, while the latter want to offer attractive tour packages and maximize their revenue margin. In this work we introduce some practical approaches to help travelers deal with uncertainty in ticket price evolution, and we propose a data-driven framework to monitor the performance of time-series forecasting models. Stochastic Gradient Descent (SGD) is the workhorse optimization method in machine learning, and this is also true for distributed systems, which in recent years have been increasingly used for complex models trained on massive datasets. In asynchronous systems, workers can use stale versions of the parameters, which slows SGD convergence. In this thesis we fill the gap in the literature and study sparsification methods in asynchronous settings. We provide a concise convergence rate analysis when the joint effects of sparsification and asynchrony are taken into account, and show that sparsified SGD converges at the same rate as standard SGD. Recently, SGD has also played an important role as a way to perform approximate Bayesian inference. Stochastic gradient MCMC algorithms indeed use SGD with a constant learning rate to obtain samples from the posterior distribution. Despite some promising results restricted to simple models, most existing works fall short in easily dealing with the complexity of the loss landscape of deep models. In this thesis we introduce a practical approach to posterior sampling, which requires weaker assumptions than existing algorithms.
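The abstract's use of constant-learning-rate SGD as an approximate posterior sampler can be pictured with a stochastic gradient Langevin dynamics (SGLD) toy shown below. This is a generic sketch on a synthetic Bayesian linear regression, not the sampler proposed in the thesis; every hyper-parameter is an illustrative assumption.

```python
# Minimal sketch (not the thesis's algorithm): constant-step-size stochastic gradient
# updates with injected Gaussian noise (SGLD) used to draw approximate posterior
# samples for a toy Bayesian linear regression. All settings below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = X w_true + noise
n, d = 2000, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.5, size=n)

sigma2, tau2 = 0.25, 10.0          # likelihood and prior variances (assumed known)
step, batch, n_iter, burn_in = 1e-4, 64, 5000, 1000

w = np.zeros(d)
samples = []
for t in range(n_iter):
    idx = rng.choice(n, size=batch, replace=False)
    # Minibatch estimate of the gradient of the negative log posterior
    grad_lik = (n / batch) * X[idx].T @ (X[idx] @ w - y[idx]) / sigma2
    grad_prior = w / tau2
    grad = grad_lik + grad_prior
    # SGLD update: gradient step plus Gaussian noise scaled to the step size
    w = w - 0.5 * step * grad + rng.normal(scale=np.sqrt(step), size=d)
    if t >= burn_in:
        samples.append(w.copy())

samples = np.array(samples)
print("posterior mean estimate:", samples.mean(axis=0))
print("posterior std estimate: ", samples.std(axis=0))
```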
2

Rossi, Simone. "Improving Scalability and Inference in Probabilistic Deep Models". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS042.

Full text
Abstract
Throughout the last decade, deep learning has reached a sufficient level of maturity to become the preferred choice for solving machine learning problems or aiding decision-making processes. At the same time, deep learning is generally not equipped with the ability to accurately quantify the uncertainty of its predictions, making these models less suitable for risk-critical applications. A possible solution to this problem is to employ a Bayesian formulation; however, while this offers an elegant treatment, it is analytically intractable and requires approximations. Despite the huge advancements of the last few years, there is still a long way to go before these approaches become widely applicable. In this thesis, we address some of the challenges of modern Bayesian deep learning by proposing and studying solutions to improve the scalability and inference of these models. The first part of the thesis is dedicated to deep models where inference is carried out using variational inference (VI). Specifically, we study the role of the initialization of the variational parameters and show how careful initialization strategies can make VI deliver good performance even in large-scale models. In this part of the thesis we also study the over-regularization effect of the variational objective on over-parametrized models. To tackle this problem, we propose a novel parameterization based on the Walsh-Hadamard transform; not only does this solve the over-regularization effect of VI, but it also allows us to model non-factorized posteriors while keeping time and space complexity under control. The second part of the thesis is dedicated to a study of the role of priors. While being an essential building block of Bayes' rule, picking good priors for deep learning models is generally hard. For this reason, we propose two different strategies based (i) on the functional interpretation of neural networks and (ii) on a scalable procedure to perform model selection on the prior hyper-parameters, akin to maximization of the marginal likelihood. To conclude this part, we analyze a different kind of Bayesian model (Gaussian processes) and study the effect of placing a prior on all the hyper-parameters of these models, including the additional variables required by inducing-point approximations. We also show how it is possible to infer free-form posteriors on these variables, which conventionally would otherwise have been point-estimated.
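A minimal sketch of the variational-inference ingredients mentioned above (a mean-field Gaussian posterior, reparameterized gradients, and an explicit choice of initial variational parameters) is given below for a toy Bayesian linear regression. It is not the thesis's Walsh-Hadamard parameterization, and all names and settings are assumptions made for the example.

```python
# Minimal sketch (illustrative, not the thesis's method): mean-field Gaussian
# variational inference for Bayesian linear regression trained with the
# reparameterization trick, including an explicit initialization of (mu, rho).
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.3 * rng.normal(size=n)
sigma2, tau2 = 0.09, 1.0                 # likelihood and prior variances (assumed)

# Variational posterior q(w) = N(mu, diag(exp(rho)^2)); a small initial variance
# keeps the KL term from swamping the data term early in training.
mu = np.zeros(d)
rho = np.full(d, -3.0)                   # exp(-3) ~ 0.05 initial std
lr, n_steps, n_mc = 1e-4, 3000, 8

def grad_log_joint(w):
    """Gradient of log p(y | X, w) + log p(w) with respect to w."""
    return X.T @ (y - X @ w) / sigma2 - w / tau2

for step in range(n_steps):
    g_mu = np.zeros(d)
    g_rho = np.zeros(d)
    for _ in range(n_mc):
        eps = rng.normal(size=d)
        w = mu + np.exp(rho) * eps        # reparameterized sample from q
        g = grad_log_joint(w)
        g_mu += g
        g_rho += g * eps * np.exp(rho)
    # The entropy of q contributes +1 per coordinate to the rho gradient
    g_mu /= n_mc
    g_rho = g_rho / n_mc + 1.0
    mu += lr * g_mu                       # gradient ascent on the ELBO
    rho += lr * g_rho

print("posterior mean (VI):", np.round(mu, 2))
print("true weights       :", np.round(w_true, 2))
```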
3

Pinson, Pierre. "Estimation de l'incertitude des prédictions de production éolienne". Phd thesis, École Nationale Supérieure des Mines de Paris, 2006. http://pastel.archives-ouvertes.fr/pastel-00002187.

Full text
Abstract
Wind energy is developing considerably in Europe. However, the intermittent nature of this renewable energy source introduces difficulties for the management of the electrical grid. Moreover, in the context of electricity market deregulation, wind energy is penalized compared with controllable means of production. Forecasting wind power production at horizons of 2-3 days helps the integration of this energy. These forecasts consist of a single value per horizon, corresponding to the most likely production. This information is not sufficient for defining optimal trading or management strategies. This is why our work focuses on the uncertainty of wind power forecasts. The characteristics of this uncertainty are described through an analysis of the performance of some state-of-the-art models, and by highlighting the influence of certain variables on the moments of the forecast error distributions. We then describe a generic method for estimating prediction intervals. It is a nonparametric statistical method that uses fuzzy logic concepts to integrate the expertise acquired about the characteristics of this uncertainty. By estimating several intervals at once, one obtains probabilistic forecasts in the form of a probability density of wind power production for each horizon. The method is evaluated in terms of reliability, sharpness and resolution. In parallel, we explore the possibility of using ensemble forecasts to provide 'error forecasts'. These ensemble forecasts are obtained either by converting meteorological ensemble forecasts (provided by ECMWF or NCEP) or by applying a time-lagged approach. We propose a definition of risk indices, which reflect the spread of the ensembles over one or several consecutive horizons. A probabilistic relation between these risk indices and the level of forecast error is established. In a final part, we consider the participation of wind energy in electricity markets in order to demonstrate the value of the 'uncertainty' information. We explain how to define strategies for participating in electricity pools with deterministic or probabilistic forecasts. The benefits resulting from an estimation of the uncertainty of wind power forecasts are clearly demonstrated.
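The interval-forecasting idea can be illustrated with a deliberately simplified sketch that conditions empirical error quantiles on the forecast level; it omits the fuzzy-logic machinery of the thesis, and the synthetic data and bin edges are assumptions.

```python
# Minimal sketch (a simplification, not the thesis's method): build nonparametric
# prediction intervals around wind power point forecasts by taking empirical
# quantiles of past forecast errors, conditioned on the level of the point forecast.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic history of (point forecast, observed production), in fractions of capacity
n_hist = 5000
forecast = rng.uniform(0, 1, n_hist)
observed = np.clip(forecast + rng.normal(0, 0.05 + 0.15 * forecast * (1 - forecast), n_hist), 0, 1)
errors = observed - forecast

# Condition the error distribution on the forecast level (coarse power bins)
bins = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
levels = (0.05, 0.95)                      # nominal coverage: central 90% interval

def interval(point_forecast):
    """Return a (lower, upper) 90% prediction interval for one point forecast."""
    b = np.clip(np.digitize(point_forecast, bins) - 1, 0, len(bins) - 2)
    mask = (forecast >= bins[b]) & (forecast < bins[b + 1])
    lo, hi = np.quantile(errors[mask], levels)
    return max(0.0, point_forecast + lo), min(1.0, point_forecast + hi)

for pf in (0.1, 0.5, 0.9):
    print(pf, "->", np.round(interval(pf), 3))
```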
4

Lu, Ruijin. "Scalable Estimation and Testing for Complex, High-Dimensional Data". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/93223.

Full text
Abstract
With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, etc. These data provide a rich source of information on disease development, cell evolvement, engineering systems, and many other scientific phenomena. To achieve a clearer understanding of the underlying mechanism, one needs a fast and reliable analytical approach to extract useful information from the wealth of data. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex data, powerful testing of functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a wavelet-based approximate Bayesian computation approach that is likelihood-free and computationally scalable. This approach will be applied to two applications: estimating mutation rates of a generalized birth-death process based on fluctuation experimental data and estimating the parameters of targets based on foliage echoes. The second part focuses on functional testing. We consider using multiple testing in basis-space via p-value guided compression. Our theoretical results demonstrate that, under regularity conditions, the Westfall-Young randomization test in basis space achieves strong control of family-wise error rate and asymptotic optimality. Furthermore, appropriate compression in basis space leads to improved power as compared to point-wise testing in data domain or basis-space testing without compression. The effectiveness of the proposed procedure is demonstrated through two applications: the detection of regions of spectral curves associated with pre-cancer using 1-dimensional fluorescence spectroscopy data and the detection of disease-related regions using 3-dimensional Alzheimer's Disease neuroimaging data. The third part focuses on analyzing data measured on the cortical surfaces of monkeys' brains during their early development, and subjects are measured on misaligned time markers. In this analysis, we examine the asymmetric patterns and increase/decrease trend in the monkeys' brains across time.
Doctor of Philosophy
With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, and biological measurements. These data provide a rich source of information on disease development, engineering systems, and many other scientific phenomena. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex biological and engineering data, powerful testing of high-dimensional functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a computation-based statistical approach that achieves efficient parameter estimation scalable to high-dimensional functional data. The second part focuses on developing a powerful testing method for functional data that can be used to detect important regions. We will show nice properties of our approach. The effectiveness of this testing approach will be demonstrated using two applications: the detection of regions of the spectrum that are related to pre-cancer using fluorescence spectroscopy data and the detection of disease-related regions using brain image data. The third part focuses on analyzing brain cortical thickness data, measured on the cortical surfaces of monkeys’ brains during early development. Subjects are measured on misaligned time-markers. By using functional data estimation and testing approach, we are able to: (1) identify asymmetric regions between their right and left brains across time, and (2) identify spatial regions on the cortical surface that reflect increase or decrease in cortical measurements over time.
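The likelihood-free estimation idea in the first part can be illustrated with plain rejection ABC on a toy Poisson model, sketched below; this is not the wavelet-based ABC scheme of the dissertation, and the prior, tolerance and sample sizes are assumptions.

```python
# Minimal sketch (plain rejection ABC with a mean summary statistic, not the
# wavelet-based ABC of the dissertation): estimate a Poisson rate without
# evaluating the likelihood, by accepting prior draws whose simulated data
# resemble the observed data.
import numpy as np

rng = np.random.default_rng(6)

lam_true = 4.2
observed = rng.poisson(lam_true, size=200)          # "experimental" counts
obs_stat = observed.mean()                          # summary statistic

n_draws, eps = 20_000, 0.15
lam_prior = rng.uniform(0.0, 20.0, n_draws)         # prior over the rate
accepted = []
for lam in lam_prior:
    sim = rng.poisson(lam, size=observed.size)      # simulate from the model
    if abs(sim.mean() - obs_stat) < eps:            # compare summaries
        accepted.append(lam)

accepted = np.array(accepted)
print(f"accepted {accepted.size} draws; "
      f"posterior mean ~ {accepted.mean():.2f}, true rate {lam_true}")
```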
5

Blier, Mylène. "Estimation temporelle avec interruption: les effets de localisation et de durée d'interruptions sont-ils sensibles à l'incertitude ?" Thesis, Université Laval, 2009. http://www.theses.ulaval.ca/2009/26367/26367.pdf.

Full text
6

Blier, Mylène. "Estimation temporelle avec interruption : les effets de localisation et de durée d'interruption sont-ils sensibles à l'incertitude ?" Doctoral thesis, Université Laval, 2009. http://hdl.handle.net/20.500.11794/21201.

Full text
Abstract
In studies of the production of time intervals with an interruption, two main effects are generally found when the location and duration of the interruption vary: temporal productions are shorter (1) when the interruption occurs early in the time interval and (2) when the interruption lasts longer. The location effect is explained mainly by attentional sharing during the period preceding the interruption, but it may also partly reflect the effect of preparatory processes preceding the interruption. The interruption-duration effect is explained by the preparation processes taking place during the interruption. One way to reduce preparatory effects is to increase uncertainty about the arrival time of the signal to which one must react. The main objective of this doctoral dissertation is to test the sensitivity of the location and interruption-duration effects to uncertainty in three experiments. The first two experiments concern the production of time intervals with an interruption. The last experiment examines the effect of uncertainty on preparation in a reaction-time paradigm. In each experiment, two conditions are used: a group in which uncertainty is low and another in which uncertainty is high. In the first experiment, the location effect is stronger in the low-uncertainty group. No significant effect is found in the experiment in which the duration of the interruption is manipulated, which is explained mainly by the size of the interruption-duration effect. In the reaction-time task, the effect of the preparatory period is more pronounced in the low-uncertainty group than in the high-uncertainty group. The results lead to the conclusion that uncertainty affects attentional sharing in the location effect, but above all affects the preparation effect involved in all three experiments. These results are explained by the fact that participants show better preparation when uncertainty is low.
7

Kang, Seong-Ryong. "Performance analysis and network path characterization for scalable internet streaming". Texas A&M University, 2008. http://hdl.handle.net/1969.1/85912.

Full text
Abstract
Delivering high-quality of video to end users over the best-effort Internet is a challenging task since quality of streaming video is highly subject to network conditions. A fundamental issue in this area is how real-time applications cope with network dynamics and adapt their operational behavior to offer a favorable streaming environment to end users. As an effort towards providing such streaming environment, the first half of this work focuses on analyzing the performance of video streaming in best-effort networks and developing a new streaming framework that effectively utilizes unequal importance of video packets in rate control and achieves a near-optimal performance for a given network packet loss rate. In addition, we study error concealment methods such as FEC (Forward-Error Correction) that is often used to protect multimedia data over lossy network channels. We investigate the impact of FEC on the quality of video and develop models that can provide insights into understanding how inclusion of FEC affects streaming performance and its optimality and resilience characteristics under dynamically changing network conditions. In the second part of this thesis, we focus on measuring bandwidth of network paths, which plays an important role in characterizing Internet paths and can benefit many applications including multimedia streaming. We conduct a stochastic analysis of an end-to-end path and develop novel bandwidth sampling techniques that can produce asymptotically accurate capacity and available bandwidth of the path under non-trivial cross-traffic conditions. In addition, we conduct comparative performance study of existing bandwidth estimation tools in non-simulated networks where various timing irregularities affect delay measurements. We find that when high-precision packet timing is not available due to hardware interrupt moderation, the majority of existing algorithms are not robust to measure end-to-end paths with high accuracy. We overcome this problem by using signal de-noising techniques in bandwidth measurement. We also develop a new measurement tool called PRC-MT based on theoretical models that simultaneously measures the capacity and available bandwidth of the tight link with asymptotic accuracy.
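As a back-of-the-envelope illustration of why block FEC helps on lossy channels, the sketch below computes the probability that an (n, k) FEC block is recoverable under independent packet loss; it is a textbook model, not the analytical models derived in the thesis, and the parameters are arbitrary examples.

```python
# Minimal sketch (illustrative model only): probability that a video block protected
# by (n, k) block FEC is recoverable when packet losses are independent with rate p,
# i.e. at most n - k of the n packets are lost.
from math import comb

def block_recovery_prob(n: int, k: int, p: float) -> float:
    """P(at most n - k losses out of n packets), each lost independently with prob p."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(n - k + 1))

for p in (0.01, 0.05, 0.10):
    # 16 data packets protected by 4 parity packets (rate-4/5 FEC)
    print(f"loss rate {p:.2f}: recovery prob = {block_recovery_prob(20, 16, p):.4f}")
```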
8

Rahmani, Mahmood. "Urban Travel Time Estimation from Sparse GPS Data : An Efficient and Scalable Approach". Doctoral thesis, KTH, Transportplanering, ekonomi och teknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-167798.

Full text
Abstract
The use of GPS probes in traffic management is growing rapidly as the required data collection infrastructure is increasingly in place, with significant number of mobile sensors moving around covering expansive areas of the road network. Many travelers carry with them at least one device with a built-in GPS receiver. Furthermore, vehicles are becoming more and more location aware. Vehicles in commercial fleets are now routinely equipped with GPS. Travel time is important information for various actors of a transport system, ranging from city planning, to day to day traffic management, to individual travelers. They all make decisions based on average travel time or variability of travel time among other factors. AVI (Automatic Vehicle Identification) systems have been commonly used for collecting point-to-point travel time data. Floating car data (FCD) -timestamped locations of moving vehicles- have shown potential for travel time estimation. Some advantages of FCD compared to stationary AVI systems are that they have no single point of failure and they have better network coverage. Furthermore, the availability of opportunistic sensors, such as GPS, makes the data collection infrastructure relatively convenient to deploy. Currently, systems that collect FCD are designed to transmit data in a limited form and relatively infrequently due to the cost of data transmission. Thus, reported locations are far apart in time and space, for example with 2 minutes gaps. For sparse FCD to be useful for transport applications, it is required that the corresponding probes be matched to the underlying digital road network. Matching such data to the network is challenging. This thesis makes the following contributions: (i) a map-matching and path inference algorithm, (ii) a method for route travel time estimation, (iii) a fixed point approach for joint path inference and travel time estimation, and (iv) a method for fusion of FCD with data from automatic number plate recognition. In all methods, scalability and overall computational efficiency are considered among design requirements. Throughout the thesis, the methods are used to process FCD from 1500 taxis in Stockholm City. Prior to this work, the data had been ignored because of its low frequency and minimal information. The proposed methods proved that the data can be processed and transformed into useful traffic information. Finally, the thesis implements the main components of an experimental ITS laboratory, called iMobility Lab. It is designed to explore GPS and other emerging data sources for traffic monitoring and control. Processes are developed to be computationally efficient, scalable, and to support real time applications with large data sets through a proposed distributed implementation.
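The core estimation step, attributing route-level travel times to individual links, can be pictured with a plain least-squares sketch on a made-up four-link network; the thesis's map-matching, path inference and data-fusion steps are not reproduced here, and all numbers are assumptions.

```python
# Minimal sketch (illustrative, not the estimator developed in the thesis): recover
# per-link travel times from route-level observations by least squares, where each
# observed route travel time is the sum of the travel times of the links it uses.
import numpy as np

# Route-link incidence matrix: rows = observed trips, columns = network links.
# A[i, j] = 1 if trip i traversed link j.
A = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 1, 1, 0],
    [1, 0, 0, 1],
], dtype=float)

true_link_times = np.array([60.0, 45.0, 90.0, 30.0])        # seconds
rng = np.random.default_rng(3)
observed_route_times = A @ true_link_times + rng.normal(0, 5, size=A.shape[0])

# Least-squares estimate of link travel times (GPS noise averaged out across trips)
est, *_ = np.linalg.lstsq(A, observed_route_times, rcond=None)
print("estimated link times:", np.round(est, 1))
print("true link times     :", true_link_times)
```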

9

Shriram, Alok Kaur Jasleen. "Efficient techniques for end-to-end bandwidth estimation performance evaluations and scalable deployment /". Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2009. http://dc.lib.unc.edu/u?/etd,2248.

Full text
Abstract
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2009.
Title from electronic title page (viewed Jun. 26, 2009). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Computer Science." Discipline: Computer Science; Department/School: Computer Science.
10

Simsa, Jiri. "Systematic and Scalable Testing of Concurrent Programs". Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/285.

Full text
Abstract
The challenge this thesis addresses is to speed up the development of concurrent programs by increasing the efficiency with which concurrent programs can be tested and consequently evolved. The goal of this thesis is to generate methods and tools that help software engineers increase confidence in the correct operation of their programs. To achieve this goal, this thesis advocates testing of concurrent software using a systematic approach capable of enumerating possible executions of a concurrent program. The practicality of the systematic testing approach is demonstrated by presenting a novel software infrastructure that repeatedly executes a program test, controlling the order in which concurrent events happen so that different behaviors can be explored across different test executions. By doing so, systematic testing circumvents the limitations of traditional ad-hoc testing, which relies on chance to discover concurrency errors. However, the idea of systematic testing alone does not quite solve the problem of concurrent software testing. The combinatorial nature of the number of ways in which concurrent events of a program can execute causes an explosion of the number of possible interleavings of these events, a problem referred to as state space explosion. To address the state space explosion problem, this thesis studies techniques for quantifying the extent of state space explosion and explores several directions for mitigating state space explosion: parallel state space exploration, restricted runtime scheduling, and abstraction reduction. In the course of its research exploration, this thesis pushes the practical limits of systematic testing by orders of magnitude, scaling systematic testing to real-world programs of unprecedented complexity.
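The idea of systematic exploration of interleavings can be demonstrated on a classic lost-update example: the sketch below enumerates every schedule of two non-atomic increments and reports which final counter values are reachable. It illustrates the principle only, not the thesis's testing infrastructure.

```python
# Minimal sketch (illustrative of systematic interleaving enumeration): exhaustively
# explore every interleaving of two threads that each perform a non-atomic
# "read, then write read+1" on a shared counter, and report the final values reached.
# The lost-update outcome (1) is exactly the kind of bug chance-based testing can miss.
from itertools import combinations

THREAD_OPS = ["read", "write"]          # each thread: load x, then store load + 1

def run(schedule):
    """Execute one interleaving; schedule is a tuple of thread ids, e.g. (0, 1, 0, 1)."""
    x = 0
    local = {0: None, 1: None}
    pc = {0: 0, 1: 0}                   # per-thread program counter
    for tid in schedule:
        op = THREAD_OPS[pc[tid]]
        if op == "read":
            local[tid] = x
        else:
            x = local[tid] + 1
        pc[tid] += 1
    return x

def all_schedules(len0, len1):
    """All interleavings of len0 ops of thread 0 and len1 ops of thread 1."""
    total = len0 + len1
    for pos0 in combinations(range(total), len0):
        sched = [1] * total
        for p in pos0:
            sched[p] = 0
        yield tuple(sched)

outcomes = {}
for sched in all_schedules(2, 2):
    outcomes.setdefault(run(sched), []).append(sched)

for value, scheds in sorted(outcomes.items()):
    print(f"final counter = {value}: {len(scheds)} interleavings, e.g. {scheds[0]}")
```

Enumerating the six schedules of this toy is trivial; the hard part addressed by the thesis is making the same idea survive the combinatorial explosion in real programs.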
11

Mallet, Vivien. "Estimation de l'incertitude et prévision d'ensemble avec un modèle de chimie transport - Application à la simulation numérique de la qualité de l'air". Phd thesis, Ecole des Ponts ParisTech, 2005. http://pastel.archives-ouvertes.fr/pastel-00001654.

Full text
Abstract
This thesis aims to assess the quality of a chemistry-transport model, not through a classical comparison with observations, but by estimating its a priori uncertainties due to input data, to the model formulation and to numerical approximations. These three sources of uncertainty are studied respectively through Monte Carlo simulations, multi-model simulations and comparisons between numerical schemes. A high uncertainty is found for ozone concentrations. To overcome the limitations due to uncertainty, one strategy lies in ensemble forecasting. By combining several models (up to forty-eight models) on the basis of past observations, forecasts can be significantly improved. This work was also the occasion to develop an innovative modeling system, Polyphemus.
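A toy illustration of Monte Carlo uncertainty propagation is sketched below: uncertain inputs of a one-line box model are perturbed and the spread of the output is summarized. It bears no relation to the Polyphemus system beyond the general principle, and all distributions are assumptions.

```python
# Minimal sketch (illustrative only): Monte Carlo propagation of input uncertainty
# through a toy steady-state box model of a pollutant concentration, c = E / (k * V),
# where emissions E, loss rate k and mixing volume V are uncertain.
import numpy as np

rng = np.random.default_rng(7)
n_runs = 10_000

# Uncertain inputs: lognormal perturbations around nominal values (assumed spreads)
E = 100.0 * rng.lognormal(mean=0.0, sigma=0.3, size=n_runs)   # emission rate
k = 0.05 * rng.lognormal(mean=0.0, sigma=0.2, size=n_runs)    # loss rate
V = 1e3 * rng.lognormal(mean=0.0, sigma=0.1, size=n_runs)     # mixing volume

c = E / (k * V)                                               # ensemble of model outputs

print(f"mean concentration : {c.mean():.1f}")
print(f"std (uncertainty)  : {c.std():.1f}")
print(f"5%-95% range       : [{np.quantile(c, 0.05):.1f}, {np.quantile(c, 0.95):.1f}]")
```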
12

Alrammal, Muath. "Algorithms for XML stream processing : massive data, external memory and scalable performance". Phd thesis, Université Paris-Est, 2011. http://tel.archives-ouvertes.fr/tel-00779309.

Full text
Abstract
Many modern applications require the processing of massive streams of XML data, which creates technical challenges. Among them are the design and implementation of tools to optimize the processing of XPath queries and to provide an accurate estimate of the cost of those queries when processed on a massive XML data stream. In this thesis, we propose a new performance prediction model that estimates a priori the cost (in terms of space used and elapsed time) of structural Forward XPath queries. In doing so, we carry out an experimental study to confirm the linear relationship between stream processing and data-access resources. Consequently, we present a mathematical model (linear regression functions) to predict the cost of a given XPath query. In addition, we present a new selectivity estimation technique. It consists of two elements. The first is the path tree synopsis: a concise and accurate representation of the structure of an XML document. The second is the selectivity estimation algorithm: an efficient streaming algorithm that traverses the path tree synopsis to estimate the values of the cost parameters. These parameters are used by the mathematical model to determine the cost of a given XPath query. We compare the performance of our model with existing approaches. Furthermore, we present a use case of an online stream-querying system. The system uses our performance prediction model to estimate the cost (in terms of time/memory) of a given XPath query and provides an accurate answer to the query's author. This use case illustrates the practical benefits of performance management with our techniques.
13

Schmidt, Aurora C. "Scalable Sensor Network Field Reconstruction with Robust Basis Pursuit". Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/240.

Full text
Abstract
We study a scalable approach to information fusion for large sensor networks. The algorithm, field inversion by consensus and compressed sensing (FICCS), is a distributed method for detection, localization, and estimation of a propagating field generated by an unknown number of point sources. The approach combines results in the areas of distributed average consensus and compressed sensing to form low dimensional linear projections of all sensor readings throughout the network, allowing each node to reconstruct a global estimate of the field. Compressed sensing is applied to continuous source localization by quantizing the potential locations of sources, transforming the model of sensor observations to a finite discretized linear model. We study the effects of structured modeling errors induced by spatial quantization and the robustness of ℓ1 penalty methods for field inversion. We develop a perturbations method to analyze the effects of spatial quantization error in compressed sensing and provide a model-robust version of noise-aware basis pursuit with an upperbound on the sparse reconstruction error. Numerical simulations illustrate system design considerations by measuring the performance of decentralized field reconstruction, detection performance of point phenomena, comparing trade-offs of quantization parameters, and studying various sparse estimators. The method is extended to time-varying systems using a recursive sparse estimator that incorporates priors into ℓ1 penalized least squares. This thesis presents the advantages of inter-sensor measurement mixing as a means of efficiently spreading information throughout a network, while identifying sparse estimation as an enabling technology for scalable distributed field reconstruction systems.
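A generic sketch of ℓ1-penalized field inversion is given below, using plain ISTA to recover a sparse source vector from a few noisy linear projections; the model-robust, noise-aware basis pursuit variant developed in the thesis is not reproduced, and the problem sizes and penalty weight are assumptions.

```python
# Minimal sketch (generic ISTA for l1-penalized least squares, not the model-robust
# basis pursuit variant of the thesis): recover a sparse source vector from a few
# noisy linear sensor projections.
import numpy as np

rng = np.random.default_rng(4)
m, n, k = 40, 120, 4                      # measurements, candidate locations, active sources

A = rng.normal(size=(m, n)) / np.sqrt(m)  # sensing matrix (field + mixing model stand-in)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
y = A @ x_true + 0.01 * rng.normal(size=m)

lam = 0.05                                # l1 penalty weight
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
x = np.zeros(n)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for _ in range(500):                      # ISTA iterations
    grad = A.T @ (A @ x - y)
    x = soft_threshold(x - grad / L, lam / L)

support_est = np.flatnonzero(np.abs(x) > 0.1)
print("true support     :", np.sort(np.flatnonzero(x_true)))
print("estimated support:", support_est)
```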
14

Soh, Jeremy. "A scalable, portable, FPGA-based implementation of the Unscented Kalman Filter". Thesis, The University of Sydney, 2017. http://hdl.handle.net/2123/17286.

Full text
Abstract
Sustained technological progress has come to a point where robotic/autonomous systems may well soon become ubiquitous. In order for these systems to actually be useful, an increase in autonomous capability is necessary for aerospace, as well as other, applications. Greater aerospace autonomous capability means there is a need for high performance state estimation. However, the desire to reduce costs through simplified development processes and compact form factors can limit performance. A hardware-based approach, such as using a Field Programmable Gate Array (FPGA), is common when high performance is required, but hardware approaches tend to have a more complicated development process when compared to traditional software approaches; greater development complexity, in turn, results in higher costs. Leveraging the advantages of both hardware-based and software-based approaches, a hardware/software (HW/SW) codesign of the Unscented Kalman Filter (UKF), based on an FPGA, is presented. The UKF is split into an application-specific part, implemented in software to retain portability, and a non-application-specific part, implemented in hardware as a parameterisable IP core to increase performance. The codesign is split into three versions (Serial, Parallel and Pipeline) to provide flexibility when choosing the balance between resources and performance, allowing system designers to simplify the development process. Simulation results demonstrating two possible implementations of the design, a nanosatellite application and a Simultaneous Localisation and Mapping (SLAM) application, are presented. These results validate the performance of the HW/SW UKF and demonstrate its portability, particularly in small aerospace systems. Implementation (synthesis, timing, power) details for a variety of situations are presented and analysed to demonstrate how the HW/SW codesign can be scaled for any application.
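The unscented transform at the core of the UKF can be written in a few lines of NumPy, as sketched below; this is the textbook algorithm, not the HW/SW codesign described in the thesis, and the scaling parameters and example are illustrative.

```python
# Minimal sketch (the unscented transform at the heart of the UKF, in plain NumPy):
# propagate a Gaussian state estimate through a nonlinear function using 2n + 1
# sigma points. The (alpha, beta, kappa) values are one common, illustrative choice.
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)            # matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])  # 2n + 1 sigma points

    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)

    y = np.array([f(s) for s in sigma])                # propagate each sigma point
    y_mean = wm @ y
    diff = y - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

# Example: polar-to-Cartesian conversion of a noisy (range, bearing) estimate
f = lambda s: np.array([s[0] * np.cos(s[1]), s[0] * np.sin(s[1])])
m, P = np.array([10.0, np.pi / 4]), np.diag([0.1, 0.01])
print(unscented_transform(m, P, f))
```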
15

Raghavan, Venkatesh. "VAMANA -- A high performance, scalable and cost driven XPath engine". Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0505104-185545/.

Full text
16

Werner, Stéphane. "Optimisation des cadastres d'émissions: estimation des incertitudes, détermination des facteurs d'émissions du "black carbon" issus du trafic routier et estimation de l'influence de l'incertitude des cadastres d'émissions sur la modélisation : application aux cadastres Escompte et Nord-Pas-de-Calais". Strasbourg, 2009. https://publication-theses.unistra.fr/public/theses_doctorat/2009/WERNER_Stephane_2009.pdf.

Full text
Abstract
Emissions inventories have a fundamental role in controlling air pollution, both directly, by identifying emissions, and as input data for air pollution models. The main objective of this PhD study is to optimize existing emissions inventories, including the one from the ESCOMPTE programme (Experiments on Site to Constrain Models of Atmospheric Pollution and Transport of Emissions). For that inventory, two separate issues were addressed: one designed to better assess the emission uncertainties and the other to insert a new compound of interest into the inventory: black carbon (BC). Within the first issue, an additional study was conducted on the Nord-Pas-de-Calais emissions inventory to test the methodology for calculating uncertainties. The calculated emission uncertainties were used to assess their influence on air quality modeling (the CHIMERE model). The second part of the research was dedicated to complementing the existing inventory of carbonaceous particle emissions from the road traffic sector by introducing an additional class of compounds: BC. BC is the light-absorbing carbonaceous fraction of atmospheric particles. Its main source is the incomplete combustion of carbonaceous fuels and compounds. It can be regarded as a key atmospheric compound given its impact on climate and on health because of its chemical reactivity.
17

Brunner, Manuela. "Hydrogrammes synthétiques par bassin et types d'événements. Estimation, caractérisation, régionalisation et incertitude". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAU003/document.

Full text
Abstract
Design flood estimates are needed in hydraulic design for the construction of dams and retention basins and in flood management for drawing hazard maps or modeling inundation areas. Traditionally, such design floods have been expressed in terms of peak discharge estimated in a univariate flood frequency analysis. However, design or flood management tasks involving storage, in addition to peak discharge, also require information on hydrograph volume, duration, and shape. A bivariate flood frequency analysis allows the joint estimation of peak discharge and hydrograph volume and the consideration of their dependence. While such bivariate design quantiles describe the magnitude of a design flood, they lack information on its shape. An attractive way of modeling the whole shape of a design flood is to express a representative normalized hydrograph shape as a probability density function. The combination of such a probability density function with bivariate design quantiles allows the construction of a synthetic design hydrograph for a certain return period which describes the magnitude of a flood along with its shape. Such synthetic design hydrographs have the potential to be a useful and simple tool in design flood estimation. However, they currently have some limitations. First, they rely on the definition of a bivariate return period which is not uniquely defined. Second, they usually describe the specific behavior of a catchment and do not express process variability represented by different flood types. Third, they are neither available for ungauged catchments nor are they usually provided together with an uncertainty estimate. This thesis therefore explores possibilities for the construction of synthetic design hydrographs in gauged and ungauged catchments and ways of representing process variability in design flood construction. It proposes tools for both catchment- and flood-type specific design hydrograph construction and regionalization and for the assessment of their uncertainty. The thesis shows that synthetic design hydrographs are a flexible tool allowing for the consideration of different flood or event types in design flood estimation. A comparison of different regionalization methods, including spatial, similarity, and proximity based approaches, showed that catchment-specific design hydrographs can be best regionalized to ungauged catchments using linear and nonlinear regression methods. It was further shown that event-type specific design hydrograph sets can be regionalized using a bivariate index flood approach. In such a setting, a functional representation of hydrograph shapes was found to be a useful tool for the delineation of regions with similar flood reactivities. An uncertainty assessment showed that the record length and the choice of the sampling strategy are major uncertainty sources in the construction of synthetic design hydrographs and that this uncertainty propagates through the regionalization process. This thesis highlights that an ensemble-based design flood approach allows for the consideration of different flood types and runoff processes. This is a step from flood frequency statistics to flood frequency hydrology which allows better-informed decision making.
18

Biletska, Krystyna. "Estimation en temps réel des flux origines-destinations dans un carrefour à feux par fusion de données multicapteurs". Compiègne, 2010. http://www.theses.fr/2010COMP1893.

Full text
Abstract
The quality of the information about the origins and destinations (OD) of vehicles in a junction influences the performance of many road transport systems. The period of its update determines the temporal scale at which these systems operate. We are interested in the problem of reconstructing, at each traffic light cycle, the OD of the vehicles crossing a junction, using the traffic light states and traffic measurements from video sensors. The traffic measurements, provided every second, are the vehicle counts made at each entrance and exit of the junction and the numbers of vehicles stopped in each inner section of the junction. These real data are subject to imperfections. The only existing method capable of solving this problem, named ORIDI, does not take data imperfection into account. We propose a new method that models data imprecision using the theory of fuzzy subsets. It can be applied to any type of junction and is independent of the type of traffic light strategy. The method estimates OD flows from the vehicle conservation law, represented by an underdetermined system of equations constructed dynamically at each traffic light cycle using fuzzy a-timed Petri nets. A unique solution is found thanks to eight different methods, which represent the estimate as a point, an interval or a fuzzy set. Our study shows that the crisp methods are as accurate as ORIDI, but more robust when one of the video sensors breaks down. The interval and fuzzy methods, being less accurate than ORIDI, aim to guarantee that the solution includes the true value.
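The underdetermined conservation system at the heart of the problem can be illustrated with a minimal sketch: the OD flows of a four-arm junction are constrained by entry and exit counts, and a minimum-norm least-squares solution is computed. The fuzzy Petri-net construction and the eight estimation methods of the thesis are not reproduced; all counts are made up.

```python
# Minimal sketch (illustrative only): write the vehicle-conservation law of a
# four-arm junction as an underdetermined linear system in the 12 unknown OD flows,
# then take the minimum-norm least-squares solution for one signal cycle.
import numpy as np

arms = ["N", "E", "S", "W"]
od_pairs = [(i, j) for i in arms for j in arms if i != j]      # 12 OD flows

entries = {"N": 30, "E": 22, "S": 28, "W": 20}                 # vehicles entering per arm
exits = {"N": 25, "E": 27, "S": 24, "W": 24}                   # vehicles exiting per arm

A = np.zeros((8, len(od_pairs)))
b = np.zeros(8)
for r, arm in enumerate(arms):                                 # entry conservation rows
    for c, (o, d) in enumerate(od_pairs):
        if o == arm:
            A[r, c] = 1.0
    b[r] = entries[arm]
for r, arm in enumerate(arms):                                 # exit conservation rows
    for c, (o, d) in enumerate(od_pairs):
        if d == arm:
            A[4 + r, c] = 1.0
    b[4 + r] = exits[arm]

q, *_ = np.linalg.lstsq(A, b, rcond=None)                      # minimum-norm solution
for (o, d), flow in zip(od_pairs, q):
    print(f"{o} -> {d}: {flow:5.1f}")
```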
19

Fissore, Giancarlo. "Generative modeling : statistical physics of Restricted Boltzmann Machines, learning with missing information and scalable training of Linear Flows". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG028.

Full text
Abstract
Neural network models able to approximate and sample high-dimensional probability distributions are known as generative models. In recent years this class of models has received tremendous attention due to its potential for automatically learning meaningful representations of the vast amount of data that we produce and consume daily. This thesis presents theoretical and algorithmic results pertaining to generative models and is divided into two parts. In the first part, we focus our attention on the Restricted Boltzmann Machine (RBM) and its statistical physics formulation. Historically, statistical physics has played a central role in studying the theoretical foundations of, and providing inspiration for, neural network models. The first neural implementation of an associative memory (Hopfield, 1982) is a seminal work in this context. The RBM can be regarded as a development of the Hopfield model, and it is of particular interest due to its role at the forefront of the deep learning revolution (Hinton et al. 2006). Exploiting its statistical physics formulation, we derive a mean-field theory of the RBM that lets us characterize both its functioning as a generative model and the dynamics of its training procedure. This analysis proves useful in deriving a robust mean-field imputation strategy that makes it possible to use the RBM to learn empirical distributions in the challenging case in which the dataset to model is only partially observed and presents high percentages of missing information. In the second part we consider a class of generative models known as Normalizing Flows (NF), whose distinguishing feature is the ability to model complex high-dimensional distributions by employing invertible transformations of a simple tractable distribution. The invertibility of the transformation allows the probability density to be expressed through a change of variables whose optimization by Maximum Likelihood (ML) is rather straightforward but computationally expensive. The common practice is to impose architectural constraints on the class of transformations used for NF in order to make the ML optimization efficient. Proceeding from geometrical considerations, we propose a stochastic gradient descent optimization algorithm that exploits the matrix structure of fully connected neural networks without imposing any constraints on their structure other than the fixed dimensionality required by invertibility. This algorithm is computationally efficient and can scale to very high-dimensional datasets. We demonstrate its effectiveness in training a multilayer nonlinear architecture employing fully connected layers.
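For the normalizing-flow part, the change-of-variables computation that makes full-matrix linear flows expensive can be sketched in a few lines; this shows only the O(d^3) log-determinant term, not the optimization algorithm proposed in the thesis, and the sizes are illustrative.

```python
# Minimal sketch (the change-of-variables computation behind a linear normalizing
# flow with a full, unconstrained weight matrix): the exact log-density of x under
# z = W x with a standard normal base distribution is log N(Wx; 0, I) + log |det W|.
import numpy as np

rng = np.random.default_rng(8)
d, n = 10, 5

W = rng.normal(size=(d, d)) / np.sqrt(d)     # full flow matrix (invertible w.p. 1)
X = rng.normal(size=(n, d))                  # a batch of data points

def linear_flow_log_prob(W, X):
    Z = X @ W.T                              # forward pass z = W x
    base = -0.5 * (Z**2).sum(axis=1) - 0.5 * X.shape[1] * np.log(2 * np.pi)
    _, logdet = np.linalg.slogdet(W)         # O(d^3): the term constrained flows avoid
    return base + logdet

print(np.round(linear_flow_log_prob(W, X), 3))
```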
Los estilos APA, Harvard, Vancouver, ISO, etc.
20

Tamssaouet, Ferhat. "Towards system-level prognostics : modeling, uncertainty propagation and system remaining useful life prediction". Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0079.

Texto completo
Resumen
Prognostics is the process of predicting the remaining useful life (RUL) of components, subsystems, or systems. Until now, however, prognostics has often been approached at the component level, without considering interactions between components and the effects of the environment, which can lead to mispredicting the failure time of complex systems. In this work, a system-level prognostics approach is proposed. It is based on a new modeling framework, the inoperability input-output model (IIM), which accounts for interactions between components and the effects of the mission profile and can be applied to heterogeneous systems. A new methodology is then developed for online joint system RUL (SRUL) prediction and model parameter estimation, based on particle filtering (PF) and gradient descent (GD). In detail, the health state of the system components is estimated and predicted in a probabilistic manner using PF. In the case of consecutive discrepancies between the prior and posterior estimates of the system health state, the proposed estimation method is used to correct and adapt the IIM parameters. Finally, the developed methodology is verified on a realistic industrial system, the Tennessee Eastman Process; the results highlight its effectiveness in predicting the SRUL within a reasonable computing time
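The particle-filtering step described above can be illustrated with a minimal Python sketch; the scalar degradation model, noise levels, and failure threshold below are illustrative assumptions, not the thesis's inoperability input-output model.

    import numpy as np

    # Minimal bootstrap particle filter for probabilistic health-state tracking
    # and a crude RUL prediction, in the spirit of the PF step described above.

    rng = np.random.default_rng(1)
    N = 500                       # number of particles
    decay = 0.02                  # assumed per-step degradation rate
    q, r = 0.005, 0.02            # process / measurement noise std
    fail_threshold = 0.3          # health level considered as failure

    particles = np.ones(N)        # health starts at 1.0 (fully operational)

    def pf_step(particles, y_obs):
        # 1) propagate each particle through the degradation model
        particles = particles - decay + rng.normal(0.0, q, size=particles.shape)
        # 2) weight by the likelihood of the noisy health observation
        w = np.exp(-0.5 * ((y_obs - particles) / r) ** 2)
        w /= w.sum()
        # 3) resample according to the weights
        idx = rng.choice(len(particles), size=len(particles), p=w)
        return particles[idx]

    def predict_rul(particles, horizon=200):
        # propagate noise-free copies forward until they cross the failure threshold
        p = particles.copy()
        rul = np.full(len(p), horizon, dtype=float)
        for t in range(1, horizon + 1):
            p = p - decay
            newly_failed = (p <= fail_threshold) & (rul == horizon)
            rul[newly_failed] = t
        return rul.mean(), np.percentile(rul, [5, 95])

    # simulate a few noisy observations and track the state
    true_health = 1.0
    for t in range(20):
        true_health -= decay
        y = true_health + rng.normal(0.0, r)
        particles = pf_step(particles, y)

    print(predict_rul(particles))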
Los estilos APA, Harvard, Vancouver, ISO, etc.
21

Zhang, Hongwei. "Dependable messaging in wireless sensor networks". Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1155607973.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
22

Friedman, Timur. "Scalable estimation of multicast session characteristics". 2002. https://scholarworks.umass.edu/dissertations/AAI3056225.

Texto completo
Resumen
So that Internet multicast sessions can reach thousands, or even millions, of participants, rather than the tens they reach today, we need new techniques for estimating important session management parameters. These parameters include the number of participants in a multicast session, the topology of the multicast tree, and the loss rates along links in the tree. Such parameters can be used by adaptive algorithms for improved data transmission. For example, forward error correction for lost multicast packets can be tuned to the number of participants, and participants can use topology and loss rate information to help identify fellow participants that are well situated to retransmit multicast packets that have been lost and cannot be reconstituted. New techniques are needed in order to avoid the feedback implosion that results from querying all participants. For example, in estimating the session size, one approach is to count all participants individually. For large session sizes, a more scalable approach is to poll to obtain responses from some fraction of the participants and then estimate the session size based upon the sample of responses received. Likewise, when using traces of data packet receipts and losses to estimate topology and loss rates, collecting all available traces from all participants can consume more bandwidth than is used by the original data packets. A more scalable approach is to thin the traces, or to select only a subset of the traces and limit one's estimation goal to identifying only the lossiest links in the multicast tree. The bandwidth savings from restricting feedback come at a cost in estimation quality. Our interest is in devising techniques that can deliver a satisfactory trade-off for envisioned application requirements. An important part of this task is to characterize the trade-off in precise terms that will allow an application to intelligently choose its own operating point. We propose scaling solutions to three multicast session parameter estimation problems. These are the estimation of session size, of multicast distribution tree topology, and of the location of the lossiest links in a tree.
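The polling idea sketched in this abstract can be illustrated with a toy Python example (a generic estimator, not the dissertation's specific techniques): each participant replies independently with probability p, and the session size is estimated from the number of replies received.

    import numpy as np

    # Toy sketch: estimate the multicast session size from sampled feedback
    # instead of counting every participant (which causes feedback implosion).

    def estimate_session_size(num_replies, p):
        """Unbiased estimate N_hat = k / p, with an approximate standard error
        derived from the binomial variance of k ~ Binomial(N, p)."""
        n_hat = num_replies / p
        stderr = np.sqrt(n_hat * (1 - p) / p)
        return n_hat, stderr

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        true_N, p = 50_000, 0.002           # only ~100 replies expected
        replies = rng.binomial(true_N, p)   # feedback actually received
        n_hat, se = estimate_session_size(replies, p)
        print(f"replies={replies}, estimate={n_hat:.0f} +/- {1.96 * se:.0f}")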
Los estilos APA, Harvard, Vancouver, ISO, etc.
23

Chou, Kao-Peng y 周高鵬. "Disintegrated Channel Estimation in Scalable Filter-and-Forward Relay Networks". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/52ub2d.

Texto completo
Resumen
Doctoral dissertation
National Central University
Department of Communication Engineering
Academic year 105
Cooperative communication, which has attracted the attention of researchers in recent years, enables the efficient use of resources in mobile communication systems. Research on cooperative communication began with relay-generated multi-link transmission. From the simplest amplify-and-forward to the most complicated decode-and-forward, relays serve to extend the coverage of wireless signals in a practical manner. Deploying a single relay or series-connected relays is popular because of its simplicity; conversely, employing parallel relays with space-time coding, referred to as distributed space-time coding (D-STC), obtains the advantage of spatial diversity. In this research, a disintegrated channel estimation technique is proposed to support the spatial diversity offered by cooperative relays. The relaying strategy considered here is a filter-and-forward (FF) method with superimposed training sequences used to estimate the backhaul and access channels separately. To reduce inter-relay interference, a generalized filtering technique is proposed and investigated. Unlike the interference suppression commonly employed in conventional FF relay networks, the generalized filter multiplexes the superimposed training sequences from different relays to the destination by time-division multiplexing (TDM), frequency-division multiplexing (FDM), and code-division multiplexing (CDM). The theoretical mean square errors (MSEs) of the disintegrated channel estimation are derived and match the simulation results. Bayesian Cramer-Rao lower bounds (BCRBs) are derived as the estimation performance benchmark. The improvements offered by the proposed technique are verified by comprehensive computer simulations together with calculations of the derived BCRBs and MSEs.
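As a baseline illustration of training-based channel estimation (a generic least-squares estimator, not the disintegrated superimposed-training scheme proposed in the thesis), a minimal Python sketch follows; the channel length, training length, and SNR are assumptions.

    import numpy as np

    # Generic least-squares channel estimation from a known training sequence.

    rng = np.random.default_rng(0)
    L, T = 4, 64                                    # channel taps, training length
    h = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2)

    s = np.exp(1j * 2 * np.pi * rng.random(T))      # unit-modulus training symbols
    # build the T x L convolution (training) matrix
    S = np.array([[s[t - k] if t - k >= 0 else 0 for k in range(L)] for t in range(T)])

    snr_db = 20
    noise_std = 10 ** (-snr_db / 20)
    noise = noise_std * (rng.normal(size=T) + 1j * rng.normal(size=T)) / np.sqrt(2)
    y = S @ h + noise                               # received training observation

    h_hat = np.linalg.lstsq(S, y, rcond=None)[0]    # LS estimate (pseudo-inverse)
    print("MSE:", np.mean(np.abs(h_hat - h) ** 2))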
Los estilos APA, Harvard, Vancouver, ISO, etc.
24

Hsu, Mei-Yun y 許美雲. "Scalable Module-Based Architecture for MPEG-4 BMA Motion Estimation". Thesis, 2000. http://ndltd.ncl.edu.tw/handle/28766556141470208247.

Texto completo
Resumen
Master's thesis
National Taiwan University
Graduate Institute of Electrical Engineering
Academic year 88
In this work, we present a scalable module-based architecture for the block-matching motion estimation algorithm of MPEG-4. The basic module is a set of processing elements based on a one-dimensional systolic array architecture. To support various applications, different modules of processing elements can be configured to form the processing element array and meet requirements such as variable block size, search range, and computation power. The proposed architecture also has the advantage of a low I/O port count. By eliminating unnecessary signal transitions in the processing elements, the power dissipation of the datapath can be reduced by about half without degrading picture quality. In addition, for data-dominant video applications like motion estimation, power consumption is also significantly influenced by the memory architecture. To reduce the power consumed by external memory accesses, we propose four memory-hierarchy schemes corresponding to different levels of data reuse. The evaluation of all schemes is parameterized, so designers can easily derive a better scheme under reasonable hardware resources and power consumption. With regard to system integration, the influence of the I/O bandwidth between the motion estimation unit and the system bus is also discussed.
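The block-matching algorithm that such architectures accelerate can be stated compactly in software; the following Python sketch performs a full search over a +/-16 window using the sum of absolute differences (SAD), with the block size and search range chosen only for illustration.

    import numpy as np

    # Minimal full-search block matching: for one block of the current frame,
    # find the candidate in the reference frame with minimum SAD.

    def full_search_bma(cur, ref, bx, by, block=16, search=16):
        """Return the motion vector (dy, dx) minimizing SAD for the block whose
        top-left corner is (by, bx) in the current frame."""
        h, w = ref.shape
        cur_blk = cur[by:by + block, bx:bx + block].astype(np.int32)
        best, best_sad = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = by + dy, bx + dx
                if y < 0 or x < 0 or y + block > h or x + block > w:
                    continue                      # candidate falls outside the frame
                cand = ref[y:y + block, x:x + block].astype(np.int32)
                sad = np.abs(cur_blk - cand).sum()
                if sad < best_sad:
                    best_sad, best = sad, (dy, dx)
        return best, best_sad

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
        cur = np.roll(ref, shift=(2, -3), axis=(0, 1))   # synthetic global motion
        print(full_search_bma(cur, ref, bx=24, by=24))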
Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Lee, Chien-tai y 李建泰. "Utilize bandwidth estimation and scalable video coding to improve IPTV performance". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/19772935081286056642.

Texto completo
Resumen
Master's thesis
National Taiwan University of Science and Technology
Department of Electrical Engineering
Academic year 100
With the advance of multimedia technologies and the prevalence of the Internet, multimedia has become one of the major information communication tools. To provide networked multimedia services, Internet Protocol TV (IPTV) and Peer-to-Peer (P2P) network control are among the most important technologies. Internet multimedia has evolved from voice communications to high-definition (HD) video communications, and the Open IPTV Forum (OIPF) aims to propose an IPTV system standard, which underlines the potential of IPTV-related technologies. In this thesis, we propose to adjust and refine the IPTV system, which comprises media codec systems, streaming control, and H.264 encoder and rate control units, to satisfy the application requirements of Internet multimedia. Content Delivery Networks (CDN) and P2P networks are integrated to provide a P2P-IPTV service. For live video streaming, we propose to utilize both scalable video rate control (SVRC) and a network traffic monitor (NTM), where the latter provides feedback control that takes peer bandwidth capacity, network connection information, and delay parameters into account to dynamically adjust the bit-rate of the video encoder. A bandwidth estimation method is developed to address the bottleneck of insufficient bandwidth in a shared network environment. To improve the reliability of video transmission quality, the SVRC module and the NTM method are designed to operate at the best bandwidth utilization of the IPTV system. Compared with previous research, our experiments show that the proposed bandwidth estimation, used as feedback control for the IPTV system, can effectively reduce transmission delay and improve the stability of the transmitted video quality.
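A minimal sketch of the feedback-style bitrate adaptation described above is shown below; the constants and the interface are illustrative assumptions, not the thesis's SVRC and NTM modules.

    # Feedback-style bitrate adaptation: an estimated available bandwidth (e.g.
    # reported by a network monitor) drives the encoder's target bit-rate, with
    # a safety margin and smoothing to avoid oscillation.

    class BitrateController:
        def __init__(self, min_kbps=200, max_kbps=6000, headroom=0.8, alpha=0.3):
            self.min_kbps = min_kbps      # lowest usable encoder rate
            self.max_kbps = max_kbps      # highest rate/layer we can produce
            self.headroom = headroom      # keep a 20% margin below the estimate
            self.alpha = alpha            # exponential smoothing factor
            self.target = min_kbps

        def update(self, estimated_bw_kbps, rtt_ms):
            usable = estimated_bw_kbps * self.headroom
            if rtt_ms > 300:              # back off further when delay grows
                usable *= 0.7
            # smooth to avoid frequent encoder reconfigurations
            self.target = (1 - self.alpha) * self.target + self.alpha * usable
            self.target = min(max(self.target, self.min_kbps), self.max_kbps)
            return self.target

    ctrl = BitrateController()
    for bw, rtt in [(3000, 80), (2500, 120), (900, 350), (1800, 90)]:
        print(round(ctrl.update(bw, rtt)))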
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Li, Yao y 李曜. "A Quality Scalable H.264/AVC Fractional Motion Estimation IP Design and Implementation". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/89804086435993035420.

Texto completo
Resumen
Master's thesis
National Chung Cheng University
Institute of Computer Science and Information Engineering
Academic year 96
This thesis presents a quality-scalable fractional motion estimation (QS-FME) IP design for H.264/AVC video coding applications. The proposed design is based on the QS-FME algorithm, which supports three modes of different computational complexity: full mode, reduced mode, and single mode. Compared with full mode, the single mode reduces the computational complexity by 90% and is suited to portable devices. Full mode achieves an average PSNR drop of only 0.007 dB and, in some cases, even provides better compression quality than the algorithm in JM9.3. The reduced mode achieves real-time encoding of HD720 sequences. To further target portable devices, we also developed a cost-reduced customized QS-FME, named Light QS-FME. In CCU 0.13 um CMOS technology, the proposed QS-FME and Light QS-FME designs cost 180,232 and 59,394 gates, together with 27.264 Kbits and 87.936 Kbits of local memory for search ranges [-16, +15] and [-40, +39], respectively. The maximum operating frequency of both designs is 150 MHz, enabling real-time motion estimation on QCIF, CIF, SDTV (720 x 480), and HD720 (1280 x 720) video sequences.
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Chien, Wen-Hsien y 錢文賢. "Design and Implementation Scalable of VLSI Architecture for Variable Block Size Motion Estimation". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/11917898479588814560.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Tsao, Ko-Chia y 曹克嘉. "Motion estimation design for H.264/MPEG4-AVC video coding and its scalable extension". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/15411133728236860335.

Texto completo
Resumen
Master's thesis
National Chiao Tung University
Institute of Electronics
Academic year 100
Motion estimation (ME) is the most complex part and the bottleneck of a real-time video encoder. The adoption of inter-layer (IL) prediction in the H.264/AVC SVC extension further increases the computing time and memory bandwidth of ME. We therefore adopt a previously proposed data-efficient inter-layer prediction algorithm [4] to save memory bandwidth. In this thesis, we propose a corresponding hardware architecture for inter-layer prediction that can process the INTER mode and the different inter-layer prediction modes in parallel, saving both computing time and memory bandwidth. Furthermore, to reduce the high complexity of fractional motion estimation (FME), we adopt Single-Pass Fractional Motion Estimation (SPFME) as the fast FME algorithm in our FME process, and we propose a corresponding FME hardware architecture based on a previous FME design [3]. Compared with that architecture, the proposed one is up to four times faster. Because the adoption of inter-layer prediction and different block types leads to many prediction modes, we further adopt Li's pre-selection algorithm to eliminate some prediction modes from the FME process. Since the Parallel Multi-Resolution Motion Estimation (PMRME) algorithm [1] is adopted in our IME process, we additionally propose a multi-level mode filtering scheme that selects three prediction modes from three different search levels. Finally, we integrate the adopted IL prediction, mode filtering, and the SPFME algorithm. Simulation results show that the proposed function flow with mode filtering incurs an average bit-rate increase of 3.542% and a PSNR degradation of 0.106 dB for CIF sequences with two spatial layers. Implementation results of the whole ME architecture are also presented; it supports CIF+480p+1080p video at 60 fps at 135 MHz.
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Wang, Te-Heng y 王特亨. "Fast Priority-Based Mode Decision and Activity-Based Motion Estimation for Scalable Video Coding". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/98022885907305615825.

Texto completo
Resumen
Master's thesis
National Dong Hwa University
Institute of Electronics Engineering
Academic year 98
In H.264/AVC scalable video coding (SVC), the multi-layer motion estimation between different layers achieves spatial scalability but comes with high coding complexity. To accelerate SVC encoding, a priority-based mode decision and an activity-based motion estimation are proposed in this thesis. In the proposed mode decision, the rate-distortion costs of the base layer are used to decide the priority of each mode in the enhancement layer. The activity-based motion estimation employs the motion vector difference in the base layer to decrease the search range. We also propose a computation-scalable algorithm that provides quality scalability according to the allocated computation power. Through the proposed algorithms, the computational complexity arising from the enhancement layer can be efficiently reduced. Compared with JSVM, the experimental results demonstrate that 72% to 81% of the encoding time is saved with a negligible 0.05 dB PSNR decrease and only a 1.9% bitrate increase.
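The two ideas above can be illustrated with a small Python sketch; the thresholds, scaling factor, and mode names are assumptions rather than the thesis's exact decision rules.

    import numpy as np

    # (1) Order enhancement-layer candidate modes by their base-layer RD cost,
    #     so early termination can stop after the likely winners.
    # (2) Shrink the enhancement-layer search range when the base-layer motion
    #     vector difference (MVD) indicates low activity.

    def mode_priority(base_layer_rd_costs):
        """Return modes sorted by base-layer RD cost (cheapest first)."""
        return sorted(base_layer_rd_costs, key=base_layer_rd_costs.get)

    def adaptive_search_range(base_mvd, full_range=16, min_range=2):
        """Scale the search range with base-layer motion activity."""
        activity = max(abs(base_mvd[0]), abs(base_mvd[1]))
        return int(np.clip(activity * 2, min_range, full_range))

    rd_costs = {"SKIP": 120.0, "16x16": 150.0, "16x8": 310.0, "8x8": 480.0}
    print(mode_priority(rd_costs))            # try SKIP and 16x16 first
    print(adaptive_search_range((1, 0)))      # small MVD -> small search range
    print(adaptive_search_range((9, -4)))     # large MVD -> full search range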
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Wang, Xiaoming. "Robust and Scalable Sampling Algorithms for Network Measurement". 2009. http://hdl.handle.net/1969.1/ETD-TAMU-2009-08-2391.

Texto completo
Resumen
Recent growth of the Internet in both scale and complexity has imposed a number of difficult challenges on existing measurement techniques and approaches, which are essential for both network management and many ongoing research projects. For any measurement algorithm, achieving both accuracy and scalability is very challenging given hard resource constraints (e.g., bandwidth, delay, physical memory, and CPU speed). My dissertation research tackles this problem by first proposing a novel mechanism called residual sampling, which intentionally introduces a predetermined amount of bias into the measurement process. We show that such biased sampling can be extremely scalable; moreover, we develop residual estimation algorithms that can unbiasedly recover the original information from the sampled data. Utilizing these results, we further develop two versions of the residual sampling mechanism: a continuous version for characterizing the user lifetime distribution in large-scale peer-to-peer networks and a discrete version for monitoring flow statistics (including per-flow counts and the flow size distribution) in high-speed Internet routers. For the former application in P2P networks, this work presents two methods: ResIDual-based Estimator (RIDE), which takes single-point snapshots of the system and assumes systems with stationary arrivals, and Uniform RIDE (U-RIDE), which takes multiple snapshots and adapts to systems with arbitrary (including non-stationary) arrival processes. For the latter application in traffic monitoring, we introduce Discrete RIDE (D-RIDE), which allows one to sample each flow with a geometric random variable. Our numerous simulations and experiments with P2P networks and real Internet traces confirm that these algorithms produce accurate estimates of the monitored metrics while simultaneously meeting hard resource constraints. These results show that residual sampling indeed provides an ideal solution to balancing between accuracy and scalability.
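A heavily simplified stand-in for the "introduce bias, then correct it" idea is sketched below in Python: plain probabilistic packet sampling with an inverse-probability correction. This is not the dissertation's residual-sampling estimators (RIDE, U-RIDE, D-RIDE), but it conveys the accuracy-versus-memory trade-off; the flow-size model and sampling probability are assumptions.

    import numpy as np

    # Keep each packet independently with probability p, then recover the total
    # packet count with the usual inverse-probability correction.

    rng = np.random.default_rng(0)
    p = 0.01                                                   # sampling probability
    true_flow_sizes = rng.pareto(1.5, size=20_000).astype(int) + 1   # heavy-tailed flows

    sampled_counts = rng.binomial(true_flow_sizes, p)   # packets surviving sampling
    estimated_total = sampled_counts.sum() / p          # unbiased estimate of total packets

    print("true total packets     :", true_flow_sizes.sum())
    print("estimated total packets:", int(estimated_total))
    print("kept packets (memory)  :", sampled_counts.sum())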
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Mir, arabbaygi Siavash. "Novel scalable approaches for multiple sequence alignment and phylogenomic reconstruction". Thesis, 2015. http://hdl.handle.net/2152/31377.

Texto completo
Resumen
The amount of biological sequence data is increasing rapidly, a promising development that would transform biology if we can develop methods that analyze large-scale data efficiently and accurately. A fundamental question in evolutionary biology is building the tree of life: a reconstruction of relationships between organisms in evolutionary time. Reconstructing phylogenetic trees from molecular data is an optimization problem that involves many steps. In this dissertation, we argue that to answer long-standing phylogenetic questions with large-scale data, several challenges need to be addressed in various steps of the pipeline. One challenge is aligning large numbers of sequences so that evolutionarily related positions in all sequences are put in the same column. Constructing alignments is necessary for phylogenetic reconstruction, but also for many other types of evolutionary analyses. In response to this challenge, we introduce PASTA, a scalable and accurate algorithm that can align datasets with up to a million sequences. A second challenge is related to the interesting fact that various parts of the genome can have different evolutionary histories. Reconstructing a species tree from genome-scale data needs to account for these differences. A main approach for species tree reconstruction is to first reconstruct a set of "gene trees" from different parts of the genome, and to then summarize these gene trees into a single species tree. We argue that this approach can suffer from two challenges: reconstruction of individual gene trees from limited data can be plagued by estimation error, which translates to errors in the species tree, and methods that summarize gene trees are not scalable or accurate enough under some conditions. To address the first challenge, we introduce statistical binning, a method that re-estimates gene trees by grouping them into bins. We show that binning improves gene tree accuracy, and consequently the species tree accuracy. To address the second challenge, we introduce ASTRAL, a new summary method that can run on a thousand genes and a thousand species in a day and has outstanding accuracy. We show that the development of these methods has enabled biological analyses that were otherwise not possible.
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Syu, Jhe-wei y 許哲維. "Fast Inter-Layer Motion Estimation Algorithm on Spatial Scalability in H.264/AVC Scalable Extension". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/52s9c2.

Texto completo
Resumen
Master's thesis
National Central University
Institute of Communication Engineering
Academic year 97
With the improvements in video coding technology, network infrastructure, storage capacity, and CPU computing capability, the applications of multimedia systems have become wider and more popular. Therefore, how to efficiently provide video sequences to users under different constraints is very important, and scalable video coding is one of the best solutions to this problem. The H.264 scalable extension (SVC), built on H.264/AVC, is the most recent scalable video coding standard. SVC utilizes inter-layer prediction to substantially improve coding efficiency compared with prior scalable video coding standards. Nevertheless, this technique results in extremely high computational complexity, which hinders its practical use. In particular, for spatial scalability, the enhancement layer motion estimation accounts for over 90% of the total complexity. The main objective of this work is to reduce the computational complexity while maintaining both video quality and bit-rate. This thesis proposes a fast inter-layer motion estimation algorithm for temporal and spatial scalability in SVC. We utilize the relation between the two motion vector predictors from the base layer and the enhancement layer, respectively, together with the correlation among all modes, to reduce the number of searches. Simulation results show that the proposed algorithm can reduce the computational complexity by up to 67.4% compared with JSVM 9.12, with less than 0.0476 dB of video quality degradation.
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Veerapandian, Lakshmi. "A spatial scalable video coding with selective data transmission using wavelet decomposition". Thesis, 2010. http://hdl.handle.net/2440/61956.

Texto completo
Resumen
In this research, a scalable video coding framework is proposed, focusing mainly on spatial scalability, together with a subjective data compression algorithm based on: (1) quality, (2) resolution (target output device), and (3) bandwidth. This framework enables the scalable delivery of video according to the output display resolution, and over a congested network or limited bandwidth with acceptable visual quality. To achieve this scalable framework we use wavelets, for greater flexibility, and a multiresolution approach. Multiresolution motion estimation (MRME) allows motion vectors to be reused across different resolution levels. In MRME, the motion estimation, which is carried out in the wavelet domain, is initially performed at the lowest resolution, and the resulting motion vectors are used as a basic motion estimate at the higher resolutions. Translating motion vectors across resolution levels results in translation errors, or mismatches. These mismatches are identified using a novel approach based on two thresholds. The first threshold is used to determine the possible occurrence of mismatches in a given video frame, given the motion in the previous frame; this gives a broad localization of all the mismatches. To focus specifically on the worst mismatches among them, a second threshold is used, which gives a more accurate identification of the mismatches that definitely need to be handled, while the others can be skipped depending on the available resources. By varying these two parameters, the quality and resolution of the video can be adjusted to suit the bandwidth requirements. The next step is handling the identified mismatches. The refinements are handled in one of two ways: by motion vector correction, which gives improved prediction, or by directly replacing the error block. We also present a brief comparative study of the two error correction methods, discussing their benefits and drawbacks. The methods used here give a precise motion estimate, thereby exploiting temporal redundancy efficiently and providing an effective scalability solution. This scalable framework is useful for providing flexible multiresolution adaptation to various network and terminal capabilities, graceful quality degradation during severe network conditions, and better error robustness.
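The two-threshold mismatch identification can be illustrated with a short Python sketch; the error metric and the threshold values are assumptions for illustration only.

    import numpy as np

    # After reusing low-resolution motion vectors at a higher resolution, blocks
    # whose residual error exceeds a first threshold are flagged as possible
    # mismatches; a second, stricter threshold singles out the worst ones that
    # must be refined (the rest can be skipped if resources are scarce).

    def classify_mismatches(block_errors, t1, t2):
        """Return (possible, must_refine) boolean masks over the blocks."""
        errors = np.asarray(block_errors, dtype=float)
        possible = errors > t1            # broad localization of mismatches
        must_refine = errors > t2         # worst mismatches (t2 > t1)
        return possible, must_refine

    errors = [3.0, 18.5, 7.2, 42.0, 1.1, 25.4]      # e.g. per-block SAD of the prediction
    possible, must = classify_mismatches(errors, t1=5.0, t2=20.0)
    print("possible mismatches:", np.flatnonzero(possible))   # refine if bandwidth allows
    print("must refine        :", np.flatnonzero(must))       # always refine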
Thesis (M.Eng.Sc.) -- University of Adelaide, School of Electrical and Electronic Engineering, 2010
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Kao, Hsiang-Chieh y 高祥桔. "Fast Motion Estimation Algorithm and Its Architecture Analysis for H.264/AVC Scalable Extension IP Design". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/77242030713406273263.

Texto completo
Resumen
Master's thesis
Yunlin University of Science and Technology
Institute of Electronic and Information Engineering
Academic year 98
In the past few years, wireless communications with varying bandwidth have driven innovation in video compression technology and the development of a standard for scalable video coding (SVC). The H.264/AVC scalable extension extends H.264/AVC for SVC. Compared with H.264/AVC, it adds three features that provide temporal, spatial, and signal-to-noise ratio (SNR) scalability. These make it more flexible than H.264/AVC but also cause huge encoding complexity, especially for motion vector (MV) searching. The proposed motion estimation (ME) process is extended for spatial scalability. It utilizes the properties of the up-sampled residual in the base layer (BL) to choose, in the enhancement layer (EL), between normal motion estimation (NME) and inter-layer residual motion estimation (ILRME). Since only one of NME or ILRME is applied, about half of the software computation or hardware operation is saved. This is combined with a low-memory-bandwidth, low-complexity, high-quality integer motion estimation (IME) algorithm based on group of macroblocks (GOMB) and adaptive search range (ASR). Compared with the design presented by National Taiwan University's Electrical Engineering Institute (NTU-EE) in 2005, the proposed architecture saves between 35% and 40% of external memory bandwidth and reduces internal memory by up to 85%; as for video quality, the proposed algorithms lose 0.1 dB of PSNR on average for HD (1280×720) encoding.
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Maharaju, Rajkumar. "Devolopment of Mean and Median Based Adaptive Search Algorithm for Motion Estimation in SNR Scalable Video Coding". Thesis, 2015. http://ethesis.nitrkl.ac.in/7725/1/2015_MT_Development_Rajkumar_Maharaju.pdf.

Texto completo
Resumen
Nowadays, video quality in encoding is a challenge in many video applications such as video conferencing, live streaming, and video surveillance. The development of technology has resulted in a wide variety of devices and network conditions, which makes video coding increasingly challenging. An answer to these needs can be scalable video coding, where a single bit stream contains more than one layer, known as the base and enhancement layers respectively. There are several types of scalability: spatial, SNR, and temporal. Among these, SNR scalability deals with the quality of the frames, i.e. the base layer includes the lowest-quality frames and the enhancement layer carries frames with better quality. Motion estimation is the most important aspect of video coding. Usually, adjacent frames of a video are very similar to each other; hence, to increase coding efficiency by removing redundancy, as well as to reduce computational complexity, motion should be estimated and compensated. In this work, videos are encoded in SNR scalability mode and motion estimation is then carried out by two proposed methods. The first approach eliminates the unnecessary blocks that have not undergone motion by applying a specific threshold value for every search region; it is desirable to reduce computation time to increase efficiency, but not at the cost of much quality. In the second method, the search is optimized using the particle swarm optimization (PSO) technique, a computational method that aims at optimizing a problem with the help of a population of candidate solutions. In PSO-based block matching, a swarm of particles flies in random directions in the search window of the reference frame; each particle can be indexed by the horizontal and vertical coordinates of the center pixel of a candidate block. This algorithm mainly reduces computation time by checking some random positions in the search window to find the best match, and it estimates the motion with very low complexity in the context of video coding. Both methods have been analyzed and their performance compared on various video sequences. The proposed techniques outperform existing techniques in terms of computational complexity and video quality
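The first method (threshold-based elimination of motionless blocks) can be sketched in a few lines of Python; the threshold value and block size are assumptions, and the PSO-based search of the second method is not shown here.

    import numpy as np

    # Skip the motion search entirely for blocks that show (almost) no motion,
    # detected by comparing the zero-motion SAD against a threshold.

    def blocks_to_search(cur, ref, block=16, threshold=500):
        """Return the (y, x) block origins whose zero-motion SAD exceeds the
        threshold, i.e. the only blocks that still need a motion search."""
        active = []
        h, w = cur.shape
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                c = cur[y:y + block, x:x + block].astype(np.int32)
                r = ref[y:y + block, x:x + block].astype(np.int32)
                if np.abs(c - r).sum() > threshold:
                    active.append((y, x))
        return active

    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    cur = ref.copy()
    cur[16:32, 16:32] = np.roll(cur[16:32, 16:32], 4, axis=1)   # only one moving block
    print(blocks_to_search(cur, ref))                           # -> [(16, 16)]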
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Li, Gwo-Long y 李國龍. "The Study of Bandwidth Efficient Motion Estimation for H.264/MPEG4-AVC Video Coding and Its Scalable Extension". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/74663865189997103364.

Texto completo
Resumen
Doctoral dissertation
National Chiao Tung University
Institute of Electronics
Academic year 100
In a video coding system, the overall performance is dominated by the motion estimation module because of its high computational complexity and its memory-bandwidth-intensive data accesses. With the increasing demand for high-definition TV, the performance drop caused by intensive data bandwidth requirements becomes even more significant. In addition, the inter-layer prediction modes adopted in scalable video coding further increase the data access bandwidth overhead and the computational complexity. To address these problems, this dissertation proposes several algorithms that reduce the data access bandwidth and the computational complexity of both integer and fractional motion estimation. First, a rate-distortion bandwidth efficient motion estimation algorithm is proposed to reduce the data bandwidth requirements of integer motion estimation. A mathematical model is developed to describe the relationship between rate-distortion cost and data bandwidth, and the bandwidth-efficient motion estimation algorithm is derived from the modeling results. In addition, a bandwidth-aware motion estimation algorithm based on the same model is proposed to efficiently allocate the data bandwidth for motion estimation under an available-bandwidth constraint. Simulation results show that the proposed algorithm can achieve 78.82% data bandwidth saving. In the scalable video coding standard, the additional inter-layer prediction modes significantly deteriorate coding system performance because much more data must be accessed for prediction. This dissertation therefore proposes several data-efficient inter-layer prediction algorithms to relieve the intensive data bandwidth requirements of scalable video coding. By observing the relationship between spatial layers, several data reuse algorithms are proposed, achieving further reductions in data bandwidth requirements. Simulation results demonstrate that the proposed algorithms achieve at least a 50.55% reduction in data bandwidth. Besides the performance degradation caused by intensive data accesses, the high computational complexity of fractional motion estimation also noticeably reduces system performance in scalable video coding. This dissertation therefore proposes a mode pre-selection algorithm for fractional motion estimation in scalable video coding. The rate-distortion cost relationships between different prediction modes are first observed and analyzed, and based on these results several mode pre-selection rules are proposed to filter out potentially skippable prediction modes. Simulation results show that the proposed mode pre-selection algorithm can eliminate 65.97% of the prediction modes with negligible rate-distortion performance degradation. Finally, to counter the performance drop caused by skipping the fractional motion estimation process for hardware implementation reasons, this dissertation proposes a search range adjustment algorithm that adjusts the motion estimation search range so that the newly decided search range covers as much as possible of the reference data needed by fractional motion estimation. By mathematically modeling the relationship between the motion vector predictor and the size of the non-overlapping area, the new search range can be determined. In addition, a search range aspect ratio adjustment algorithm is proposed by solving the corresponding mathematical equations. With the proposed search range adjustment algorithm, up to 90.56% of the bitrate increase can be avoided compared with the fractional motion estimation skipping mechanism. Furthermore, the proposed search range aspect ratio adjustment algorithm achieves better rate-distortion performance than exhaustive search under the same search range area constraint. In summary, the algorithms proposed in this dissertation reduce both the data access bandwidth and the computational complexity of integer and fractional motion estimation, thereby significantly improving overall video coding system performance.
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Isaac, Tobin Gregory. "Scalable, adaptive methods for forward and inverse problems in continental-scale ice sheet modeling". Thesis, 2015. http://hdl.handle.net/2152/31372.

Texto completo
Resumen
Projecting the ice sheets' contribution to sea-level rise is difficult because of the complexity of accurately modeling ice sheet dynamics for the full polar ice sheets, because of the uncertainty in key, unobservable parameters governing those dynamics, and because quantifying the uncertainty in projections is necessary when determining the confidence to place in them. This work presents the formulation and solution of the Bayesian inverse problem of inferring, from observations, a probability distribution for the basal sliding parameter field beneath the Antarctic ice sheet. The basal sliding parameter is used within a high-fidelity nonlinear Stokes model of ice sheet dynamics. This model maps the parameters "forward" onto a velocity field that is compared against observations. Due to the continental-scale of the model, both the parameter field and the state variables of the forward problem have a large number of degrees of freedom: we consider discretizations in which the parameter has more than 1 million degrees of freedom. The Bayesian inverse problem is thus to characterize an implicitly defined distribution in a high-dimensional space. This is a computationally demanding problem that requires scalable and efficient numerical methods be used throughout: in discretizing the forward model; in solving the resulting nonlinear equations; in solving the Bayesian inverse problem; and in propagating the uncertainty encoded in the posterior distribution of the inverse problem forward onto important quantities of interest. To address discretization, a hybrid parallel adaptive mesh refinement format is designed and implemented for ice sheets that is suited to the large width-to-height aspect ratios of the polar ice sheets. An efficient solver for the nonlinear Stokes equations is designed for high-order, stable, mixed finite-element discretizations on these adaptively refined meshes. A Gaussian approximation of the posterior distribution of parameters is defined, whose mean and covariance can be efficiently and scalably computed using adjoint-based methods from PDE-constrained optimization. Using a low-rank approximation of the covariance of this distribution, the covariance of the parameter is pushed forward onto quantities of interest.
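The final "push forward" step can be illustrated with a toy Python sketch of linearized uncertainty propagation, var(q) ~ J * C_post * J^T; the tiny dimensions, the synthetic covariance, and the stand-in quantity of interest are assumptions, not the ice sheet model itself.

    import numpy as np

    # Linearized propagation of posterior uncertainty onto a scalar quantity of
    # interest q = g(m): var(q) ~ J @ C_post @ J, with J the gradient of g at
    # the MAP point.

    rng = np.random.default_rng(0)
    n = 50                                       # (tiny) parameter dimension

    # synthetic posterior covariance: a prior term minus a low-rank, data-informed
    # update (positive definite by construction here)
    A = rng.normal(size=(n, n))
    C_prior = A @ A.T / n + np.eye(n)
    V = rng.normal(size=(n, 5))                  # 5 data-informed directions
    C_post = C_prior - V @ np.linalg.solve(np.eye(5) + V.T @ V, V.T)

    def quantity_of_interest(m):                 # stand-in for e.g. an ice flux
        return np.sum(np.tanh(m))

    m_map = rng.normal(size=n)
    eps = 1e-6
    J = np.array([(quantity_of_interest(m_map + eps * e)
                   - quantity_of_interest(m_map - eps * e)) / (2 * eps)
                  for e in np.eye(n)])           # finite-difference gradient

    var_q = J @ C_post @ J                       # linearized variance of q
    print(quantity_of_interest(m_map), "+/-", np.sqrt(var_q))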
Los estilos APA, Harvard, Vancouver, ISO, etc.
Ofrecemos descuentos en todos los planes premium para autores cuyas obras están incluidas en selecciones literarias temáticas. ¡Contáctenos para obtener un código promocional único!

Pasar a la bibliografía