Theses on the topic « Distortion estimation »


Consult the 45 best theses for your research on the topic « Distortion estimation ».


1

Mitikiri, Praveen Kumar. « Rate distortion analysis for conditional motion estimation ». Thesis, Wichita State University, 2008. http://hdl.handle.net/10057/2010.

Full text
Abstract:
Rate-distortion analysis is a branch of information theory that predicts the tradeoffs between rate and distortion in source coding. In this thesis, we present the rate-distortion analysis for conditional motion estimation, a process that estimates motion based on a criterion affecting the coding rate, the complexity of the coding scheme and the quality of the reconstructed video. To guide the rate-distortion analysis, we use a conditional motion estimation scheme that estimates motion for certain blocks selected on the basis of significant changes. We begin by explaining the conditional motion estimation technique and the effect of the decision criteria on the technique. We then model the motion vectors as a Gaussian-Markov process and study the rate-distortion tradeoffs in the video encoding scheme. The rate-distortion bound derived in this manner is also validated with a practical approach.
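The Shannon rate-distortion bound that underlies this kind of analysis can be sketched for a memoryless Gaussian source under squared-error distortion (a generic textbook illustration, not the thesis's conditional-motion-estimation derivation):

```python
import math

def gaussian_rd_bound(variance, distortion):
    """Rate-distortion function of a memoryless Gaussian source under
    squared-error distortion: R(D) = 0.5 * log2(variance / D) bits per
    sample for 0 < D < variance, and 0 otherwise."""
    if distortion >= variance:
        return 0.0
    return 0.5 * math.log2(variance / distortion)

# Halving the allowed distortion costs an extra half bit per sample:
rate_1 = gaussian_rd_bound(4.0, 1.0)   # 1.0 bit/sample
rate_2 = gaussian_rd_bound(4.0, 0.5)   # 1.5 bits/sample
```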
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical and Computer Engineering
Includes bibliographic references (leaves 28-31)
2

Mitikiri, Praveen Kumar, and Namuduri, Kamiswara. « Rate distortion analysis for conditional motion estimation ». A link to full text of this thesis in SOAR, 2008. http://hdl.handle.net/10057/2010.

Full text
Abstract:
Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical and Computer Engineering.
Copyright 2008 by Praveen Kumar Mitikiri. All Rights Reserved. Includes bibliographical references (leaves 28-31).
3

Smith, Katherine Nicole. « New Methodology for the Estimation of StreamVane Design Flow Profiles ». Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/82039.

Full text
Abstract:
Inlet distortion research has become increasingly important over the past several years as demands for aircraft flight efficiency and performance have increased. To accommodate these demands, research has shifted its emphasis onto airframe-engine integration and an improved understanding of engine operability in less-than-ideal conditions. Swirl distortion, a type of non-uniform inflow inlet distortion, is characterized by the presence of swirling flow in an inlet. Swirling flow entering an engine can affect the compression system's performance and operability, and it is therefore an area of current research. A swirl distortion generation device created by Virginia Tech, the StreamVane, can produce various swirl distortion flow profiles. In its current state, the StreamVane methodology generates a design swirl distortion at the trailing edge of the device. In many applications, however, the plane at which the researcher wants the desired distortion lies downstream of the StreamVane trailing edge, and the distortion develops as it moves downstream after being discharged from the device. Therefore, to replicate a desired swirl distortion more accurately at a given downstream plane, distortion development downstream of the StreamVane must be considered. Virginia Tech currently uses a numerical modeling design tool, designated StreamFlow, that predicts how a StreamVane-generated distortion propagates downstream. However, due to the non-linear physics of the flow problem, StreamFlow cannot directly calculate an accurate inverse solution that predicts upstream conditions from a downstream boundary, as needed to design a StreamVane.
To solve this problem, this research created an efficient estimation process that combines the StreamFlow model with a Markov chain Monte Carlo (MCMC) parameter estimation tool to estimate upstream flow profiles that will produce the desired downstream profiles. The process is designated the StreamFlow-MC2 Estimation Process. It was tested on four fundamental types of swirl distortion: the desired downstream distortion was input into the estimation process to predict an upstream profile that would create it. Using the estimated design profiles, 6-inch-diameter StreamVanes were designed and then wind-tunnel tested to verify the downstream distortion. Analysis and experimental results show that, using this method, the upstream distortion needed to create the desired distortion was estimated with excellent accuracy. Based on those results, the StreamFlow-MC2 Estimation Process was validated.
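The MCMC step of such an inverse problem can be illustrated with a random-walk Metropolis sampler. Everything below is a hypothetical toy: `forward` is a stand-in linear model (the actual StreamFlow propagation model is far more complex), and `observed` and the noise level are invented for the sketch:

```python
import math
import random

def metropolis(log_post, x0, steps=4000, step=0.2, seed=7):
    """Random-walk Metropolis sampler: propose x' = x + N(0, step),
    accept with probability min(1, post(x') / post(x))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    out = []
    for _ in range(steps):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_post(cand)
        if math.log(rng.random()) < lp_cand - lp:
            x, lp = cand, lp_cand
        out.append(x)
    return out

# Toy stand-in for the forward flow model: downstream = 2 * upstream.
def forward(theta):
    return 2.0 * theta

observed = [2.9, 3.1, 3.0, 3.05, 2.95]   # noisy "downstream" measurements

def log_post(theta):
    # Gaussian likelihood with assumed noise sigma = 0.1, flat prior.
    return -sum((y - forward(theta)) ** 2 for y in observed) / (2 * 0.1 ** 2)

chain = metropolis(log_post, 0.0)
# Posterior mean after burn-in recovers the upstream parameter (~1.5).
estimate = sum(chain[1000:]) / len(chain[1000:])
```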
Master of Science
4

Kakarala, Avinash. « Hardware Implementation Of Conditional Motion Estimation In Video Coding ». Thesis, University of North Texas, 2011. https://digital.library.unt.edu/ark:/67531/metadc103341/.

Full text
Abstract:
This thesis presents the rate distortion analysis of conditional motion estimation, a process in which motion computation is restricted to only the active pixels in the video. We model active pixels as an independent and identically distributed Gaussian process and inactive pixels as a Gaussian-Markov process, and derive the rate distortion function based on conditional motion estimation. Rate-distortion curves for the conditional motion estimation scheme are also presented. In addition, this thesis presents the hardware implementation of a block-based motion estimation algorithm. Block matching algorithms are difficult to implement on an FPGA chip due to their complexity. We implement the 2D-logarithmic search algorithm to estimate the motion vectors for the image, using the sum of absolute differences (SAD) as the matching criterion. VHDL code for the motion estimation algorithm is verified using ISim and implemented using the Xilinx ISE Design tool. Synthesis results for the algorithm are also presented.
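The two ingredients of the search described above, the SAD matching criterion and the 2D-logarithmic search, can be sketched in plain Python (a software illustration of the algorithm, not the thesis's VHDL implementation):

```python
def sad(block_a, block_b):
    """Sum of absolute differences, the matching criterion named above."""
    return sum(abs(a - b)
               for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def get_block(frame, top, left, size):
    return [row[left:left + size] for row in frame[top:top + size]]

def log2d_search(ref, cur, top, left, size, search_range=7):
    """2D-logarithmic search: probe a '+' pattern around the current best
    offset; when the centre wins, halve the step; stop at step 1."""
    target = get_block(cur, top, left, size)

    def cost(dy, dx):
        y, x = top + dy, left + dx
        if y < 0 or x < 0 or y + size > len(ref) or x + size > len(ref[0]):
            return float('inf')
        return sad(get_block(ref, y, x, size), target)

    best, step = (0, 0), max(1, search_range // 2)
    while True:
        cy, cx = best
        cands = [(cy, cx), (cy - step, cx), (cy + step, cx),
                 (cy, cx - step), (cy, cx + step)]
        best = min(cands, key=lambda c: cost(*c))
        if best == (cy, cx):
            if step == 1:
                return best
            step //= 2

# A bright 4x4 patch sits at (5, 5) in the current frame and at (7, 6)
# in the reference, so the motion vector found should be (2, 1).
ref = [[100 if 7 <= r < 11 and 6 <= c < 10 else 0 for c in range(16)]
       for r in range(16)]
cur = [[100 if 5 <= r < 9 and 5 <= c < 9 else 0 for c in range(16)]
       for r in range(16)]
mv = log2d_search(ref, cur, 5, 5, 4)   # -> (2, 1)
```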
5

Zhao, Zhanlue. « Performance Appraisal of Estimation Algorithms and Application of Estimation Algorithms to Target Tracking ». ScholarWorks@UNO, 2006. http://scholarworks.uno.edu/td/394.

Full text
Abstract:
This dissertation consists of two parts. The first deals with the performance appraisal of estimation algorithms; the second focuses on the application of estimation algorithms to target tracking. Performance appraisal is crucial for understanding, developing and comparing various estimation algorithms. In particular, with the evolution of estimation theory and the increase in problem complexity, performance appraisal is becoming ever more challenging for engineers seeking comprehensive conclusions, and the existing theoretical results are inadequate for practical reference. The first part of this dissertation is dedicated to performance measures, which include local performance measures, global performance measures and a model distortion measure. The second part focuses on the application of recursive best linear unbiased estimation (BLUE), or linear minimum mean square error (LMMSE) estimation, to nonlinear measurement problems in target tracking. The Kalman filter has been the dominant basis for dynamic state filtering for several decades. Beyond the Kalman filter, a more fundamental basis for recursive best linear unbiased filtering has been thoroughly investigated in a series of papers by Dr. X. Rong Li. Based on the so-called quasi-recursive best linear unbiased filtering technique, the Kalman filter's linear-Gaussian assumptions can be relaxed, so that a general linear filtering technique for nonlinear systems can be achieved. An approximate optimal BLUE filter is implemented for nonlinear measurements in target tracking and outperforms the existing method significantly in terms of accuracy, credibility and robustness.
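The linear MMSE estimator at the core of BLUE filtering reduces, in the scalar case, to a one-line formula; a minimal sketch assuming known first and second moments (a textbook illustration, not the dissertation's tracking filter):

```python
def lmmse(x_mean, y_mean, cov_xy, var_y, y):
    """Scalar linear MMSE estimate of x from observation y:
    x_hat = E[x] + (Cov(x, y) / Var(y)) * (y - E[y])."""
    return x_mean + (cov_xy / var_y) * (y - y_mean)

# With E[x] = E[y] = 0, Cov(x, y) = 1 and Var(y) = 2, observing y = 4
# gives x_hat = 2.0: the observation is shrunk toward the prior mean.
x_hat = lmmse(0.0, 0.0, 1.0, 2.0, 4.0)
```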
6

Bavikadi, Sathwika, et Venkata Bharath Botta. « Estimation and Correction of the Distortion in Forensic Image due to Rotation of the Photo Camera ». Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15965.

Full text
Abstract:
Images, in contrast to text, are an effective and natural communication medium for humans, owing to their immediacy and the ease of understanding their content. Shape recognition and pattern recognition are among the most important tasks in image processing. Crime scene photos should always be in focus, and a ruler should always be present, allowing investigators to resize the image so as to reconstruct the scene accurately. The camera must therefore be on a grounded platform such as a tripod. Rotation of the camera around the camera center introduces a distortion in the image which must be minimized, and the distorted image should be corrected using a transformation method. This task is quite challenging and essential, because any change in the images can cause investigators to misidentify an object. Forensic image processing can help the analyst extract information from low-quality, noisy or geometrically distorted images. Obviously, the desired information must be present in the image, although it may not be apparent or visible. Considering the challenges of complex forensic investigations, we understand the importance and sensitivity of the data in forensic images. The Hough transform (HT) is an effective technique for detecting and finding images within noise. It is a typical method for detecting or segmenting geometric objects in images; the straight-line detection case, in particular, has been ingeniously exploited in several applications. The main advantage of the HT technique is that it is tolerant of gaps in feature boundary descriptions and is relatively unaffected by image noise. The HT and its extensions constitute a popular and robust method for extracting analytic curves, and the transform has attracted a lot of research effort over the decades, motivated mainly by its noise immunity, its ability to deal with occlusion, and its expandability. Many variations of it have evolved.
They cover a whole spectrum of shape detection, from lines to irregular shapes. This master's thesis presents a contribution to the field of forensic image processing. Two different approaches, the Hough Line Transformation (HLT) and the Hough Circular Transformation (HCT), are followed to address this problem. Error estimation and validation are carried out with the help of the root-mean-square method, and the performance of the two methods is evaluated by comparing them. We present our solution as an application in the MATLAB environment, specifically designed to be used as a forensic tool for forensic images.
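The voting scheme behind the Hough line transform can be sketched with a plain accumulator over the normal parameterization rho = x*cos(theta) + y*sin(theta) (a minimal dependency-free illustration, not the thesis's MATLAB tool):

```python
import math

def hough_peak(points, theta_steps=180):
    """Vote in (theta, rho) space using the normal-form line equation
    rho = x*cos(theta) + y*sin(theta); return the strongest cell as a
    (theta index in degrees, rounded rho) pair."""
    acc = {}
    for x, y in points:
        for i in range(theta_steps):
            theta = math.pi * i / theta_steps
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[(i, rho)] = acc.get((i, rho), 0) + 1
    return max(acc, key=acc.get)

# Points on the horizontal line y = 7: the peak lands at theta = 90
# degrees (index 90) with rho = 7.
theta_idx, rho = hough_peak([(x, 7) for x in range(0, 200, 5)])
```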
7

ZANAJ, BLERINA. « Estimation of vital parameters through the usage of UWB radars ». Doctoral thesis, Università Politecnica delle Marche, 2013. http://hdl.handle.net/11566/242700.

Full text
Abstract:
The main objective of the thesis was the development of a set of algorithms for estimating vital parameters, such as respiration and heartbeat, using a UWB radar. The research is part of a larger project, called NIMURRA (Non Invasive Monitoring by Ultra Wide Band Radar of Respiratory Activity of people inside a spatial environment), which the Department of Information Engineering of the Università Politecnica delle Marche carried out on behalf of the Italian Space Agency (ASI). Research groups from other universities and several companies operating in the aerospace sector also contributed to the project. In the UWB radar-based system, the antenna transmits a sequence of ultra-short pulses toward the monitored subject, and the desired information is obtained through suitable processing of the received signal. The latter also contains reflections from the other objects in the simulated environment (clutter), which are accounted for through an adequate characterization of the system's impulse response. The algorithm must remove the clutter contributions, isolating the echo generated by the human chest. After clutter removal, the algorithm identifies the points of maximum signal energy. It is at these instants, in fact, that a frequency analysis is most conveniently carried out, based on the Fourier transform or on optimized versions of it. This method allows the respiration frequency to be identified by simply detecting the peak of the spectrum. The procedure was analyzed and implemented both through analytical modeling and through numerical simulation. Besides the respiration frequency, it is also desirable to reconstruct the displacement of the chest cavity; this result, too, can be obtained through the development of suitable signal processing algorithms.
Among the techniques usable for this purpose, correlating the received signal with a suitably chosen local signal proves particularly effective. Reconstructing the chest displacement obviously requires more processing time, although, as a by-product, it also yields an estimate of the respiration frequency. Estimating the heart rate is usually much more complex, since the harmonics due to the heartbeat are "masked" by the respiration frequency (and its multiples) and by intermodulation products. Within the thesis, suitable filtering operations were nevertheless developed, and their effectiveness was evaluated as a function of the relative values of the frequencies of interest. As mentioned above, a significant part of the research activity concerned the development of Matlab© programs for simulating and processing the signals in the scenarios of interest. Besides the more conventional scheme (normally adopted, for example, within the NIMURRA project), which uses an external (wall-mounted) radar, the case of a radar embedded in the clothes of the subject under measurement (and therefore moving with the chest cavity) was also studied, albeit in less detail. The essential characteristics of the problem remain unchanged, but in this case the useful signal is provided by the objects present in the environment. Finally, the case of a subject who moves, according to deterministic or random laws, making small displacements around the reference position was considered. In this further scenario, the main problem consists in removing the contribution to the reflected signal due to the movement, and to achieve this the signal processing algorithm was suitably modified.
The algorithms were also verified on real measurements carried out at the laboratories of the Università La Sapienza in Rome, showing, in most cases, excellent agreement with the results of conventional measurements (e.g., a spirometer for assessing the respiration frequency).
The purpose of this work is the construction of an algorithm that estimates vital parameters from the received signal of a UWB antenna. The research was part of a large project funded by the Italian Space Agency, involving research groups from several Italian universities besides the group at Università Politecnica delle Marche. The algorithm was conceived to measure the vital parameters of astronauts before, during and after their mission. The project was entitled NIMURRA (Non Invasive Monitoring by Ultra wide band Radar of Respiratory Activity of people inside a spatial environment). The antenna transmits toward the person and, due to its lack of directivity, reflected waves also arrive from the other objects. The signals reflected by the environment and the person are first modeled by the impulse response; performing a convolution with the reflected pulse creates the simulation matrix. The signal reflected from the human chest reaches the antenna attenuated by its propagation in air and distorted by the multiple reflections of the inner tissues of the human body. The receiving antenna also gathers contributions from the static objects; these contributions create the static clutter. The algorithm needs to eliminate them so that only the human chest echo remains. After elimination of the clutter, it searches for the maximum of the signal energy. Transforming the column that holds the energy maximum, we find that the harmonic with the highest peak is that of the breathing frequency. Another vital parameter of interest is the amplitude of the chest displacement. To estimate it, we need to reconstruct the chest movement from the received signal, which can be done by correlating the received signal with a chosen signal that we call the reference signal.
The breathing frequency can also be estimated by transforming the reconstructed chest movement, where the peak with the highest energy belongs to the breathing frequency. The heart rate was another parameter of interest, but its detection turns out to be difficult, as it is hidden by the breathing harmonics and intermodulation harmonics. The analytical study and modeling were transferred into Matlab code. The resulting algorithm estimates the breathing frequency and reconstructs the chest movement, and it is possible to choose among different scenarios for the measurements with the UWB radar. Two scenarios were developed besides the main one with which we started. In the first, the antenna is on the person's body and moves with the chest, transmitting toward the other objects around the person who holds it. In the last scenario, the person under observation performs small movements; the estimation of breathing then follows another path, in which we first estimate the movement of the person and subtract it, leaving only the motion of the chest during respiration. The estimation of breathing then follows the same algorithm developed for the case of a standing person with the antenna radiating toward him.
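The spectral-peak step, picking the breathing frequency as the strongest harmonic of the chest displacement, can be sketched with a plain DFT on a simulated signal (a toy illustration with invented numbers, not the project's Matlab code):

```python
import math

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the strongest DFT bin, skipping DC.
    A plain O(N^2) DFT keeps the sketch dependency-free."""
    n = len(signal)
    best_k, best_mag = 1, -1.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

# Simulated chest displacement: 0.3 Hz breathing sampled at 8 Hz.
fs = 8.0
chest = [math.sin(2 * math.pi * 0.3 * i / fs) for i in range(256)]
breath_hz = dominant_frequency(chest, fs)   # close to 0.3 Hz
```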
8

Toivonen, T. (Tuukka). « Efficient methods for video coding and processing ». Doctoral thesis, University of Oulu, 2008. http://urn.fi/urn:isbn:9789514286957.

Full text
Abstract:
Abstract This thesis presents several novel improvements to video coding algorithms, including block-based motion estimation, quantization selection, and video filtering. Most of the presented improvements are fully compatible with the standards in general use, including MPEG-1, MPEG-2, MPEG-4, H.261, H.263, and H.264. For quantization selection, new methods are developed based on the rate-distortion theory. The first method obtains locally optimal frame-level quantization parameter considering frame-wise dependencies. The method is applicable to generic optimization problems, including also motion estimation. The second method, aimed at real-time performance, heuristically modulates the quantization parameter in sequential frames improving significantly the rate-distortion performance. It also utilizes multiple reference frames when available, as in H.264. Finally, coding efficiency is improved by introducing a new matching criterion for motion estimation which can estimate the bit rate after transform coding more accurately, leading to better motion vectors. For fast motion estimation, several improvements on prior methods are proposed. First, fast matching, based on filtering and subsampling, is combined with a state-of-the-art search strategy to create a very quick and high-quality motion estimation method. The successive elimination algorithm (SEA) is also applied to the method and its performance is improved by deriving a new tighter lower bound and increasing it with a small constant, which eliminates a larger part of the candidate motion vectors, degrading quality only insignificantly. As an alternative, the multilevel SEA (MSEA) is applied to the H.264-compatible motion estimation utilizing efficiently the various available block sizes in the standard. Then, a new method is developed for refining the motion vector obtained from any fast and suboptimal motion estimation method. 
The resulting algorithm can be easily adjusted to achieve the desired tradeoff between computational complexity and rate-distortion performance. For refining integer motion vectors to half-pixel resolution, a very quick but accurate new method is developed, based on the mathematical properties of bilinear interpolation. Finally, novel number theoretic transforms are developed that are best suited for two-dimensional image filtering, including image restoration and enhancement, with a view to using the transforms also for very reliable motion estimation.
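The successive elimination idea rests on the bound |sum(ref) - sum(cand)| <= SAD(ref, cand): any candidate whose sum-difference already meets the best SAD found so far can be discarded without a full block match. A minimal sketch on flattened blocks (a generic illustration of SEA, not the thesis's tightened bound):

```python
def sad(a, b):
    """Sum of absolute differences between two flattened blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def sea_search(candidates, target):
    """Successive elimination: |sum(cand) - sum(target)| is a lower bound
    on SAD(cand, target), so candidates whose bound already meets or
    exceeds the best SAD so far are skipped without a full match."""
    t_sum = sum(target)
    best, best_cost, evaluated = None, float('inf'), 0
    for cand in candidates:
        if abs(sum(cand) - t_sum) >= best_cost:
            continue  # eliminated by the lower bound alone
        evaluated += 1
        cost = sad(cand, target)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost, evaluated

target = [1, 2, 3, 4]
cands = [[10, 10, 10, 10], [1, 2, 3, 4], [0, 2, 3, 4], [9, 9, 9, 9]]
best, cost, evaluated = sea_search(cands, target)
# best == [1, 2, 3, 4] with cost 0; only 2 of the 4 candidates needed
# a full SAD computation.
```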
9

Barr, Michael. « The Influence of the Projected Coordinate System on Animal Home Range Estimation Area ». Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5343.

Full text
Abstract:
Animal home range estimations are important for conservation planning and for protecting the habitat of threatened species. The accuracy of home range calculations is influenced by the map projection chosen in a geographic information system (GIS) for data analysis. Different methods of projection distort spatial data in different ways, so it is important to choose a projection that meets the needs of the research. The large number of projections in use today, and the lack of distortion comparisons between the various types, make selecting the most appropriate projection a difficult decision. The purpose of this study is to quantify and compare the amount of area distortion in animal home range estimations when projected into a number of projected coordinate systems, in order to understand how the chosen projection influences analysis. The objectives of this research are accomplished by analyzing the tracking data of four species from different regions in North and South America. The home range of each individual from the four species datasets is calculated using the Characteristic Hull Polygon method for home range estimation and then projected into eight projected coordinate systems of various scales and projection types, including equal area, conformal, equidistant and compromise projections. A continental Albers Equal Area projection is then used as a baseline for the calculation of a distortion measurement ratio and a magnitude-of-distortion statistic, which together measure the quantity of area distortion caused by a projection. Results show the amount of distortion associated with each type of projection method and how the distortion changes for a projection based on geographic location. These findings show how the choice of map projection can have a large influence on data analysis and illustrate the importance of using a projected coordinate system (PCS) appropriate to the needs of a given study.
Distorted perceptions can influence decision-making, so it is important to recognize how a map projection can influence the analysis and interpretation of spatial data.
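The area-comparison step can be sketched as follows. The thesis's exact statistic definitions are not given here, so the sketch assumes ratio = projected area / baseline (Albers) area and magnitude = |ratio - 1|, with polygon areas from the shoelace formula:

```python
def shoelace_area(coords):
    """Planar polygon area via the shoelace formula, on projected
    (x, y) coordinates given as a list of vertices in order."""
    total = 0.0
    for (x1, y1), (x2, y2) in zip(coords, coords[1:] + coords[:1]):
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

def area_distortion(projected_area, baseline_area):
    """Assumed form of the distortion statistics: the ratio of the home
    range's area in the test projection to its area in the baseline
    projection, and the magnitude of distortion |ratio - 1|."""
    ratio = projected_area / baseline_area
    return ratio, abs(ratio - 1.0)

square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
base = shoelace_area(square)                    # 4.0
ratio, magnitude = area_distortion(5.0, base)   # (1.25, 0.25)
```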
10

Li, Junlin. « Distributed estimation in resource-constrained wireless sensor networks ». Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26633.

Full text
Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Ghassan AlRegib; Committee Member: Elliot Moore; Committee Member: Monson H. Hayes; Committee Member: Paul A. Work; Committee Member: Ying Zhang. Part of the SMARTech Electronic Thesis and Dissertation Collection.
11

Tian, Yuandong. « Theory and Practice of Globally Optimal Deformation Estimation ». Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/269.

Full text
Abstract:
Nonrigid deformation modeling and estimation from images is a technically challenging task due to its nonlinear, nonconvex and high-dimensional nature. Traditional optimization procedures often rely on good initializations and give locally optimal solutions. On the other hand, learning-based methods that directly model the relationship between deformed images and their parameters either cannot handle complicated forms of mapping, or suffer from the Nyquist limit and the curse of dimensionality due to the high degrees of freedom in the deformation space. In particular, to achieve a worst-case guarantee of ε error for a deformation with d degrees of freedom, the sample complexity required is O(1/ε^d). In this thesis, a generative model for deformation is established and analyzed using a unified theoretical framework. Based on the framework, three algorithms, Data-Driven Descent and the Top-down and Bottom-up Hierarchical Models, are designed and constructed to solve the generative model. Under Lipschitz conditions that rule out unsolvable cases (e.g., deformation of a blank image), all algorithms achieve globally optimal solutions to the specific generative model. The sample complexity of these methods is substantially lower than that of learning-based approaches, which are agnostic to deformation modeling. To achieve global optimality guarantees with lower sample complexity, the structure embedded in the deformation model is exploited. In particular, Data-Driven Descent relates two deformed images that are far away in the parameter space by compositional structures of deformation, reducing the sample complexity to O(C^d log(1/ε)). The Top-down Hierarchical Model factorizes the local deformation into patches once the global deformation has been estimated approximately, further reducing the sample complexity to O(C^(d/(1+C₂)) log(1/ε)). Finally, the Bottom-up Hierarchical Model builds representations that are invariant to local deformation.
With these representations, the global deformation can be estimated independently of the local deformation, reducing the sample complexity to O((C/ε)^(d₀)) with d₀ ≪ d. From the analysis, this thesis shows the connections between approaches that are traditionally considered to be of very different natures. New theoretical conjectures on approaches such as Deep Learning are also provided. In practice, broad applications of the proposed approaches have been demonstrated for estimating water distortion, air turbulence, cloth deformation and human pose, with state-of-the-art results; some approaches even achieve near real-time performance. Finally, application-dependent physics-based models are built with good performance in document rectification and scene depth recovery in turbulent media.
12

D'Orlando, Marco. « Multimedia over wireless IP networks: distortion estimation and applications ». Doctoral thesis, Università degli studi di Trieste, 2008. http://hdl.handle.net/10077/2583.

Full text
Abstract:
2006/2007
This thesis deals with multimedia communication over unreliable and resource-constrained IP-based packet-switched networks. The focus is on estimating, evaluating and enhancing the quality of streaming media services, with particular regard to video services. The original contributions of this study mainly involve the development of three video distortion estimation techniques and the subsequent definition of some application scenarios used to demonstrate the benefits obtained by applying such algorithms. The material presented in this dissertation is the result of studies performed within the Telecommunication Group of the Department of Electronic Engineering at the University of Trieste during the Doctorate in Information Engineering. In recent years, multimedia communication over wired and wireless packet-based networks has exploded. Applications such as BitTorrent, music file sharing and multimedia podcasting are among the main sources of traffic on the Internet. Internet radio, for example, is now evolving into peer-to-peer television such as CoolStreaming, and web sites such as YouTube have made publishing videos on demand available to anyone owning a home video camera. Another challenge in the multimedia evolution is inside the house, where videos are distributed over local WiFi networks to many end devices around the home. More generally, we are witnessing an all-media-over-IP revolution, with radio, television, telephony and stored media all being delivered over IP wired and wireless networks. All these applications require extremely high bandwidth and often low delay, especially when interactive. Unfortunately, the Internet and wireless networks provide only limited support for multimedia applications; variations in network conditions can have considerable consequences for real-time multimedia applications and can lead to an unsatisfactory user experience.
In fact, multimedia applications are usually delay-sensitive, bandwidth-intensive and loss-tolerant. To overcome these limitations, efficient adaptation mechanisms must be derived to bridge the application requirements with the characteristics of the transport medium. Several approaches have been proposed for the robust transmission of multimedia packets; they range from source coding solutions to the addition of redundancy with forward error correction and retransmissions. Other techniques are based on developing efficient QoS architectures at the network layer or at the data link layer, where routers or specialized devices apply different forwarding behaviors to packets depending on the value of some field in the packet header. Using such a network architecture, video packets are assigned to classes in order to obtain different treatment by the network; in particular, packets assigned to the most privileged class will be lost with a very small probability, while packets belonging to the lowest-priority class will experience the traditional best-effort service. The key problem in this solution is how to optimally assign video packets to the network classes. One way is to proceed on a packet-by-packet basis, to exploit the highly non-uniform distortion impact of compressed video. Working on the distortion impact of each individual video packet has been shown in recent years to deliver better performance than relying on the average error sensitivity of each bitstream element. The distortion impact of a video packet can be expressed as the distortion that its loss would introduce at the receiver, taking into account the effects of both error concealment and error propagation due to temporal prediction.
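Packet-by-packet assignment to priority classes using the distortion impact can be sketched as a simple rank-and-split policy (an illustrative scheme with invented field names, not the thesis's actual assignment algorithm):

```python
def assign_priority_classes(packets, n_classes):
    """Rank packets by estimated distortion impact (highest first) and
    split the ranking evenly across priority classes; class 0 is the
    most protected, the last class gets best-effort treatment."""
    ranked = sorted(packets, key=lambda p: p["distortion"], reverse=True)
    per_class = -(-len(ranked) // n_classes)  # ceiling division
    return [(pkt["id"], i // per_class) for i, pkt in enumerate(ranked)]

packets = [{"id": 0, "distortion": 5.0}, {"id": 1, "distortion": 1.0},
           {"id": 2, "distortion": 9.0}, {"id": 3, "distortion": 3.0}]
classes = assign_priority_classes(packets, 2)
# -> [(2, 0), (0, 0), (3, 1), (1, 1)]: the two highest-impact packets
# land in the protected class.
```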
The estimation algorithms proposed in this dissertation accurately reproduce the distortion envelope resulting from multiple losses on the network, and their computational complexity is negligible with respect to that of the techniques proposed in the literature. Several tests are run to validate the distortion estimation algorithms and to measure the influence of the main encoder and decoder settings. Different application scenarios are described and compared to demonstrate the benefits obtained using the developed algorithms. The packet distortion impact is inserted in each video packet and transmitted over the network, where specialized agents manage the video packets using the distortion information. In particular, the internal structure of the agents is modified to allow video packet prioritization driven primarily by the distortion impact estimated by the transmitter. The results show that, in each scenario, a significant improvement can be obtained with respect to traditional transmission policies. The thesis is organized in two parts. The first provides the background material and lays the basis for the arguments that follow, while the second is dedicated to the original results obtained during the research activity. In the first part, the first chapter gives an introduction to the principles of, and challenges in, multimedia transmission over packet networks. The most recent advances in video compression technologies are detailed in the second chapter, focusing in particular on aspects that involve resilience to packet loss. The third chapter deals with the main techniques adopted to protect the multimedia flow and mitigate the packet corruption due to channel failures. The fourth chapter introduces the more recent advances in network-adaptive media transport, detailing the techniques that prioritize the video packet flow.
The fifth chapter reviews the existing distortion estimation techniques in the literature, focusing mainly on their limitations. The second part of the thesis describes the original results obtained in modelling the video distortion deriving from transmission over an error-prone network. In particular, the sixth chapter presents three new distortion estimation algorithms able to estimate the video quality, and reports the results of validation tests performed to measure their accuracy. The seventh chapter proposes different application scenarios where the developed algorithms may be used to enhance the video quality at the end-user side. Finally, the eighth chapter summarizes the thesis contributions, highlights the most important conclusions, and outlines some directions for future improvements. The intent of the work presented hereafter is to develop video distortion estimation algorithms able to predict the user quality resulting from losses on the network, as well as to provide the results of some useful applications able to enhance the user experience during a video streaming session.
XIX Ciclo
Styles APA, Harvard, Vancouver, ISO, etc.
13

Zadeh, Ramin Agha. « Performance control of distributed generation using digital estimation of signal parameters ». Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/47011/1/Ramin_Agha_Zadeh_Thesis.pdf.

Texte intégral
Résumé :
The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three have been accepted for publication and the other three are under review. The project is financially supported by an Australian Research Council (ARC) Discovery Grant and aims to propose strategies for the performance control of Distributed Generation (DG) systems using digital estimation of power system signal parameters. Distributed Generation has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The global warming issue calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or ac-ac converters. The interconnection of DG systems to the power system as a compensator or as a power source with high-quality performance is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak-source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronic switching devices, which inject high-frequency components in addition to the desired current.
Noise and harmonic distortion can also impair the performance of the control strategies. To mitigate the negative effects of high-frequency components, harmonics and noise, and thereby achieve satisfactory performance of DG systems, new methods of signal parameter estimation are proposed in this thesis. These methods are based on processing digital samples of power system signals. Thus, the scope of this thesis is to propose advanced techniques for the digital estimation of signal parameters, together with methods for generating DG reference currents from the estimates provided. An introduction to this research – including a description of the research problem, the literature review and an account of the research progress linking the research papers – is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is one of the well-known, advanced techniques used for the estimation of power system frequency. Chapter 2 presents an in-depth analysis of the PM technique to reveal its strengths and drawbacks, followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The other, novel techniques proposed in this thesis are compared with the PM technique studied comprehensively in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3. The algorithm estimates signal parameters such as amplitude, frequency and phase angle in online mode. The Kalman filter is modified to operate on the output of a Finite Impulse Response (FIR) filter designed as a plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The estimated frequency is passed to the Kalman filter to build the transition matrices.
The initial settings for the modified Kalman filter are obtained through a trial-and-error exercise. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is likewise modified to operate on the output of the same FIR filter explained above. In this case, however, the frequency estimation unit, unlike the one proposed in Chapter 3, is not segregated and interacts with the Kalman filter: the estimated frequency is passed to the Kalman filter, while the other parameters, such as the amplitudes and phase angles estimated by the Kalman filter, are fed back to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements in which the noise level in the sample vector is reduced. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The Kalman filter used has a structure similar to a basic Kalman filter, except that the initial settings are computed through extensive mathematical work based on the matrix arrangement utilized. Chapter 6 proposes another Kalman-filtering-based algorithm similar to that of Chapter 3; however, this time the initial settings required for better performance of the modified Kalman filter are calculated instead of being guessed through trial-and-error exercises. The simulation results for the estimated signal parameters are enhanced thanks to the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, some large, sudden changes in the parameters of the signal at these critical transients are not tracked well by Kalman filtering, whereas the proposed LES technique is found to be much faster in tracking them.
Therefore, an appropriate combination of the LES technique and modified Kalman filtering is proposed in Chapter 6. This time, the ability of the proposed algorithm is also verified on real data obtained from a prototype test object. Chapter 7 proposes a further algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6; however, an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied, and the ability of the proposed algorithm is again verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 in a scheme for interconnecting a DG system to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme. Several simulation tests provided in this chapter show that the proposed scheme handles source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and the synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents voltage distortion at the point of common coupling in weak-source cases, balances the source currents, and brings the supply-side power factor to a desired value.
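As a rough illustration of the estimation problem addressed in Chapters 3 to 7 (not the thesis's modified filter, which adds FIR pre-filtering and calculated initial settings), a basic linear Kalman filter can track the amplitude and phase angle of a sinusoidal power-system signal once the frequency is assumed known; the function name and the noise parameters `q` and `r` are illustrative guesses:

```python
import math

def estimate_amplitude_phase(samples, freq_hz, fs, q=1e-6, r=1e-2):
    """Linear Kalman filter for y_k = a*cos(w k/fs) - b*sin(w k/fs),
    with state [a, b] = [A cos(phi), A sin(phi)] modelled as constant."""
    w = 2.0 * math.pi * freq_hz
    x = [0.0, 0.0]                    # state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]      # state covariance
    for k, y in enumerate(samples):
        h = [math.cos(w * k / fs), -math.sin(w * k / fs)]
        # predict: state modelled as constant, inflate covariance by q
        P[0][0] += q
        P[1][1] += q
        # gain K = P h^T / (h P h^T + r)
        ph0 = P[0][0] * h[0] + P[0][1] * h[1]
        ph1 = P[1][0] * h[0] + P[1][1] * h[1]
        s = h[0] * ph0 + h[1] * ph1 + r
        k0, k1 = ph0 / s, ph1 / s
        # update the state with the innovation
        e = y - (h[0] * x[0] + h[1] * x[1])
        x[0] += k0 * e
        x[1] += k1 * e
        # covariance update P := (I - K h) P
        hp0 = h[0] * P[0][0] + h[1] * P[1][0]
        hp1 = h[0] * P[0][1] + h[1] * P[1][1]
        P = [[P[0][0] - k0 * hp0, P[0][1] - k0 * hp1],
             [P[1][0] - k1 * hp0, P[1][1] - k1 * hp1]]
    return math.hypot(x[0], x[1]), math.atan2(x[1], x[0])
```

With the frequency supplied by a separate estimation unit, as in Chapter 3, the state [A cos(phi), A sin(phi)] stays linear in the measurement, so no extended or unscented machinery is needed.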
Styles APA, Harvard, Vancouver, ISO, etc.
14

Silva, Leandro Rodrigues Manso. « Inteligência computacional aplicada à modelagem de cargas não-lineares e estimação de contribuição harmônica ». Universidade Federal de Juiz de Fora (UFJF), 2012. https://repositorio.ufjf.br/jspui/handle/ufjf/4156.

Texte intégral
Résumé :
Made available in DSpace on 2017-04-24. Previous issue date: 2012-02-29
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Harmonic distortion, among other forms of pollution of electric power systems, is an important issue for electric utilities. In fact, the increased use of nonlinear devices in industry has resulted in a direct increase of harmonic distortion in industrial power grids in recent years. Thus, the modeling of these loads and the understanding of their interactions with the system have become of great importance, and computational techniques have emerged as suitable tools to deal with these requirements. In this context, this work describes a methodology based on Computational Intelligence (Artificial Neural Networks (ANNs) and Fuzzy Logic (FL)) for modeling nonlinear loads present in electric power systems, as well as for estimating their contribution to the harmonic distortion. The main advantage of this technique is that only the waveforms of the voltages and currents at the point of common coupling must be measured, and it can be applied to model both single-phase and three-phase loads.
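The ANN/fuzzy models themselves are not reproduced here, but the starting point named in the abstract, voltage and current waveforms measured at the point of common coupling, feeds standard harmonic analysis. A minimal DFT-based sketch (assuming the record spans an integer number of fundamental cycles and that all harmonics of interest lie below the Nyquist frequency):

```python
import cmath
import math

def harmonic_magnitudes(samples, fs, f0, max_order):
    """Magnitudes of the fundamental and its harmonics via single-bin
    DFTs; assumes an integer number of cycles of f0 in the record."""
    n = len(samples)
    mags = {}
    for h in range(1, max_order + 1):
        k = round(h * f0 * n / fs)  # DFT bin of the h-th harmonic
        x = sum(samples[i] * cmath.exp(-2j * math.pi * k * i / n)
                for i in range(n))
        mags[h] = 2.0 * abs(x) / n
    return mags

def thd(samples, fs, f0, max_order=13):
    """Total harmonic distortion relative to the fundamental."""
    m = harmonic_magnitudes(samples, fs, f0, max_order)
    return math.sqrt(sum(m[h] ** 2 for h in range(2, max_order + 1))) / m[1]
```

For example, a 50 Hz waveform carrying a 20 % third harmonic, sampled at 1 kHz over ten full cycles, yields a THD of 0.2.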
Styles APA, Harvard, Vancouver, ISO, etc.
15

Werneck, Nicolau Leal. « Analise da distorção musical de guitarras eletricas ». [s.n.], 2007. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259107.

Texte intégral
Résumé :
Advisor: Furio Damiani
Dissertation (master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Made available in DSpace on 2018-08-11. Previous issue date: 2007
Abstract: Several problems in the analysis of musical signals can benefit from more detailed knowledge of the structure of the signals generated by different instruments. Prominent among them is signal compression based on structured audio, where the encoder determines, from a signal, parameters to reproduce it with a synthesizer inspired by physical models of the instruments. To perform this kind of analysis and synthesis, the physical characteristics of the instruments and of the signals they produce must be known. This knowledge is also useful in the development of instruments and other equipment used by musicians to obtain a desired timbre. This dissertation presents experiments performed with an electric guitar to reveal the nonlinear dynamics of its strings and its associated linear filter, a comparison of recorded signals with the results expected from mathematical models of the waveform, a proposal for a potential technique for measuring the parameters of a mathematical model of a musical distortion circuit, and a way of mapping a pair of these parameters onto a space of greater psychoacoustic significance.
Master's degree
Electronics, Microelectronics and Optoelectronics
Master in Electrical Engineering
Styles APA, Harvard, Vancouver, ISO, etc.
16

Novanda, Happy. « Monitoring of power quality indices and assessment of signal distortions in wind farms ». Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/monitoring-of-power-quality-indices-and-assessment-of-signal-distortions-in-wind-farms(403a470c-279a-4b00-94dc-eaa2507dc579).html.

Texte intégral
Résumé :
Power quality has become one of the major concerns in the power industry. It can be described as the ability of the electric power supply to maintain continuous operation of end-use equipment. Power quality problems are defined as deviations of the voltage or current waveforms from their ideal values. The expansion of wind power generation has raised concern about how it influences the voltage and current signals. The variable nature of wind energy and the requirements of wind power generation increase potential problems such as frequency deviations and harmonic distortion. In order to analyze and mitigate problems in wind power generation, it is important to monitor power quality in wind farms; therefore, more accurate and reliable parameter estimation methods suitable for wind power generation are needed. Three parameter estimation methods are proposed in this thesis to estimate the unknown parameters (the amplitude and phase angle of the fundamental and harmonic components, the DC component and the system frequency) during dynamic changes in a wind farm. In the first method, a self-tuning procedure is introduced into the least squares method to increase the immunity of the algorithm to noise. In the second method, a nonrecursive Newton-type algorithm is utilised to estimate the unknown parameters by obtaining the left pseudoinverse of the Jacobian matrix. In the last technique, the unscented transformation is used to replace the linearization procedure in obtaining the mean and covariance used in the Kalman filter method. All of the proposed methods have been tested rigorously using computer-simulated data and have shown their capability to track the unknown parameters under extreme distortions. The performance of the proposed methods has also been compared using real recorded data from several wind farms in Europe and has demonstrated high correlation. This comparison verified that the UKF requires the shortest processing time and the STLS the longest.
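For context on the least-squares family mentioned above, here is a plain ordinary-least-squares fit (not the self-tuning variant proposed in the thesis) of the fundamental amplitude, phase angle and DC component, with the frequency assumed known:

```python
import math

def ls_fundamental(samples, fs, f0):
    """Least-squares fit of y ~ a*cos(wt) + b*sin(wt) + c; returns
    (amplitude, phase, dc) so that y = amplitude*cos(wt + phase) + dc."""
    w = 2.0 * math.pi * f0
    rows = [[math.cos(w * i / fs), math.sin(w * i / fs), 1.0]
            for i in range(len(samples))]
    # normal equations: (A^T A) p = A^T y
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
           for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(rows, samples)) for i in range(3)]
    m = [ata[i] + [aty[i]] for i in range(3)]
    # Gauss-Jordan elimination with partial pivoting on the 3x3 system
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * u for v, u in zip(m[r], m[col])]
    a, b, c = (m[i][3] / m[i][i] for i in range(3))
    return math.hypot(a, b), math.atan2(-b, a), c
```

Harmonic terms can be appended as extra columns of the model; the thesis's self-tuning procedure, not shown here, further increases immunity to noise.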
Styles APA, Harvard, Vancouver, ISO, etc.
17

Elmansy, Dalia F. « Computational Methods to Characterize the Etiology of Complex Diseases at Multiple Levels ». Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1583416431321447.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
18

Popa, Emil Horia. « Algorithms for handling arbitrary lineshape distortions in Magnetic Resonance Spectroscopy and Spectroscopic Imaging ». Phd thesis, Université Claude Bernard - Lyon I, 2010. http://tel.archives-ouvertes.fr/tel-00716176.

Texte intégral
Résumé :
Magnetic Resonance Spectroscopy (MRS) and Spectroscopic Imaging (MRSI) play an emerging role in clinical assessment, providing in vivo estimation of disease markers while being non-invasive and applicable to a large range of tissues. However, static magnetic field inhomogeneity, as well as eddy currents in the acquisition hardware, cause important distortions in the lineshape of acquired NMR spectra, possibly inducing significant bias in the estimation of metabolite concentrations. In the post-acquisition stage, this is classically handled through pre-processing methods that correct the dataset lineshape, or through the introduction of more complex analytical model functions. This thesis concentrates on handling arbitrary lineshape distortions in the case of quantitation methods that use a metabolite basis-set as prior knowledge. Current approaches are assessed, and a novel approach is proposed, based on adapting the basis-set lineshape to the measured signal. Assuming a common lineshape for all spectral components, a new method is derived and implemented, featuring time-domain local regression (LOWESS) filtering. Validation is performed on synthetic signals as well as on in vitro phantom data. Finally, a completely new approach to MRS quantitation is proposed, centred on the use of the compact spectral support of the estimated common lineshape. The new metabolite estimators are tested alone, as well as coupled with the more common residual-sum-of-squares MLE estimator, significantly reducing quantitation bias for high signal-to-noise-ratio data.
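The time-domain local regression (LOWESS) filtering mentioned above can be illustrated with a textbook local linear regression using tricube weights; this generic sketch (the window fraction `frac` is an arbitrary choice) is not the thesis's implementation:

```python
def lowess_smooth(y, frac=0.3):
    """Plain local linear regression (LOWESS without robustness
    iterations): each point is refit from its nearest neighbours
    with tricube distance weights."""
    n = len(y)
    k = max(2, int(frac * n))
    out = []
    for i in range(n):
        lo = min(max(0, i - k // 2), n - k)   # window of k nearest indices
        idx = range(lo, lo + k)
        d = max(abs(j - i) for j in idx) or 1
        w = [(1 - (abs(j - i) / d) ** 3) ** 3 for j in idx]
        # weighted linear fit index -> value over the window
        sw = sum(w)
        sx = sum(wi * j for wi, j in zip(w, idx))
        sy = sum(wi * y[j] for wi, j in zip(w, idx))
        sxx = sum(wi * j * j for wi, j in zip(w, idx))
        sxy = sum(wi * j * y[j] for wi, j in zip(w, idx))
        den = sw * sxx - sx * sx
        if abs(den) < 1e-12:
            out.append(sy / sw)               # degenerate window: weighted mean
        else:
            slope = (sw * sxy - sx * sy) / den
            inter = (sy - slope * sx) / sw
            out.append(inter + slope * i)
    return out
```

Linear trends pass through unchanged while sharp noise is attenuated, the property that makes local regression attractive for estimating a smooth common lineshape.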
Styles APA, Harvard, Vancouver, ISO, etc.
19

Bowman, Denise Michelle. « Estimating mechanical frequency tuning properties of the cochlea with f¦1- and f¦2-sweep distortion-product otoacoustic emission measurements in normal hearing human adults ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0001/NQ34658.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
20

Kadlčík, Libor. « Implementace rekonstrukčních metod pro čtení čárového kódu ». Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-220266.

Texte intégral
Résumé :
A bar code stores information as a series of bars and gaps of various widths, and can therefore be considered an example of a bilevel (square-wave) signal. Magnetic bar codes are created by applying a slightly ferromagnetic material to a substrate. Sensing is done by a reading oscillator whose frequency is modulated by the presence of the ferromagnetic material; the signal from the oscillator is then subjected to frequency demodulation. Due to the temperature drift of the reading oscillator, the demodulated signal is accompanied by DC drift. A method for removing this drift is introduced, along with drift-insensitive detection of the presence of a bar code. Reading bar codes is further complicated by convolutional distortion, which results from the spatially dispersed sensitivity of the sensor. The effect of convolutional distortion is analogous to low-pass filtering: edges are smoothed and overlapped, making their detection difficult. The characteristics of the convolutional distortion can be summarized in a point-spread function (PSF). In the case of magnetic bar codes, the shape of the PSF can be known in advance, but not its width or DC transfer; methods for estimating these parameters are discussed. The signal needs to be reconstructed into its original bilevel form before decoding can take place. Variational methods provide an effective way to do this: their core idea is to reformulate reconstruction as an optimization problem of functional minimization. The functional can be extended with other functionals (regularizations) that considerably improve the results of the reconstruction. The principle of variational methods is shown, including examples of the use of various regularizations. All algorithms and methods (including the frequency demodulation of the signal from the reading oscillator) are digital.
They are implemented as a program for a microcontroller from the PIC32 family, whose high computing power allows even blind deconvolution (where the actual PSF must also be found) to finish in a few seconds. The microcontroller is part of a magnetic bar code reader whose hardware allows the read information to be transferred to a personal computer via the PS/2 interface or USB (by emulating key presses on a virtual keyboard), or shown on a display.
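The convolutional distortion described above can be illustrated with a short numerical sketch: a bilevel signal blurred by a Gaussian PSF loses its sharp edges, and a Wiener-style frequency-domain deconvolution (a generic stand-in for the variational reconstruction developed in the thesis; the bar pattern, PSF width, and regularization constant are all illustrative) recovers a decodable bilevel form:

```python
import numpy as np

def gaussian_psf(width, sigma):
    x = np.arange(width) - width // 2
    psf = np.exp(-x**2 / (2 * sigma**2))
    return psf / psf.sum()                      # unit DC gain

# Bilevel (square-wave) bar-code signal: bars = 1, gaps = 0.
signal = np.repeat([0, 1, 0, 1, 1, 0, 1, 0], 20).astype(float)

# Convolutional distortion: spatially dispersed sensor sensitivity acts
# as a low-pass filter, smoothing and overlapping the edges.
psf = gaussian_psf(21, sigma=3.0)
blurred = np.convolve(signal, psf, mode="same")

def wiener_deconvolve(observed, psf, k=1e-3):
    """Frequency-domain deconvolution with a small regularisation k."""
    n = len(observed)
    H = np.fft.fft(psf, n)
    G = np.conj(H) / (np.abs(H) ** 2 + k)       # regularised inverse filter
    restored = np.real(np.fft.ifft(np.fft.fft(observed) * G))
    return np.roll(restored, len(psf) // 2)     # undo the centred-PSF shift

# Reconstruct and re-binarise into the original bilevel form.
recovered = (wiener_deconvolve(blurred, psf) > 0.5).astype(float)
```

With no noise and a known PSF this simple inverse filter suffices; the regularizations discussed in the thesis matter once the PSF is only estimated and the signal is noisy.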
Styles APA, Harvard, Vancouver, ISO, etc.
21

Hassini, Sami. « Qualification multi-critères des gammes d'usinage : application aux pièces de structure aéronautique en alliage Airware® ». Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22587/document.

Texte intégral
Résumé :
L’optimisation des gammes d'usinage n’est pas aisée, car elle souffre de deux lacunes importantes. La première est axée sur l'adaptabilité des gammes existantes aux moyens actuels de production et à leurs évolutions au fil des années pour répondre aux évolutions technologiques. Le second point concerne l’absence de prise en compte du comportement mécanique de la pièce durant l'usinage dans l'élaboration de la gamme. Ces travaux de thèse abordent ces problématiques dans le cadre du projet FUI OFELIA. Ils étudient, dans un premier temps, l'influence de la gamme d’usinage sur la déformation de la pièce. L'objectif est de pouvoir prédire le comportement mécanique de la pièce pour identifier les gammes minimisant les déformations. Le second point s'intéresse à l’évaluation multicritères des gammes de fabrication. Les critères retenus prennent en compte la déformation de la pièce, la productivité à travers une estimation rapide des temps d'usinage et la recyclabilité des copeaux obtenus lors de l'usinage. D’autre part, nous proposons un modèle géométrique des états intermédiaires de la pièce durant l’usinage pour à la fois évaluer les gammes de fabrication et conduire les calculs de simulation de la déformation de la pièce durant l’usinage.
The optimization of machining sequences is not easy because it suffers from two major shortcomings. The first concerns the adaptability of existing machining sequences to current production facilities and to their evolution over the years in response to technological developments. The second concerns the lack of consideration of the mechanical behavior of the part during machining when the machining sequence is developed. This thesis addresses these issues within the FUI OFELIA project. It first studies the influence of the machining sequence on the deformation of the workpiece, the aim being to predict the mechanical behavior of the part in order to identify the sequences that minimize distortion. The second issue deals with multi-criteria evaluation of machining sequences. The criteria taken into account are the deformation of the workpiece, productivity (through a quick estimate of machining time), and the recyclability of the chips produced during machining. In addition, we propose a geometric model of the intermediate states of the workpiece during machining, used both to assess the machining sequences and to drive the simulation of workpiece deformation during machining.
Styles APA, Harvard, Vancouver, ISO, etc.
22

Tai, Wei-Cheng, et 戴瑋呈. « Bandwidth-Rate-Distortion Optimized Motion Estimation ». Thesis, 2008. http://ndltd.ncl.edu.tw/handle/54972195086507966336.

Texte intégral
Résumé :
Master's thesis
National Chiao Tung University
Department of Electronics Engineering
97
Motion estimation (ME) is the most computationally and memory-intensive component of an H.264 encoder. However, traditional ME algorithms focus on rate and distortion performance and do not take memory bandwidth into consideration, so rate-distortion performance is not optimized under a bandwidth constraint. In this thesis, we propose a bandwidth-rate-distortion (B-R-D) optimized ME algorithm to address this issue. First, we propose a B-R-D optimized modeling method that determines an appropriate search range (SR) to maximize rate-distortion efficiency while dynamically meeting the available bandwidth. We then propose two methods to enhance its performance: skip-mode detection with a content-aware scheme, and an SR boundary prediction method. The skip-mode detection saves the most memory bandwidth, giving more complex macroblocks (MBs) a larger bandwidth share for better quality, while the SR boundary prediction method determines a feasible SR boundary for SR refinement. Compared with the reference software [3], for low-motion sequences the proposed B-R-D design improves bandwidth saving by up to 70% with almost the same bit rate and PSNR under an average search range of 16, and by up to 84% with negligible PSNR degradation when the skip design is added; for high-motion sequences, our design saves up to 13% average bit rate while increasing average PSNR by up to 0.1 dB under a low bandwidth constraint. In summary, our design achieves the same, and sometimes better, performance under various bandwidth constraints and is thus suitable for improving the ME process.
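The core trade-off can be sketched simply: the memory traffic of full search grows quadratically with the search range, so one can pick the largest range whose search-window traffic fits a per-macroblock budget. The quadratic window model and the numbers below are illustrative, not the thesis's actual B-R-D model:

```python
MB_SIZE = 16  # H.264 macroblock width/height in pixels

def window_bytes(sr, mb=MB_SIZE):
    """Bytes fetched for one full search window (1 byte per luma pixel)."""
    side = 2 * sr + mb
    return side * side

def pick_search_range(budget_bytes, candidates=(4, 8, 16, 32, 64)):
    """Largest SR within the bandwidth budget (smallest SR as fallback)."""
    feasible = [sr for sr in candidates if window_bytes(sr) <= budget_bytes]
    return max(feasible) if feasible else min(candidates)
```

A macroblock detected as skip spends almost nothing, freeing its budget for more complex macroblocks, which mirrors the content-aware skip-mode idea described above.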
Styles APA, Harvard, Vancouver, ISO, etc.
23

Chen, Chien-Sheng, et 陳建昇. « Partial Distortion Based Computation-Aware Motion Estimation ». Thesis, 2011. http://ndltd.ncl.edu.tw/handle/72212258995089613494.

Texte intégral
Résumé :
Master's thesis
I-Shou University
Department of Information Engineering (Master's Program)
99
In recent years, the development of wireless networks and multimedia application services has made it possible to watch digital videos on hand-held devices. Video compression is an important technique for digital video transmission and storage, and motion estimation (ME) is its most important part. Traditional motion estimation algorithms were designed with "quality" rather than "computation" in mind. For hand-held devices, the computational load of motion estimation can be excessive even with fast algorithms, so traditional ME algorithms are not suitable for such devices. This thesis considers both quality and computation and provides a better ME method. Experimental results indicate that the proposed method achieves better performance than the other methods for hand-held devices.
Styles APA, Harvard, Vancouver, ISO, etc.
24

Lin, Tai-Chiun, et 林泰群. « Threshold Partial Distortion Search Algorithm for Motion Estimation ». Thesis, 2004. http://ndltd.ncl.edu.tw/handle/58811629980763606682.

Texte intégral
Résumé :
Master's thesis
Tatung University
Department of Electrical Engineering
92
With the progress of science and technology, the growth of networks, and the deployment of HDTV, the demand for audio and video is steadily increasing, and video coding technology is the means of meeting it. Motion estimation, which plays an important role in video coding, removes the temporal redundancy between video frames, and the quality of the coded video depends heavily on how good the motion estimation algorithm is. Among motion estimation algorithms, the full search algorithm is the simplest and most direct method: it is optimal but computationally very expensive. In this thesis, we propose a new motion estimation algorithm, called the threshold partial distortion search (TPDS) algorithm, whose performance is comparable to the full search algorithm but with lower computational complexity. Finally, we also propose a simple architecture to implement our algorithm.
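The partial distortion idea underlying such algorithms can be sketched as follows: the SAD of a candidate block is accumulated row by row and the candidate is abandoned as soon as the partial sum already exceeds the best match found so far. This is a generic PDS sketch, not the thesis's specific threshold rule; block size and search range are illustrative:

```python
import numpy as np

def partial_sad(block, cand, best_so_far):
    """Row-wise partial SAD with early termination: stop as soon as the
    accumulated distortion already exceeds the best candidate so far."""
    sad = 0
    for row in range(block.shape[0]):
        sad += int(np.abs(block[row].astype(int) - cand[row].astype(int)).sum())
        if sad >= best_so_far:          # impossible candidate: abandon early
            return None
    return sad

def pds_search(cur, ref, bx, by, n=8, sr=4):
    """Full-search motion estimation accelerated by partial distortion."""
    block = cur[by:by + n, bx:bx + n]
    best, best_mv = float("inf"), (0, 0)
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
                continue
            sad = partial_sad(block, ref[y:y + n, x:x + n], best)
            if sad is not None and sad < best:
                best, best_mv = sad, (dx, dy)
    return best_mv, best
```

The result is identical to full search (the scheme is lossless); only the number of pixel comparisons shrinks.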
Styles APA, Harvard, Vancouver, ISO, etc.
25

Po, Yi Shih, et 施伯宜. « Rate-Distortion Motion Estimation Algorithms Based on Kalman Filtering ». Thesis, 2003. http://ndltd.ncl.edu.tw/handle/37744605252264267167.

Texte intégral
Résumé :
Master's thesis
I-Shou University
Department of Information Engineering
91
Motion estimation and compensation are widely used in video coding systems. To find motion vectors (MVs) that lead to high compression, most conventional motion estimation approaches use a source distortion measure, such as the mean-square error (MSE) or mean absolute error (MAE), as the search criterion. The resulting MV is used to generate a motion-compensated prediction block and the motion-compensated prediction difference (the residue block). In low and very low bit-rate video coding, the bits spent on motion vectors form a larger share of the total bit rate, so joint rate-distortion (R-D) optimal motion estimation has been developed to trade off MV coding against residue coding. We present a new algorithm that uses a Kalman filter to enhance the performance of conventional R-D motion estimation at relatively low computational cost, refining incorrect or inaccurate motion estimates to higher precision. We first obtain a measurement of a block's motion vector using an existing R-D motion estimation scheme, then generate a predicted motion vector by exploiting the inter-block correlation of neighboring blocks. From these two pieces of motion information, a simple one-dimensional motion model is developed, and a Kalman filter is applied to obtain the optimal estimate of the motion vector. Simulation results show that the proposed technique efficiently improves R-D motion estimation performance.
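The blending step can be sketched with a scalar Kalman update: the neighbor-predicted MV component plays the role of the prediction and the R-D search result the role of the measurement, weighted by their variances. The noise variances q and r are hypothetical tuning values, not the thesis's model parameters:

```python
def kalman_mv_update(mv_pred, p_pred, mv_meas, q=1.0, r=2.0):
    """One predict/update step of a 1-D Kalman filter.

    mv_pred -- predicted MV component from inter-block correlation
    p_pred  -- prior estimate variance
    mv_meas -- measured MV component from R-D motion estimation
    q, r    -- process and measurement noise variances (illustrative)
    """
    p = p_pred + q                               # predict: variance grows
    k = p / (p + r)                              # Kalman gain in [0, 1)
    mv_est = mv_pred + k * (mv_meas - mv_pred)   # blend toward measurement
    p_est = (1.0 - k) * p                        # posterior variance shrinks
    return mv_est, p_est
```

With r much larger than q the filter trusts the neighbor prediction; with r much smaller, it follows the measured vector almost exactly.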
Styles APA, Harvard, Vancouver, ISO, etc.
26

吳文棋. « Adaptive motion estimation with partial distortion search for video coding ». Thesis, 2008. http://ndltd.ncl.edu.tw/handle/83977466635137439205.

Texte intégral
Résumé :
Master's thesis
National Changhua University of Education
Department of Electronic Engineering
96
With the advances of digital technologies and the proliferation of communication networks, multimedia content consisting of audio and video recordings continues to grow rapidly. Multimedia data generally takes huge storage space and requires a substantial amount of channel bandwidth to transmit. Many video coding standards exploit motion estimation (ME) to compress video data by removing inter-frame redundancy. This thesis presents a new motion estimation algorithm, called adaptive motion estimation with partial distortion search (AMEPDS). AMEPDS exploits information gathered from the previous frame to derive a correlation parameter (CP) and uses it to classify the blocks of the current frame into potentially dependent and potentially independent blocks. It then applies different motion estimation methods and adaptive search areas to the different block types to achieve better estimation accuracy and lower computational complexity. Early search termination is also introduced to further speed up motion estimation. Experimental results show that AMEPDS achieves a speedup of 15.58 to 150.5 times over the traditional full search algorithm while maintaining similar visual quality.
Styles APA, Harvard, Vancouver, ISO, etc.
27

Chen-YaoYeh et 葉鎮僥. « Depth Map Coding Based on Distortion Estimation of Synthesized View ». Thesis, 2014. http://ndltd.ncl.edu.tw/handle/82536959461884920139.

Texte intégral
Résumé :
Master's thesis
National Cheng Kung University
Institute of Computer and Communication Engineering
102
3D video has become increasingly popular in recent years. The multi-view video plus depth (MVD) format, assisted by the depth-image-based rendering (DIBR) technique, is an efficient representation of 3D video. Conventionally, the distortion used in the rate-distortion optimization (RDO) procedure of depth map coding is measured only as the sum of squared differences (SSD) of the depth map. But the depth map is merely supplementary data for view synthesis, so its quality is not highly correlated with the quality of the synthesized view; instead, the quality of the synthesized view should be taken into account in the RDO procedure. In this thesis, the relationship between synthesized view distortion and depth coding error is analyzed in the frequency domain, and an efficient depth map coding scheme based on a new distortion metric is proposed: in the RDO procedure, the depth distortion is replaced by the estimated synthesized view distortion. Simulation results show that the proposed distortion metric achieves about 45% BDBR savings for depth data compared to the conventional scheme.
Styles APA, Harvard, Vancouver, ISO, etc.
28

Guo, Bin, et 郭賓. « Rate-Distortion-Computation Optimized Search Range Decision for Motion Estimation ». Thesis, 2009. http://ndltd.ncl.edu.tw/handle/51016122940924866625.

Texte intégral
Résumé :
Master's thesis
National Dong Hwa University
Department of Electrical Engineering
97
In video coding standards, motion estimation is widely adopted to eliminate the temporal redundancy in video signals, at the cost of an intensive computational burden. To reduce this overhead, a dynamic search range decision algorithm is proposed in this thesis, based on a Lagrangian optimization approach that trades off computational complexity against rate-distortion performance. In addition, with the rapid spread of computation-resource-constrained devices, computation-aware design has recently received more and more attention. To address this issue, the thesis also proposes a computation-aware motion estimation algorithm that adaptively allocates computational resources to the motion estimation process according to the available computation budget. Experimental results show that the proposed algorithms distribute the proper computation resources to each macroblock (MB), saving up to 70% of encoding time compared to the full search motion estimation algorithm.
Styles APA, Harvard, Vancouver, ISO, etc.
29

Jackson, Edmund Stephen. « High ratio wavelet video compression through real-time rate-distortion estimation ». Thesis, 2003. http://hdl.handle.net/10413/9044.

Texte intégral
Résumé :
The success of the wavelet transform in the compression of still images has prompted an expanding effort to apply this transform to the compression of video. Most existing video compression methods incorporate techniques from still image compression, such techniques being abundant, well defined, and successful. This dissertation commences with a thorough review and comparison of wavelet still image compression techniques, followed by an examination of wavelet video compression techniques. Since the most effective current video compression systems are DCT-based, a comparison between these and the wavelet techniques is also given. Based on this review, the dissertation then presents a new, low-complexity wavelet video compression scheme. Noting from a complexity study that the generation of temporally decorrelated residual frames represents a significant computational burden, the scheme uses the simplest such technique: difference frames. In the case of local motion, these difference frames exhibit strong spatial clustering of significant coefficients. A simple spatial syntax is created by splitting the difference frame into tiles, and advantage of the spatial clustering is then taken by adaptive bit allocation between the tiles. This is the central idea of the method. In order to minimize the total distortion of the frame, the scheme uses the new ρ-domain rate-distortion estimation scheme with global numerical optimization to predict the optimal distribution of bits between tiles. Thereafter each tile is independently wavelet transformed and compressed using the SPIHT technique. Throughout the design process computational efficiency was the design imperative, leading to a real-time, software-only video compression scheme. The scheme is finally compared to both the current video compression standards and the leading wavelet schemes from the literature in terms of computational complexity and visual quality.
It is found that for local-motion scenes the proposed algorithm executes approximately an order of magnitude faster than these methods, with output of similar quality. The algorithm's moderate memory and computational requirements make it suitable for implementation in mobile and embedded devices.
Thesis (M.Sc.Eng.)-University of Natal, Durban, 2003.
Styles APA, Harvard, Vancouver, ISO, etc.
30

« Advances in end-to-end distortion estimation for error-resilient video networking ». UNIVERSITY OF CALIFORNIA, SANTA BARBARA, 2010. http://pqdtopen.proquest.com/#viewpdf?dispub=3371682.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
31

Chien, Yu-Ming, et 簡裕民. « Suboptimal Quantization Control with Estimation of Rate- Distortion Relationship for Motion Video Coding ». Thesis, 1996. http://ndltd.ncl.edu.tw/handle/34784852886020596267.

Texte intégral
Résumé :
Master's thesis
National Chiao Tung University
Institute of Electronics
84
For image and video coding under transmission constraints, the determination of quantization steps at the encoder plays an important role in controlling both the generated bit rate and the coding performance. In this thesis, we analyze the quantization information from several Video CD titles and compare it with known rate-control algorithms, and we investigate how to determine quantization steps for good coding quality by improving on a well-known quantizer control algorithm, the MPEG-2 Test Model. The rate-distortion curve of a macroblock fully characterizes the relationship between rate and distortion and can thus be used to determine good quantization steps; because computing the real R-D curves is computationally heavy, we develop a piecewise approximation model to predict them. To decide an appropriate quantization step, we first calculate a reference step from the buffer status to prevent buffer overflow and underflow. Next, a human-vision factor and the slope of the estimated rate-distortion curve are incorporated to adjust this quantization step into a suboptimal one. Simulation results show that we attain not only higher PSNR values but also better perceptual quality in the coded video.
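The buffer-feedback step can be sketched with a simplified TM5-style rule: the reference quantization scale grows with virtual-buffer fullness, steering the encoder away from overflow and underflow. The reaction parameter follows the MPEG-2 Test Model convention r = 2 · bit_rate / frame_rate; the constants are illustrative, not the thesis's tuned values:

```python
def reference_qscale(buffer_fullness, bit_rate, frame_rate, qmin=1, qmax=31):
    """Reference quantisation scale from virtual-buffer fullness (bits)."""
    reaction = 2.0 * bit_rate / frame_rate       # TM5-style reaction parameter
    q = buffer_fullness * 31.0 / reaction        # fuller buffer -> coarser step
    return int(min(max(round(q), qmin), qmax))
```

The thesis's refinement then perturbs this reference step using the estimated R-D slope and a human-vision factor rather than using it directly.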
Styles APA, Harvard, Vancouver, ISO, etc.
32

Wang, Chih-Cheng, et 王志晟. « A Hybrid Error Concealment Technique for H.264/AVC Using Boundary Distortion Estimation ». Thesis, 2010. http://ndltd.ncl.edu.tw/handle/78950349271165472578.

Texte intégral
Résumé :
Master's thesis
National Dong Hwa University
Department of Computer Science and Information Engineering
98
Packet loss occurs readily in error-prone network environments and usually causes serious distortion, especially for highly compressed video. The distortion not only damages a single frame but also propagates to successive frames. To mitigate it, error concealment techniques are adopted at the decoder to restore the lost video data. In this thesis, a hybrid error concealment technique for H.264/AVC based on boundary distortion estimation is proposed. It efficiently restores corrupted data by appropriately switching between a temporal and a spatial error concealment approach, both of which use the boundary distortion estimation procedure as their evaluation criterion. The recovery result with minimal boundary distortion is selected to restore the corrupted data by the proposed adaptive weight-based switching algorithm. The proposed hybrid technique therefore effectively improves recovery performance and outperforms other methods, especially when packets are lost in scene-change frames or high-motion regions. Experimental results confirm that the proposed technique effectively enhances recovery performance.
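The evaluation criterion can be sketched as boundary matching: a candidate recovery for a lost block is scored by how well its outer rows and columns agree with the correctly decoded pixels just outside the block. A real H.264/AVC concealer would generate the temporal and spatial candidates itself; here they are simply passed in, so this is an illustrative sketch rather than the thesis's full switching algorithm:

```python
import numpy as np

def boundary_distortion(frame, y, x, n, candidate):
    """Mean absolute difference along the available block boundaries."""
    diffs = []
    if y > 0:                                    # top neighbour row
        diffs.append(np.abs(candidate[0].astype(int) - frame[y - 1, x:x + n].astype(int)))
    if y + n < frame.shape[0]:                   # bottom neighbour row
        diffs.append(np.abs(candidate[-1].astype(int) - frame[y + n, x:x + n].astype(int)))
    if x > 0:                                    # left neighbour column
        diffs.append(np.abs(candidate[:, 0].astype(int) - frame[y:y + n, x - 1].astype(int)))
    if x + n < frame.shape[1]:                   # right neighbour column
        diffs.append(np.abs(candidate[:, -1].astype(int) - frame[y:y + n, x + n].astype(int)))
    return float(np.concatenate(diffs).mean())

def conceal(frame, y, x, n, candidates):
    """Pick the candidate (e.g. temporal vs. spatial) with the
    smallest boundary distortion."""
    return min(candidates, key=lambda c: boundary_distortion(frame, y, x, n, c))
```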
Styles APA, Harvard, Vancouver, ISO, etc.
33

Zeng, Zhi-Xiu, et 曾智修. « Predistorter Based on Frequency Domain Estimation for Compensation of Nonlinear Distortion in OFDM Systems ». Thesis, 2005. http://ndltd.ncl.edu.tw/handle/56160915341228780722.

Texte intégral
Résumé :
Master's thesis
National Chung Cheng University
Graduate Institute of Electrical Engineering
93
An OFDM signal is very sensitive to the nonlinear distortion caused mainly by the power amplifier (PA), as a result of its high peak-to-average power ratio (PAPR). Predistortion, an effective countermeasure against PA nonlinearity, is usually necessary to mitigate in-band distortion and spectral regrowth. In conventional schemes, a memoryless polynomial is generally used to model the PA characteristic or to design the predistorter, and the polynomial coefficients are solved by least-squares (LS) estimation or adaptive identification algorithms in the time domain. However, most time-domain schemes are not easy to implement owing to the practical difficulty of compensating the delay introduced by the transmission filter and the receiving filter in the feedback path. In this thesis we examine this issue in the frequency domain and propose five predistortion schemes based on two criteria: minimization of the squared error at the PA input (the PA-input-LS criterion) and minimization of the squared error between the predistorter input and the PA output (the PA-output-LS criterion). We also propose a simple method to cope with the delay effect. Finally, simulations applying the proposed schemes to a 64-QAM OFDM system are presented.
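The LS polynomial fitting at the heart of such schemes can be illustrated with a real-valued toy (the thesis works with complex baseband signals and frequency-domain criteria; the cubic PA model, polynomial order, and drive levels below are illustrative). A polynomial g is fitted by least squares so that g(PA(x)) ≈ x, then used as the predistorter, the usual indirect-learning shortcut:

```python
import numpy as np

def pa(x):
    """Hypothetical mildly compressive memoryless PA (cubic soft saturation)."""
    return x - 0.1 * x**3

def fit_predistorter(x, y, order=5):
    """Least-squares fit of coefficients c so that sum_k c_k * y**k ≈ x."""
    A = np.vander(y, order + 1, increasing=True)
    c, *_ = np.linalg.lstsq(A, x, rcond=None)
    return c

def apply_poly(c, x):
    return sum(ck * x**k for k, ck in enumerate(c))

rng = np.random.default_rng(1)
x = rng.uniform(-0.8, 0.8, 2000)       # training drive samples
c = fit_predistorter(x, pa(x))         # post-inverse fitted by LS
linearised = pa(apply_poly(c, x))      # predistorter followed by PA
```

The cascade predistorter-then-PA is far closer to the identity than the bare PA, which is exactly the linearization goal.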
Styles APA, Harvard, Vancouver, ISO, etc.
34

Huang, Chen-Yi, et 黃振億. « Efficient Partial Distortion Search Algorithm for Motion Estimation by Dynamically Selecting the Representative Pixels ». Thesis, 2005. http://ndltd.ncl.edu.tw/handle/17771141698446361117.

Texte intégral
Résumé :
Master's thesis
Chang Gung University
Graduate Institute of Electrical Engineering
93
Motion estimation and motion compensation are used in compression standards such as MPEG-x and H.26x to eliminate the temporal redundancy of video sequences. However, motion estimation requires a large amount of computation and is very time consuming, so many fast block matching algorithms have been proposed, most of which degrade image quality. In this thesis, we propose an improved partial distortion search method that speeds up motion estimation while avoiding quality loss. A dynamic pixel-selection scan order is introduced, which efficiently identifies the most representative pixels so that unsuitable candidate blocks are eliminated early; applying this scan order from the center of the search window further reduces the computational complexity of block matching. Moreover, we modify two techniques of HGPDS, namely its early termination and its spiral scan of the search window, making the algorithm faster. Experiments show that our method outperforms previous algorithms.
Styles APA, Harvard, Vancouver, ISO, etc.
35

Jiang, Yan-ting, et 江彥廷. « Quality Estimation for H.264/SVC Spatial Scalability based on a New Quantization Distortion Model ». Thesis, 2011. http://ndltd.ncl.edu.tw/handle/81656358005251732964.

Texte intégral
Résumé :
Master's thesis
National Central University
Institute of Communication Engineering
99
Scalable video coding provides efficient compression of a video bitstream equipped with various scalability configurations. The H.264 scalable extension (H.264/SVC) is the most recent scalable coding standard; it employs state-of-the-art inter-layer prediction to provide higher coding efficiency than previous standards. Moreover, the video quality required in different situations, depending on link conditions or video content, usually differs, so efficiently providing suitable video quality to users under different situations is an important issue. This work proposes a quantization-distortion (Q-D) model for H.264/SVC spatial scalability that estimates video quality before actual encoding is performed. We introduce a residual decomposition for the three inter-layer prediction types: residual prediction, intra prediction, and motion prediction. The residual can be decomposed into the previous distortion and a prior-residual, both of which can be estimated before encoding; for a single layer these are the distortion of the previous frame and the difference between the two original frames. The distortion can then be modeled as a function of the quantization step and the prior-residual. In simulations, the proposed model estimates the actual Q-D curves for each inter-layer prediction type with an accuracy of up to 94.98%.
Styles APA, Harvard, Vancouver, ISO, etc.
36

Ciou, Jian-Hong, et 邱建宏. « New String Techniques on Partial Distortion Search for Fast Optimal Motion Estimation of the Video Encoding ». Thesis, 2012. http://ndltd.ncl.edu.tw/handle/y65nu9.

Texte intégral
Résumé :
Master's thesis
National Taichung University of Science and Technology
Department of Computer Science and Information Engineering (Master's Program)
100
Motion estimation is widely used in many video applications such as video compression, video segmentation, and video tracking. Efficient block matching algorithms (BMAs) have received considerable attention and have been adopted in modern video compression standards such as ISO/IEC MPEG-1/2/4 and ITU-T H.263/H.264; experiments have shown that motion estimation can consume 60% (one reference frame) to 80% (five reference frames) of the total encoding time of an H.264 codec. The full search block matching (FSBM) method was first proposed to find the best-matching macroblock, but its large computational overhead makes it unsuitable for real-time use. Partial distortion search (PDS) is a representative fast matching method: a basic early-termination scheme that uses the accumulated partial sum of absolute differences (SAD) to eliminate impossible motion vector candidates for a matching block. In this thesis, we propose two new sorting techniques to obtain a better matching order for the lossless partial distortion search algorithm. In the first method, the matching order is computed by sorting the level-difference sets calculated from the approximate distortion between the two-bit-transformed coding block and candidate block, and is then applied to the typical PDS. In the second method, the pixel matching order of the SAD calculation is sorted by the difference between each pixel value and the block mean. Experimental results show that both proposed methods effectively reduce computational complexity without affecting video quality.
Styles APA, Harvard, Vancouver, ISO, etc.
37

Tonini, Andrea. « Remote estimation of target height using unmanned air vehicles (UAVs) ». Master's thesis, 2019. http://hdl.handle.net/10362/84351.

Texte intégral
Résumé :
Dissertation presented as partial requirement for obtaining the Master’s degree in Information Management, with a specialization in Business Intelligence and Knowledge Management
Estimation of target height from video is used in several applications, such as monitoring the growth of agricultural plants or, in surveillance scenarios, supporting the identification of persons of interest. Several studies have been conducted in this domain, but in almost all cases only fixed cameras were considered. Nowadays, lightweight UAVs are often employed for remote monitoring and surveillance due to their mobility and freedom of camera orientation. This thesis focuses on how height estimation can be swiftly performed with a gimballed camera mounted on a UAV, using a pinhole camera model after camera calibration and image distortion compensation. The model is tailored to outdoor UAV applications and generalized to any camera orientation defined by Euler angles. The procedure was tested with real data collected with a regular-market lightweight quad-copter, and the collected data was also used to perform an uncertainty analysis of the estimation. Finally, since the height of a person who is not standing perfectly upright can be derived from relationships between body parts or ratios of human facial features, this thesis proposes to retrieve the pixel spacing measured along the vertical target, called here the Vertical Sample Distance (VSD), to quickly measure vertical sub-portions of the target.
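The VSD idea can be sketched in the simplest geometry: with the optical axis horizontal and the target plane at range d, one image row spans roughly d / f_px metres on the target, so a target covering h_px rows is about h_px · VSD metres tall. The thesis generalizes this to arbitrary Euler-angle orientations; the field-of-view, range, and pixel counts below are illustrative:

```python
import math

def focal_px(image_height_px, vfov_deg):
    """Focal length in pixels from the vertical field of view."""
    return (image_height_px / 2.0) / math.tan(math.radians(vfov_deg) / 2.0)

def vertical_sample_distance(range_m, f_px):
    """Metres on the target plane covered by one pixel row (VSD)."""
    return range_m / f_px

def target_height(h_px, range_m, image_height_px, vfov_deg):
    """Metric height of a target spanning h_px rows at range range_m."""
    vsd = vertical_sample_distance(range_m, focal_px(image_height_px, vfov_deg))
    return h_px * vsd
```

Because VSD is a per-row scale, the same factor converts any vertical sub-portion of the target (e.g. head-to-shoulder) from pixels to metres.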
Styles APA, Harvard, Vancouver, ISO, etc.
38

Yang, Chao-Cing, et 楊趙清. « Priority-Based Partial Distortion Search Algorithm for Fast Motion Estimation in H.264/AVC Video Coding Standard ». Thesis, 2007. http://ndltd.ncl.edu.tw/handle/z5zkmv.

Texte intégral
Résumé :
Master's thesis
National Dong Hwa University
Department of Electrical Engineering
95
The newest video coding standard, H.264, provides considerable performance improvement over a wide range of bit rates and video resolutions compared to previous standards. In H.264, the motion estimation module accounts for the largest share of the encoder's computational complexity, and it supports seven different block sizes with a tree-structured macroblock partition. In order to reduce the computational complexity of motion estimation in H.264, this thesis proposes a priority-based normalized partial distortion search algorithm for fast motion estimation, together with adaptive search range decision, search method decision, and early termination for very-low-motion MBs. Experimental results show that the proposed algorithms provide a significant reduction in coding time with only slight PSNR degradation compared to several fast motion estimation algorithms implemented on the JM12.2 reference software platform.
Styles APA, Harvard, Vancouver, ISO, etc.
39

Xu, Zhi-Xiong, et 許智雄. « Combining Dynamic Search Range with Improved Normalized Partial Distortion Search for Fast Motion Estimation Algorithm in Video Coding ». Thesis, 2012. http://ndltd.ncl.edu.tw/handle/cg67wv.

Texte intégral
Résumé :
Master's thesis
National Taichung University of Science and Technology
Department of Computer Science and Information Engineering (Master's Program)
100
H.264/AVC is a video coding standard developed by the JVT with very good coding efficiency. In recent years, with improvements in network bandwidth and advances in technology, multimedia products and services have continued to emerge, such as video telephony, video on demand (VOD), video conferencing, and high-definition digital television (HDTV), all of which need the support of a good video compression standard. In the H.264/AVC encoder architecture, motion estimation (ME) accounts for the largest share of computation in the whole compression process, and this thesis proposes a new motion estimation algorithm that reduces the number of search points while maintaining visual quality. A variety of fast motion estimation algorithms were analyzed; the normalized partial distortion search (NPDS) algorithm is among those with the best coding performance. However, in NPDS the order and arrangement of block-matching search points are fixed and cannot be adapted to the characteristics of each frame. This study therefore proposes a fast motion estimation algorithm for video coding that combines a dynamic search range with partial distortion search: the pixel matching order is computed from the characteristics of each frame, and the extent of the search window is adjusted based on changes in frame content and the motion vectors of adjacent blocks. Experimental results show that the proposed method efficiently reduces the number of search points in video encoding while maintaining video quality.
Styles APA, Harvard, Vancouver, ISO, etc.
40

Hong, Hao-feng, et 洪浩峰. « Estimation of nonlinear distortions of loudspeakers ». Thesis, 1997. http://ndltd.ncl.edu.tw/handle/46384440177344027798.

Texte intégral
Résumé :
Master's thesis
National Taiwan Ocean University
Department of Electrical Engineering
85
The loudspeaker in a sound system has much to do with the quality of the speech or music signals the system replays, yet it is also the weakest link of the overall system. This is because nonlinear phenomena may exist in the electronic circuit, mechanical structure, and material design of a loudspeaker, causing it to generate harmonic and intermodulation distortions that degrade the performance of the sound system. To evaluate the quality of a loudspeaker, an accurate and efficient technique for estimating the nonlinear distortions is required, whereas conventional approaches to measuring the nonlinear distortion parameters of a loudspeaker are time-consuming and cumbersome. In this thesis, we use a Volterra model to describe the relation between the input signal and the nonlinear distortion introduced by the loudspeaker, and thereby reformulate a parameter-measurement problem as a parameter-estimation problem. In the proposed method, we send a bandlimited noise signal to the loudspeaker and record both the input and the response; these signals are then used by an algorithm to estimate the transfer functions of the Volterra model, from which the distortion parameters can be obtained. We adopt two parameter-estimation methods based on the Volterra model: the frequency-domain method and the anti-aliasing frequency-domain method. Through computer simulation and practical measurements of loudspeaker distortion, we demonstrate that the anti-aliasing frequency-domain method estimates the nonlinear distortions better than the frequency-domain method. The estimation results indicate that the second-order distortions of a loudspeaker are more significant at low frequencies than at high frequencies, which matches our understanding of loudspeaker characteristics.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
41

Luo, Shih-Chung, et 羅世崇. « Estimating Total-Harmonic-Distortion of Analog Signal in Time-Domain ». Thesis, 2012. http://ndltd.ncl.edu.tw/handle/54418096252022565708.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Master's Program, Graduate Institute of Electronic and Optoelectronic Engineering
Academic year 100 (ROC calendar)
In this work, we propose a methodology to estimate total harmonic distortion (THD) in the time domain. Instead of using the fast Fourier transform (FFT), we express the signal under test as a Taylor expansion and derive the relationship between the Taylor-series coefficients and the harmonic composition. By analyzing the features of the signal under test with respect to the Taylor-series coefficients and extracting the effective information, we calculate these coefficients and translate them into harmonic distortion via Parseval's theorem. The only instrument the proposed method requires is an oscilloscope, which is more common than a spectrum analyzer; for low-speed voice or physiological analog signals in particular, the required spectrum analyzer is specialized and expensive. In addition, the THD estimates of the proposed method are more stable than those of the FFT, whose accuracy is proportional to the number of samples.
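The core idea, relating the Taylor (polynomial) coefficients of a static nonlinearity to harmonic amplitudes, can be sketched in a simplified third-order case. This is an illustration under assumed coefficients, not the thesis's measurement procedure: for a cosine drive, the identities cos²θ = (1 + cos 2θ)/2 and cos³θ = (3 cos θ + cos 3θ)/4 map the polynomial coefficients directly to harmonics, so THD falls out without an FFT.

```python
import numpy as np

fs, f0, n = 48000, 1000, 4800
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f0 * t)              # ideal drive tone
y = x + 0.02 * x**2 + 0.005 * x**3          # weakly nonlinear response under test

# Fit the Taylor (polynomial) coefficients of the static nonlinearity in time domain
a3, a2, a1, a0 = np.polyfit(x, y, 3)

# Map polynomial coefficients to harmonic amplitudes via the cos^2 / cos^3 identities
h1 = a1 + 0.75 * a3
h2 = 0.5 * a2
h3 = 0.25 * a3
thd = np.sqrt(h2**2 + h3**2) / h1           # ~1.0% for these coefficients
```

Here the distortion is assumed memoryless and purely polynomial; the thesis's syndrome-analysis step for extracting the coefficients from a real oscilloscope trace is more involved.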
Citation styles: APA, Harvard, Vancouver, ISO, etc.
42

« Estimation and correction of geometric distortions in side-scan sonar images ». Research Laboratory of Electronics, Massachusetts Institute of Technology, 1990. http://hdl.handle.net/1721.1/4196.

Full text
Abstract:
Daniel T. Cobra.
Also issued as Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Ocean Engineering, 1990.
Includes bibliographical references (p. 137-141).
Supported in part by the Defense Advanced Research Projects Agency monitored by the Office of Naval Research. N00014-89-J-1489 Supported in part by the National Science Foundation. MIP-87-14969 Supported in part by Lockheed Sanders, Inc.
Citation styles: APA, Harvard, Vancouver, ISO, etc.
43

Raju, Jampana V. S. « Air-Sea Flux Measurements Over The Bay Of Bengal During A Summer Monsoon ». Thesis, 2006. http://hdl.handle.net/2005/437.

Full text
Abstract:
The majority of the rain-producing monsoon systems in India form or intensify over the Bay of Bengal and move onto the land. We expect air-sea interaction to be a crucial factor in the frequent genesis and intensification of monsoon systems over the Bay. Knowledge of air-sea fluxes is essential for determining air-sea interactions. However, the Bay remains a poorly monitored ocean basin, and the near-surface conditions during the monsoon months remain to be studied in detail. For example, we do not yet know which of the various flux formulae used in general circulation models are appropriate over the Bay, since there are no direct measurements of surface fluxes there during the peak monsoon months. The present thesis aims to fill that gap. In this thesis, fluxes were computed using the bulk method, the inertial dissipation method, and the direct covariance method. The flux comparisons were reasonable under certain flow conditions, which are clearly identified. When these conditions are not met, the differences among the fluxes from these methods can be larger than the inherent uncertainties in the methods. Stratification, flow distortion, and averaging time are the key variables that give rise to the differences in the fluxes. It is found that there are significant differences among the surface flux estimates computed from different atmospheric general circulation model bulk parameterization schemes. In this thesis, the flow gradients are estimated by taking advantage of the natural pitch and roll motion of the ship. An attempt is made to gain insight into flow distortion and its influence on the fluxes. In our analysis, it is found that the displacement of the streamlines is an important component in quantifying flow distortion.
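The bulk method mentioned in the abstract estimates turbulent air-sea fluxes from mean quantities via exchange coefficients. A minimal sketch, with assumed constant transfer coefficients and air density (operational schemes such as COARE make these stability- and height-dependent):

```python
RHO_AIR = 1.2      # air density, kg m^-3 (assumed constant)
CP_AIR = 1004.0    # specific heat of air, J kg^-1 K^-1
LV = 2.45e6        # latent heat of vaporization, J kg^-1

def bulk_fluxes(u, t_sea, t_air, q_sea, q_air,
                cd=1.2e-3, ch=1.1e-3, ce=1.2e-3):
    """Bulk aerodynamic estimates of momentum, sensible and latent heat fluxes.

    u: wind speed (m/s); t in degC; q specific humidity (kg/kg).
    cd, ch, ce: assumed constant transfer coefficients.
    """
    tau = RHO_AIR * cd * u**2                        # momentum flux, N m^-2
    shf = RHO_AIR * CP_AIR * ch * u * (t_sea - t_air)  # sensible heat, W m^-2
    lhf = RHO_AIR * LV * ce * u * (q_sea - q_air)      # latent heat, W m^-2
    return tau, shf, lhf

tau, shf, lhf = bulk_fluxes(5.0, 29.0, 28.0, 0.020, 0.018)
```

The inertial dissipation and direct covariance methods compared in the thesis instead work from the turbulence spectra and from the covariances of the fluctuating quantities, respectively.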
Citation styles: APA, Harvard, Vancouver, ISO, etc.
44

Lindsey, Laurence Francis. « Estimating the effects of lens distortion on serial section electron microscopy images ». Thesis, 2012. http://hdl.handle.net/2152/ETD-UT-2012-08-6210.

Full text
Abstract:
Section to section alignment is a preliminary step to the creation of three dimensional reconstructions from serial section electron micrographs. Typically, the micrograph of one section is aligned to its neighbors by analyzing a set of fiducial points to calculate an appropriate polynomial transform. This transform is then used to map all of the pixels of the micrograph into alignment. Such transforms are usually linear or piecewise linear in order to limit the accumulation of small errors, which may occur with the use of higher-order approximations. Linear alignment is unable to correct common higher order geometric distortions, such as lens distortion in the case of TEM, and scan distortion in the case of transmission-mode SEM. Here, we attempt to show that standard calibration replicas may be used to calculate a high order distortion model despite the irregularities that are often present in them. We show that SEM scan distortion has much less of an effect than TEM lens distortion; however, the effect of TEM distortion on prior geometric measurements made over three-dimensional reconstructions of dendrites, axons, and synapses and their subcellular compartments is negligible.
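The alignment step the abstract describes, fitting a transform to fiducial correspondences, can be illustrated with the linear (affine) case. A minimal least-squares sketch on synthetic points; the thesis is concerned with higher-order polynomial models of lens and scan distortion, for which the design matrix simply gains polynomial columns:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping fiducial points src -> dst.

    src, dst: (n, 2) arrays of corresponding point coordinates.
    Returns a (3, 2) coefficient matrix: rows 0-1 are the linear part,
    row 2 is the translation.
    """
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])           # design matrix (n, 3)
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs

# Synthetic fiducials: a known affine warp plus translation
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
M = np.array([[1.1, 0.1], [-0.05, 0.95]])
dst = src @ M + np.array([2.0, -1.0])

C = fit_affine(src, dst)
mapped = src @ C[:2] + C[2]                         # apply the fitted transform
```

With noisy fiducials from a real calibration replica, the same least-squares fit averages out the irregularities the abstract mentions.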
Citation styles: APA, Harvard, Vancouver, ISO, etc.
45

De, Michele Anne McGrath. « Estimating human cochlear traveling wave velocity using distortion product otoacoustic emission and auditory brainstem response measurements / ». 2003. http://wwwlib.umi.com/dissertations/fullcit/3073584.

Full text
Citation styles: APA, Harvard, Vancouver, ISO, etc.
