Dissertations / Theses on the topic 'Low computational complexity algorithms'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Low computational complexity algorithms.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Troiani, Chiara. "Vision-Aided Inertial Navigation : low computational complexity algorithms with applications to Micro Aerial Vehicles." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM021/document.

Full text
Abstract:
Accurate egomotion estimation is of utmost importance for any navigation system. Nowadays different sensors are adopted to localize and navigate in unknown environments, such as GPS, range sensors, cameras, magnetic field sensors, and inertial sensors (IMUs). In order to obtain a robust egomotion estimate, the information from multiple sensors is fused. Despite the improvements of technology in providing more accurate sensors, and the efforts of the mobile robotics community in developing better-performing navigation algorithms, there are still open challenges. Furthermore, the growing interest of the robotics community in micro robots and robot swarms pushes towards the employment of low-weight, low-cost sensors and low computational complexity algorithms. In this context, inertial sensors and monocular cameras, thanks to their complementary characteristics, low weight, low cost and widespread use, represent an interesting sensor suite.

This dissertation is a contribution in the framework of vision-aided inertial navigation and tackles the problems of data association and pose estimation, aiming for low computational complexity algorithms applied to MAVs. Regarding data association, a novel method to estimate the relative motion between two consecutive camera views is proposed. It only requires the observation of a single feature in the scene and the knowledge of the angular rates from an IMU, under the assumption that the local camera motion lies in a plane perpendicular to the gravity vector. Two very efficient algorithms to remove the outliers of the feature-matching process are provided under the abovementioned motion assumption. In order to generalize the approach to 6DoF motion, two feature correspondences and gyroscopic data from IMU measurements are necessary. In this case, two algorithms are provided to remove wrong data associations in the feature-matching process. In the case of a monocular camera mounted on a quadrotor vehicle, motion priors from the IMU are used to discard wrong estimations.

Regarding the pose estimation problem, this thesis provides a closed-form solution which gives the system pose from three natural features observed in a single camera image, once the roll and pitch angles are obtained from the inertial measurements under the planar ground assumption. In order to tackle the pose estimation problem in dark or featureless environments, a system equipped with a monocular camera, inertial sensors and a laser pointer is considered. The system moves in the surroundings of a planar surface, and the laser pointer produces a laser spot on that surface. The laser spot is observed by the monocular camera and represents the only point feature considered. Through an observability analysis it is demonstrated that the physical quantities which can be determined by exploiting the measurements provided by the aforementioned sensor suite during a short time interval are: the distance of the system from the planar surface; the component of the system speed that is orthogonal to the planar surface; the relative orientation of the system with respect to the planar surface; and the orientation of the planar surface with respect to gravity. A simple recursive method to estimate all the aforementioned observable quantities is provided.

All the contributions of this thesis are validated through experimental results using both simulated and real data. Thanks to their low computational complexity, the proposed algorithms are very suitable for real-time implementation on systems with limited on-board computation resources. The considered sensor suite is mounted on a quadrotor vehicle, but the contributions of this dissertation can be applied to any mobile device.
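As a concrete illustration of the one-point outlier-rejection idea summarized above, the sketch below uses a gyro-derived rotation and the planar-motion assumption, so that a single correspondence fixes the (scale-free) translation direction, after which all matches are scored against the implied epipolar constraint. This is a minimal, hypothetical reconstruction in the spirit of 1-point RANSAC, not the thesis's exact algorithm; names, thresholds and the camera/IMU alignment are assumptions.

```python
import numpy as np

def skew(t):
    return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])

def one_point_outlier_rejection(p1, p2, R_imu, thresh=1e-4, iters=50):
    """p1, p2: (N, 3) normalized homogeneous rays in two consecutive views.
    R_imu: rotation between the views, integrated from gyro readings.
    Planar motion: translation t = (cos a, sin a, 0), up to scale."""
    rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(p1), bool)
    for _ in range(iters):
        i = rng.integers(len(p1))
        q = R_imu @ p1[i]
        # Solve p2 . (t x q) = 0 for the single unknown angle a.
        a = np.arctan2(-(p2[i, 2] * q[1] - p2[i, 1] * q[2]),
                       p2[i, 0] * q[2] - p2[i, 2] * q[0])
        E = skew([np.cos(a), np.sin(a), 0.0]) @ R_imu
        # Sampson epipolar error for all correspondences.
        Ep1, Etp2 = p1 @ E.T, p2 @ E
        num = np.sum(p2 * Ep1, axis=1) ** 2
        den = (Ep1[:, 0]**2 + Ep1[:, 1]**2
               + Etp2[:, 0]**2 + Etp2[:, 1]**2 + 1e-12)
        inliers = num / den < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```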
2

Goussevskaia, Olga. "Computational complexity and scheduling algorithms for wireless networks." Konstanz Hartung-Gorre, 2009. http://d-nb.info/997891122/04.

Full text
3

Udupa, Pramod. "Algorithmes parallèles et architectures évolutives de faible complexité pour systèmes optiques OFDM cohérents temps réel." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S039/document.

Full text
Abstract:
In this thesis, low-complexity algorithms and architectures for CO-OFDM systems are explored. First, low-complexity algorithms for estimation of timing and carrier frequency offset (CFO) in a dispersive channel are studied. A novel low-complexity timing synchronization algorithm, which can withstand a large amount of dispersive delay, is proposed and compared with previous proposals. Then, the problem of realizing a low-complexity parallel architecture is studied. A generalized scalable parallel architecture, which can be used to realize any auto-correlation algorithm, is proposed. It is then extended to handle multiple parallel samples from the ADC and provide outputs which can match the input ADC rate. The scalability of the architecture for a higher number of parallel outputs and different kinds of auto-correlation algorithms is explored.

An algorithm-architecture co-design approach is then applied to the entire CO-OFDM transceiver chain. At the transmitter side, a radix-2² algorithm for the IFFT is chosen and a parallel Multipath Delay Commutator (MDC) Feed-forward (FF) architecture is designed, which consumes fewer resources compared to MDC FF architectures of radix-2/4. At the receiver side, an efficient algorithm for integer CFO estimation is adopted and efficiently realized without the use of complex multipliers. Reduction in complexity is achieved due to efficient architectures for timing synchronization, FFT and integer CFO estimation. A fixed-point analysis of the entire transceiver chain is done to find fixed-point-sensitive blocks, which affect the bit error rate (BER) significantly. The proposed algorithms are validated in optical experiments with the help of an arbitrary waveform generator (AWG) at the transmitter and a digital storage oscilloscope (DSO) and Matlab at the receiver. BER plots are used to show the validity of the system built. A hardware implementation of the proposed synchronization algorithm is validated on a real-time FPGA platform.
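For intuition, timing synchronization of this kind is commonly built on a sliding auto-correlation between the received samples and a delayed copy of themselves, as in the classic Schmidl-Cox metric; the sketch below is a minimal illustration of that family (not the thesis's dispersion-tolerant variant), and the half-window length L is an assumed parameter.

```python
import numpy as np

def autocorr_timing_metric(r, L):
    """Sliding delay-and-correlate timing metric M(d) = |P(d)|^2 / R(d)^2
    for a preamble with two identical halves of length L (Schmidl-Cox style)."""
    N = len(r) - 2 * L
    M = np.zeros(N)
    for d in range(N):
        seg1 = r[d:d + L]
        seg2 = r[d + L:d + 2 * L]
        P = np.sum(np.conj(seg1) * seg2)   # auto-correlation term
        R = np.sum(np.abs(seg2) ** 2)      # energy normalization
        M[d] = np.abs(P) ** 2 / R ** 2
    return M

# usage: d_hat = int(np.argmax(autocorr_timing_metric(rx_samples, L=64)))
```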
4

Danjean, Ludovic. "Low-Complexity Iterative Reconstruction Algorithms in Compressed Sensing." International Foundation for Telemetering, 2013. http://hdl.handle.net/10150/579661.

Full text
Abstract:
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV
In this paper we focus on two low-complexity iterative reconstruction algorithms in compressed sensing. These algorithms, called the approximate message-passing algorithm and the interval-passing algorithm, are suitable for recovering sparse signals from a small set of measurements. Depending on the type of measurement matrix (sparse or random) used to acquire the samples of the signal, one or the other reconstruction algorithm can be used. We present the reconstruction results of these two algorithms in terms of the proportion of correct reconstructions in the noise-free case. We also report possible practical applications of compressed sensing, where the choice of the measurement matrix and the reconstruction algorithm is often governed by the constraints of the considered application.
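To make the first of these concrete, a generic approximate message-passing (AMP) iteration for sparse recovery fits in a few lines: a matched-filter step, soft thresholding, and an Onsager correction on the residual. This is a standard textbook variant under assumed parameters (threshold scaling theta, iteration count), not necessarily the exact version studied in the paper.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp(A, y, theta=1.5, iters=30):
    """Approximate message passing for y = A x with sparse x (sketch)."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        x = soft(x + A.T @ z, theta * np.sqrt(np.mean(z ** 2)))
        # Onsager term: (||x||_0 / m) times the previous residual
        z = y - A @ x + (np.count_nonzero(x) / m) * z
    return x
```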
5

Vorhies, John T. "Low-complexity Algorithms for Light Field Image Processing." University of Akron / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=akron1590771210097321.

Full text
6

Uppman, Hannes. "On Some Combinatorial Optimization Problems : Algorithms and Complexity." Doctoral thesis, Linköpings universitet, Programvara och system, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-116859.

Full text
Abstract:
This thesis is about the computational complexity of several classes of combinatorial optimization problems, all related to the constraint satisfaction problems. A constraint language consists of a domain and a set of relations on the domain. For each such language there is a constraint satisfaction problem (CSP). In this problem we are given a set of variables and a collection of constraints, each of which is constraining some variables with a relation in the language. The goal is to determine if domain values can be assigned to the variables in a way that satisfies all constraints. An important question is for which constraint languages the corresponding CSP can be solved in polynomial time. We study this kind of question for optimization problems related to the CSPs. The main focus is on extended minimum cost homomorphism problems. These are optimization versions of CSPs where instances come with an objective function given by a weighted sum of unary cost functions, and where the goal is not only to determine if a solution exists, but to find one of minimum cost. We prove a complete classification of the complexity for these problems on three-element domains. We also obtain a classification for the so-called conservative case. Another class of combinatorial optimization problems are the surjective maximum CSPs. These problems are variants of CSPs where a non-negative weight is attached to each constraint, and the objective is to find a surjective mapping of the variables to values that maximizes the weighted sum of satisfied constraints. The surjectivity requirement causes these problems to behave quite different from for example the minimum cost homomorphism problems, and many powerful techniques are not applicable. We prove a dichotomy for the complexity of the problems in this class on two-element domains. An essential ingredient in the proof is an algorithm that solves a generalized version of the minimum cut problem. This algorithm might be of independent interest. In a final part we study properties of NP-hard optimization problems. This is done with the aid of restricted forms of polynomial-time reductions that for example preserves solvability in sub-exponential time. Two classes of optimization problems similar to those discussed above are considered, and for both we obtain what may be called an easiest NP-hard problem. We also establish some connections to the exponential time hypothesis.
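To ground the definitions above, the brute-force sketch below spells out an instance of the extended minimum-cost homomorphism problem on a small domain: enumerate all assignments, keep those satisfying the relational constraints, and return one minimizing the weighted sum of unary costs. This is purely illustrative of the problem statement (exponential time, made-up instance); the thesis is about which constraint languages admit polynomial-time algorithms.

```python
from itertools import product

def min_cost_hom(n_vars, domain, constraints, unary_costs):
    """constraints: list of (scope_tuple, relation_as_set_of_tuples).
    unary_costs[v][d] = cost of assigning value d to variable v."""
    best, best_cost = None, float("inf")
    for assign in product(domain, repeat=n_vars):
        if all(tuple(assign[v] for v in scope) in rel
               for scope, rel in constraints):
            cost = sum(unary_costs[v][assign[v]] for v in range(n_vars))
            if cost < best_cost:
                best, best_cost = assign, cost
    return best, best_cost

# toy instance: 2 variables over {0,1,2}, one binary "not-equal" relation
neq = {(a, b) for a in range(3) for b in range(3) if a != b}
print(min_cost_hom(2, range(3), [((0, 1), neq)],
                   [{0: 5, 1: 1, 2: 2}, {0: 0, 1: 4, 2: 1}]))  # ((1, 0), 1)
```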
7

Smith, Justin N. "Computational complexity, bounded rationality and the theory of games." Thesis, University of Oxford, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365642.

Full text
8

Lundqvist, Samuel. "Computational algorithms for algebras." Doctoral thesis, Stockholm : Department of Mathematics, Stockholm University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-31552.

Full text
Abstract:
Diss. (summary), Stockholm: Stockholm University, 2009.
At the time of the doctoral defence, the following papers were unpublished and had a status as follows: Paper 3: Manuscript. Paper 4: Manuscript. Paper 5: Manuscript. Paper 6: Manuscript. Comprises 6 papers.
9

Neyer, Gabriele. "Algorithms, complexity, and software engineering in computational geometry : case studies /." [S.l.] : [s.n.], 2000. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=13586.

Full text
10

Brooks, Duncan John. "Adaptive algorithms for low complexity equalizers in mobile communications." Thesis, Imperial College London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.312445.

Full text
11

Li, Peng. "Low-complexity iterative detection algorithms for multi-antenna systems." Thesis, University of York, 2011. http://etheses.whiterose.ac.uk/2192/.

Full text
Abstract:
Multiple-input multiple-output (MIMO) techniques have been widely employed in different wireless systems with many advantages. By using multiple antennas, a system is able to transmit multiple data streams simultaneously and within the same frequency band. The methods known as spatial multiplexing (SM) and spatial diversity (SD) improve the spectral efficiency and link reliability of wireless communication systems without requiring additional transmit power. By introducing channel coding in the transmission procedure, information redundancy further improves the reliability of SM links and the quality of service for next-generation communication systems. However, the throughput performance of these systems is limited by interference. A number of different interference suppression techniques have been reported in the literature. These techniques can generally be categorised into two groups: preprocessing techniques at the transmitter side and decoding techniques at the receiver side. Generally speaking, in the ideal case the preprocessing techniques orthogonalize the interfering channels, so that the receiver experiences interference-free transmission; however, a feedback channel is required to provide the channel information. In this thesis, on the other hand, we are interested in the decoding part, which uses various techniques to improve the signal-to-interference-and-noise ratio (SINR) of the desired symbols. To achieve this goal, a number of low-complexity iterative detection algorithms have been investigated, with the main focus on interference cancellation techniques.

Firstly, we investigate the traditional successive interference cancellation (SIC) algorithm. SIC has the ability to separate spatially multiplexed signals on a MIMO channel. However, the low detection diversity order as well as the error propagation effect restrict the bit error performance of such detectors. We propose a multiple-feedback SIC (MF-SIC) method to enhance the performance of conventional SIC detection by introducing feedback candidates and reliability checking. This algorithm is able to provide significant performance gains with little additional complexity, without the protection of channel codes. The MF-SIC algorithm is then incorporated into an iterative detection and decoding (IDD) scheme to process soft information.

Secondly, in the case where the MIMO channel is time-varying, conventional detection algorithms generally incur a high complexity in the time domain. In order to address this problem, a decision feedback algorithm is introduced, and adaptive algorithms are derived to update the forward and backward filters that perform the detection at each time instant. A constellation-based estimation refinement scheme is also introduced, and the performance is significantly improved. The proposed decision feedback algorithm is incorporated into an IDD scheme that performs iterative (turbo) interference cancellation.

Finally, inter-cell interference is considered in a multi-cell, high-frequency-reuse scenario, and distributed iterative detection (DID) algorithms are investigated. A large amount of information needs to be transmitted via a wired backhaul network when optimal distributed detection exchanges all the soft estimates among adjacent base stations (BSs). To address this problem we consider a reduced message passing (RMP) technique in which each BS generates a detection list with the probabilities for the desired symbol, sorted according to the calculated probability density. RMP introduces low backhaul overhead compared with hard bit exchange, and outperforms previously reported hard/soft information exchange algorithms.
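As background for the SIC variants discussed above, a plain zero-forcing SIC detector can be sketched as follows: detect the stream with the least noise amplification, slice it to the constellation, cancel its contribution, and repeat. This is the textbook baseline (with a hypothetical QPSK slicer), not the proposed MF-SIC.

```python
import numpy as np

def zf_sic(H, y, slicer):
    """Zero-forcing successive interference cancellation (sketch).
    H: (nr, nt) channel, y: (nr,) received vector, slicer: maps a value
    to the nearest constellation point."""
    H = H.astype(complex)
    y = y.astype(complex)
    remaining = list(range(H.shape[1]))
    s_hat = np.zeros(H.shape[1], complex)
    while remaining:
        W = np.linalg.pinv(H[:, remaining])   # ZF filter for active streams
        z = W @ y
        # pick the stream with the smallest ZF noise amplification
        i = int(np.argmin(np.sum(np.abs(W) ** 2, axis=1)))
        k = remaining[i]
        s_hat[k] = slicer(z[i])
        y = y - H[:, k] * s_hat[k]            # cancel the detected stream
        remaining.pop(i)
    return s_hat

qpsk = lambda v: (np.sign(v.real) + 1j * np.sign(v.imag)) / np.sqrt(2)
# usage: s = zf_sic(H, y, qpsk)
```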
12

Wang, Tong. "Low-complexity signal processing algorithms for wireless sensor networks." Thesis, University of York, 2012. http://etheses.whiterose.ac.uk/2844/.

Full text
Abstract:
Recently, wireless sensor networks (WSNs) have attracted a great deal of research interest because of their unique features, which allow a wide range of applications in the areas of military, environment, health and home. One of the most important constraints on WSNs is the low power consumption requirement, as sensor nodes carry limited, generally irreplaceable, power sources. Therefore, low complexity and high energy efficiency are the most important design characteristics for WSNs. In this thesis, we focus on the development of low-complexity signal processing algorithms for the physical layer and cross-layer designs of WSNs.

For the physical-layer design, low-complexity set-membership (SM) channel estimation algorithms for WSNs are investigated. Two matrix-based SM algorithms are developed for the estimation of the complex matrix channel parameters. The main goal is to reduce the computational complexity significantly compared with existing channel estimators and to extend the lifetime of the WSN by reducing its power consumption.

For the cross-layer design, strategies to jointly design linear receivers and power allocation parameters for WSNs via an alternating optimization approach are proposed. We first consider a two-hop wireless sensor network with multiple relay nodes. Two design criteria are considered: the first minimizes the mean-square error (MMSE) and the second maximizes the sum rate (MSR) of the wireless sensor network. Then, in order to increase the applicability of our investigation, we develop joint strategies for general multihop WSNs. These can be considered an extension of the strategies proposed for two-hop WSNs, and more complex mathematical derivations are presented. The major advantage is that they are applicable to general multihop WSNs, which can provide larger coverage than two-hop WSNs.
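The set-membership idea above can be illustrated with the scalar-output SM-NLMS filter: the coefficient vector is updated only when the a priori error exceeds a prescribed bound, which is what saves computation on most samples. A minimal sketch under assumed parameters (error bound gamma, filter order), not the thesis's matrix-based estimators.

```python
import numpy as np

def sm_nlms(x, d, order=8, gamma=0.1, eps=1e-8):
    """Set-membership NLMS: sparse-in-time updates of an FIR estimate.
    x: input signal, d: desired signal, gamma: error-magnitude bound."""
    w = np.zeros(order)
    updates = 0
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]          # regressor
        e = d[n] - w @ u                  # a priori error
        if abs(e) > gamma:                # update only when the bound is violated
            mu = 1.0 - gamma / abs(e)     # step just large enough to meet the bound
            w += mu * e * u / (u @ u + eps)
            updates += 1
    return w, updates
```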
13

Borie, Richard Bryan. "Recursively constructed graph families : membership and linear algorithms." Diss., Georgia Institute of Technology, 1988. http://hdl.handle.net/1853/8140.

Full text
14

Xia, Ge. "Parameterized algorithms and computational lower bounds: a structural approach." Texas A&M University, 2005. http://hdl.handle.net/1969.1/4322.

Full text
Abstract:
Many problems of practical significance are known to be NP-hard, and hence are unlikely to be solved by polynomial-time algorithms. There are several ways to cope with the NP-hardness of a certain problem. The most popular approaches include heuristic algorithms, approximation algorithms, and randomized algorithms. Recently, parameterized computation and complexity have been receiving a lot of attention. By taking advantage of small or moderate parameter values, parameterized algorithms provide new venues for practically solving problems that are theoretically intractable. In this dissertation, we design efficient parameterized algorithms for several well-known NP-hard problems and prove strong lower bounds for some others. In doing so, we place emphasis on the development of new techniques that take advantage of the structural properties of the problems.

We present a simple parameterized algorithm for Vertex Cover that uses polynomial space and runs in time O(1.2738^k + kn). It improves both the previous O(1.286^k + kn)-time polynomial-space algorithm by Chen, Kanj, and Jia, and the very recent O(1.2745^k k^4 + kn)-time exponential-space algorithm by Chandran and Grandoni. This algorithm stands out for both its performance and its simplicity. Essential to the design of this algorithm are several new techniques that use structural information of the underlying graph to bound the search space. For Vertex Cover on graphs with degree bounded by three, we present a still better algorithm that runs in time O(1.194^k + kn), based on an "almost-global" analysis of the search tree. We also show that an important structural property of the underlying graphs – the graph genus – largely dictates the computational complexity of some important graph problems including Vertex Cover, Independent Set and Dominating Set.

We present a set of new techniques that allows us to prove almost tight computational lower bounds for some NP-hard problems, such as Clique, Dominating Set, Hitting Set, Set Cover, and Independent Set. The techniques are further extended to derive computational lower bounds on polynomial-time approximation schemes for certain NP-hard problems. Our results illustrate a new approach to proving strong computational lower bounds for some NP-hard problems under reasonable conditions.
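For contrast with the O(1.2738^k + kn) result, the classic bounded-search-tree algorithm for Vertex Cover already runs in O(2^k · m) time: pick any uncovered edge and branch on which endpoint joins the cover. The sketch below shows that baseline; the dissertation's contribution lies in much more refined branching and structural rules.

```python
def vertex_cover(edges, k):
    """Return a vertex cover of size <= k, or None (O(2^k * m) branching)."""
    if not edges:
        return set()
    if k == 0:
        return None
    u, v = edges[0]
    for pick in (u, v):                  # branch: pick must cover this edge
        rest = [e for e in edges if pick not in e]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {pick}
    return None

# usage: vertex_cover([(1, 2), (2, 3), (3, 4)], k=2) may return {1, 3}
```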
15

Nussbaum, Doron. "Directional separability in two and three dimensional space." Dissertation, Computer Science, Carleton University, Ottawa, 1988.

Find full text
16

Schüldt, Christian. "Low-Complexity Algorithms for Echo Cancellation in Audio Conferencing Systems." Doctoral thesis, Blekinge Tekniska Högskola, Avdelningen för elektroteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00541.

Full text
Abstract:
Ever since the birth of the telephony system, the problem of echoes, arising from impedance mismatch in 2/4-wire hybrids, or of acoustic echoes where a loudspeaker signal is picked up by a closely located microphone, has been ever-present. The removal of these echoes is crucial in order to achieve acceptable audio quality for conversation. Today, perhaps the most common approach to echo removal is cancellation, where an adaptive filter is used to produce an estimated replica of the echo, which is then subtracted from the echo-infested signal. Echo cancellation in practice requires extensive control of the filter adaptation process in order to obtain as rapid convergence as possible while also achieving robustness towards disturbances. Moreover, despite the rapid advancement in the computational capabilities of modern digital signal processors, there is a constant demand for low-complexity solutions that can be implemented using low-power and low-cost hardware. This thesis presents low-complexity solutions for echo cancellation, related both to the filter adaptation process itself and to controlling the adaptation process in order to obtain a robust system. Extensive simulations and evaluations using real-world recorded signals demonstrate the performance of the proposed solutions.
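The adaptive echo canceller described above is conventionally built around an FIR filter with NLMS-style updates: the filter models the echo path, its output is subtracted from the microphone signal, and the residual drives adaptation. A minimal sketch under assumed parameters (filter length, step size); real systems add the adaptation control that the thesis is about (double-talk detection, step-size control).

```python
import numpy as np

def nlms_echo_canceller(far_end, mic, order=256, mu=0.5, eps=1e-8):
    """far_end: loudspeaker signal, mic: microphone signal containing echo.
    Returns the echo-cancelled (residual) signal."""
    w = np.zeros(order)                     # echo-path estimate
    out = np.zeros(len(mic))
    for n in range(order, len(mic)):
        x = far_end[n - order:n][::-1]      # recent far-end samples
        e = mic[n] - w @ x                  # subtract echo estimate
        w += mu * e * x / (x @ x + eps)     # NLMS update
        out[n] = e
    return out
```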
17

Danjean, Ludovic. "Low Complexity Iterative Algorithms in Channel Coding and Compressed Sensing." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/307022.

Full text
Abstract:
Iterative algorithms are now widely used in all areas of signal processing and digital communications. In modern communication systems, iterative algorithms are notably used for decoding low-density parity-check (LDPC) codes, a popular class of error-correction codes known to have exceptional error-rate performance under iterative decoding. In the more recent field known as compressed sensing, iterative algorithms are used as a reconstruction method to recover a sparse signal from a linear set of measurements. This work primarily deals with the development of low-complexity iterative algorithms for the two aforementioned fields, namely the design of low-complexity decoding algorithms for LDPC codes, and the development and analysis of a low-complexity reconstruction algorithm for compressed sensing.

In the first part of this dissertation, we focus on decoding algorithms for LDPC codes. It is now well known that LDPC codes suffer from an error-floor phenomenon in spite of their exceptional performance. This phenomenon originates from the failures of traditional iterative decoders, like belief propagation (BP), on certain low-noise configurations. Recently, a novel class of decoders, called finite alphabet iterative decoders (FAIDs), was proposed with the capability of surpassing BP in the error-floor region at a much lower complexity. We show that numerous FAIDs can be designed, and that among them only a few have the ability to surpass traditional decoders in the error-floor region. In this work, we focus on the problem of selecting good FAIDs for column-weight-three codes over the binary symmetric channel. Traditional methods for decoder selection use asymptotic techniques such as density evolution, but the resulting decoders do not guarantee good performance for finite-length codes, especially in the error-floor region. Instead we propose a methodology to identify FAIDs with good error-rate performance in the error floor. This methodology relies on the knowledge of potentially harmful topologies that could be present in a code, and uses the concept of noisy trapping sets. Numerical results show that FAIDs selected with our methodology outperform BP in the error floor on a wide range of codes. Moreover, first results on column-weight-four codes demonstrate the potential of such decoders on codes which are more widely used in practice, for example in storage systems.

In the second part of this dissertation, we address iterative reconstruction algorithms for compressed sensing. This field has attracted a lot of attention since Donoho's seminal work, due to the promise of sampling a sparse signal with fewer samples than the Nyquist theorem would suggest. Iterative algorithms have been proposed for compressed sensing in order to tackle the complexity of the optimal reconstruction methods, which notably use linear programming. In this work, we modify and analyze a low-complexity reconstruction algorithm that we refer to as the interval-passing algorithm (IPA), which uses sparse matrices as measurement matrices. Similar to what has been done for decoding algorithms in coding theory, we analyze the failures of the IPA and link them to the stopping sets of the binary representation of the sparse measurement matrices used. The performance of the IPA makes it a good trade-off between complex ℓ₁-minimization reconstruction and very simple verification decoding. The measurement process also has lower complexity, since sparse measurement matrices are used. Comparison with another type of message-passing algorithm, called approximate message passing, shows that the IPA can achieve superior performance with lower complexity. We also demonstrate that the IPA can have practical applications, especially in spectroscopy.
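As a rough illustration of the interval-passing idea for non-negative signals and a sparse binary measurement matrix: each measurement node sends its neighbors lower/upper bounds implied by y and the other neighbors' bounds, and each signal node intersects the intervals it receives. The sketch below follows the commonly published IPA updates; initialization details and stopping rules are simplified assumptions.

```python
import numpy as np

def interval_passing(A, y, iters=50):
    """IPA sketch: bound non-negative sparse x satisfying y = A x,
    with A binary and sparse. Returns per-coordinate (lower, upper)."""
    m, n = A.shape
    chk = [np.nonzero(A[c])[0] for c in range(m)]    # variables of each check
    var = [np.nonzero(A[:, v])[0] for v in range(n)] # checks of each variable
    lo = np.zeros((m, n))                            # check-to-variable lower
    hi = np.full((m, n), np.inf)                     # check-to-variable upper
    for c in range(m):
        hi[c, chk[c]] = y[c]                         # initially x_v <= y_c
    for _ in range(iters):
        # variable-to-check: intersect intervals from the *other* checks
        L = np.zeros((m, n)); U = np.full((m, n), np.inf)
        for v in range(n):
            for c in var[v]:
                others = [c2 for c2 in var[v] if c2 != c]
                L[c, v] = max((lo[c2, v] for c2 in others), default=0.0)
                U[c, v] = min((hi[c2, v] for c2 in others), default=np.inf)
        # check-to-variable: bounds implied by y_c and the other variables
        for c in range(m):
            for v in chk[c]:
                others = [v2 for v2 in chk[c] if v2 != v]
                lo[c, v] = max(0.0, y[c] - sum(U[c, v2] for v2 in others))
                hi[c, v] = y[c] - sum(L[c, v2] for v2 in others)
    lower = np.array([max((lo[c, v] for c in var[v]), default=0.0) for v in range(n)])
    upper = np.array([min((hi[c, v] for c in var[v]), default=np.inf) for v in range(n)])
    return lower, upper  # recovery succeeded where lower == upper
```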
18

Alnawayseh, Saif Enad Ahmad. "Coded cooperative diversity with low complexity encoding and decoding algorithms." Thesis, Swansea University, 2011. https://cronfa.swan.ac.uk/Record/cronfa42648.

Full text
Abstract:
One of the main concerns in designing wireless communication systems is to provide sufficiently large data rates while considering the different aspects of implementation complexity, which is often constrained by the limited battery power and signal processing capability of the devices. Thus, in this thesis, low-complexity encoding and decoding algorithms are investigated for systems with transmission diversity, particularly receiver diversity and cooperative diversity. Design guidelines for such systems are provided to achieve a good trade-off between implementation complexity and performance.

Order-statistics-based list decoding techniques for linear binary block codes of small to medium block length are investigated to reduce the complexity of coded systems. The original order-statistics decoding (OSD) is generalized by assuming segmentation of the most reliable independent positions of the received bits. The segmentation is shown to overcome several drawbacks of the original order-statistics decoding. The complexity of the OSD is further reduced by assuming a partial ordering of the received bits in order to avoid the highly complex Gaussian elimination. The bit-error-rate performance and the decoding-complexity trade-off of the proposed decoding algorithms are studied by computer simulations. Numerical examples show that, in some cases, the proposed decoding schemes are superior to the original order-statistics decoding in terms of both bit-error-rate performance and decoding complexity. The complexity of the order-statistics-based list decoding algorithms for linear block codes and binary block turbo codes (BTC) is further reduced by employing highly reliable cyclic redundancy check (CRC) bits. The results show that sending CRC bits for many segments is the most effective technique for reducing the complexity.

The coded cooperative diversity is compared with conventional receiver coded diversity in terms of the pairwise error probability and the overall bit error rate (BER). The expressions for the pairwise error probabilities are obtained analytically and verified by computer simulations. The performance of cooperative diversity is found to be strongly dependent on the relay location. Using the analytical as well as extensive numerical results, the geographical areas of relay locations are obtained, for small to medium signal-to-noise ratio values, such that cooperative coded diversity outperforms receiver coded diversity. However, for sufficiently large signal-to-noise ratio (SNR) values, or if path-loss attenuations are not considered, receiver coded diversity always outperforms cooperative coded diversity. The obtained results have important implications for the deployment of next-generation cellular systems supporting cooperative as well as receiver diversity.
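For orientation, the order-0 version of ordered-statistics decoding that the thesis generalizes works as follows: sort positions by reliability, Gaussian-eliminate the generator matrix onto the most reliable independent positions, and re-encode the hard decisions taken there. A minimal sketch (the thesis's segmented and partially-ordered variants refine exactly these steps):

```python
import numpy as np

def osd_order0(G, llr):
    """Order-0 ordered-statistics decoding (sketch).
    G: (k, n) binary generator matrix, llr: channel LLRs (positive -> bit 0)."""
    k, n = G.shape
    perm = np.argsort(-np.abs(llr))           # positions, most reliable first
    Gp = (G[:, perm] % 2).astype(np.uint8)
    # GF(2) Gaussian elimination: greedily take reliable independent columns
    pivots, r = [], 0
    for col in range(n):
        rows = np.nonzero(Gp[r:, col])[0]
        if len(rows) == 0:
            continue
        p = rows[0] + r
        Gp[[r, p]] = Gp[[p, r]]               # bring pivot row up
        for i in range(k):                    # clear the column elsewhere
            if i != r and Gp[i, col]:
                Gp[i] ^= Gp[r]
        pivots.append(col)
        r += 1
        if r == k:
            break
    hard = (np.asarray(llr)[perm] < 0).astype(np.uint8)
    coeff = hard[pivots]                      # info bits on the reliable basis
    cw_perm = (coeff.astype(int) @ Gp.astype(int)) % 2   # re-encode
    cw = np.empty(n, np.uint8)
    cw[perm] = cw_perm                        # undo the reliability permutation
    return cw
```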
19

He, Qie. "Topics in discrete optimization: models, complexity and algorithms." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50237.

Full text
Abstract:
In this dissertation we examine several discrete optimization problems through the perspectives of modeling, complexity and algorithms.

We first provide a probabilistic comparison of split and type 1 triangle cuts for mixed-integer programs with two rows and two integer variables in terms of cut coefficients and volume cutoff. Under a specific probabilistic model of the problem parameters, we show that for the above measure, the probability that a split cut is better than a type 1 triangle cut is higher than the probability that a type 1 triangle cut is better than a split cut. The analysis also suggests some guidelines on when type 1 triangle cuts are likely to be more effective than split cuts and vice versa.

We next study a minimum concave cost network flow problem over a grid network. We give a polytime algorithm to solve this problem when the number of echelons is fixed. We show that the problem is NP-hard when the number of echelons is an input parameter. We also extend our result to grid networks with backward and upward arcs. Our result unifies the complexity results for several models in production planning and green recycling, including the lot-sizing model, and gives the first polytime algorithm for some problems whose complexities were not known before.

Finally, we examine how much complexity randomness will bring to a simple combinatorial optimization problem. We study a problem called the sell or hold problem (SHP). SHP is to sell k out of n indivisible assets over two stages, with known first-stage prices and random second-stage prices, to maximize the total expected revenue. Although the deterministic version of SHP is trivial to solve, we show that SHP is NP-hard when the second-stage prices are realized as a finite set of scenarios. We show that SHP is polynomially solvable when the number of scenarios in the second stage is constant. A max{1/2, k/n}-approximation algorithm is presented for the scenario-based SHP.
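To make the scenario-based sell-or-hold problem concrete, the brute-force sketch below enumerates the stage-one sell set; once a scenario is revealed in stage two, it is optimal to sell the highest-priced remaining assets. This is exponential in n and purely illustrative of the model (hypothetical data layout), not the approximation algorithm from the thesis.

```python
from itertools import combinations

def shp_brute_force(p1, scenarios, k):
    """p1: first-stage prices (length n); scenarios: list of (prob, prices).
    Sell exactly k of n assets over the two stages; maximize E[revenue]."""
    n = len(p1)
    best = float("-inf")
    for j in range(k + 1):                      # number sold in stage one
        for S1 in combinations(range(n), j):
            rev1 = sum(p1[i] for i in S1)
            exp2 = 0.0
            for prob, p2 in scenarios:          # stage two: sell best k-j left
                left = sorted((p2[i] for i in range(n) if i not in S1),
                              reverse=True)
                exp2 += prob * sum(left[:k - j])
            best = max(best, rev1 + exp2)
    return best

# usage: shp_brute_force([3, 1], [(0.5, [5, 0]), (0.5, [0, 0])], k=1) -> 3.0
```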
20

Cai, Fang. "Low-complexity Decoding Algorithms and Architectures for Non-binary LDPC Codes." Case Western Reserve University School of Graduate Studies / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=case1372338108.

Full text
21

Valkanova, Elena. "Algorithms for simple stochastic games." [Tampa, Fla] : University of South Florida, 2009. http://purl.fcla.edu/usf/dc/et/SFE0003070.

Full text
22

Nagao, Atsuki. "Computational Complexity of Tree Evaluation Problems and Branching Program Satisfiability Problems." 京都大学 (Kyoto University), 2015. http://hdl.handle.net/2433/199453.

Full text
23

Mercat, Alexandre. "Complexity Control for Low-Power HEVC Encoding." Thesis, Rennes, INSA, 2018. http://www.theses.fr/2018ISAR0035/document.

Full text
Abstract:
The Internet of Things (IoT) is now a reality. Forthcoming applications will boost mobile video demand to an unprecedented level. The induced increase in computational complexity is a challenge when executing new video coding standards in real time on embedded platforms, which are limited in computing, memory, and energy. New 4K UHD and 360-degree video contents, coming with high spatial (8K, 16K) and temporal (120 fps) resolutions, further complicate the problem. In this context, codecs such as HEVC (High Efficiency Video Coding) must be worked on to reduce their complexity while preserving bitrate and image quality. The bounded energy density of embedded systems' batteries requires designers to propose new methods for scaling and controlling the complexity and energy consumption of HEVC codecs.

This document presents a set of studies aiming at scaling and controlling the complexity, and therefore the energy consumption, of HEVC intra encoding. Two methods of quad-tree partitioning prediction in "one shot" are proposed: one based on a variance-aware statistical approach and one based on machine learning using data-mining classifiers. From the obtained prediction, a generic tunable complexity scheme for HEVC encoding is introduced. It expands the search space around the original partitioning prediction and allocates complexity within a frame while minimizing performance loss in terms of bitrate and visual quality. Finally, a real-time control system is created that dynamically manages the encoding process to keep the encoding complexity under a specific target. It demonstrates the applicability of the major contributions of this document.
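A toy version of the variance-aware split prediction: compare the variance of a CU's pixels with the variances of its four sub-blocks, and predict a quad-tree split when the sub-blocks are much more homogeneous than the whole. The thresholds and recursion policy here are invented for illustration; the thesis calibrates and learns such decision rules.

```python
import numpy as np

def predict_split(block, ratio_thresh=2.0, min_size=8):
    """block: (s, s) grayscale CU. Predict whether to split into 4 sub-CUs."""
    s = block.shape[0]
    if s <= min_size:
        return False
    h = s // 2
    subs = [block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:]]
    v_whole = block.var()
    v_subs = np.mean([b.var() for b in subs])
    # heterogeneous CU (texture/edges straddling sub-blocks): split likely
    return v_whole > ratio_thresh * max(v_subs, 1e-6)

def predict_partition(block, out, x=0, y=0):
    """Recursively collect predicted CU rectangles as (x, y, size)."""
    s = block.shape[0]
    if predict_split(block):
        h = s // 2
        for dy in (0, h):
            for dx in (0, h):
                predict_partition(block[dy:dy + h, dx:dx + h], out, x + dx, y + dy)
    else:
        out.append((x, y, s))

# usage: cus = []; predict_partition(ctu_64x64, cus)
```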
24

Milliner, David Louis. "Low-complexity list detection algorithms for the multiple-input multiple-output channel." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/37248.

Full text
Abstract:
Modern communication systems demand ever-increasing data rates. Meeting this increased demand is not easy due to regulation and fundamental physical constraints. The utilization of more than one antenna at both the transmitter and receiver produces a multiple-input multiple-output (MIMO) channel, thereby enabling (under certain channel conditions) increased data rates without the need for increased bandwidth or transmission power. Concurrent with this increase in data rate is an increase in the receiver's computational complexity which, for a brute-force detector, grows exponentially. For receivers that possess error-correcting capabilities, the problem of constructing a detector with low computational complexity that allows for near-exact a posteriori detection is challenging for transmission schemes employing even a modest number of transmit antennas and modulation alphabet sizes.

The focus of this dissertation is the construction of MIMO detection algorithms with low and fixed computational complexity. Specifically, the detection algorithms in this dissertation generate a list of potential transmission vectors, resulting in realizable communication receivers with low and fixed computational complexity combined with low error-rate performance in both coded and uncoded systems. A key contribution in this dissertation is a breadth-first fixed-complexity algorithm known as the smart-ordered and candidate-adding algorithm that achieves a desirable performance-complexity tradeoff. This algorithm requires only a single pass of a search tree to find its list of transmission vectors. We then construct a framework within which we classify a large class of breadth-first detection algorithms.

The design of receiver algorithms for MIMO systems employing space-time codes and error correction is an important area of study. In this dissertation we propose a low and fixed computational complexity algorithm for an increasingly significant algebraic space-time code known as the golden code. The notion of computational complexity is critical in the design of practical MIMO receivers. We provide an analysis of computational complexity in relation to list-based soft-output detection where, in some instances, bounds are placed on the computational complexity of MIMO detection. For this analysis we utilize a metric known as the number of branch metric computations. The value at which the log-likelihood ratio (LLR) of conditional probabilities for a transmitted bit being either a 1 or a 0 is 'clipped' has an impact on a system's error-rate performance. We propose a new approach for determining LLR clipping levels that, in contrast to prior approaches which clip to a predetermined fixed LLR clipping level, exploits channel state information to improve the error-rate performance of suboptimal detection algorithms.

Orthogonal frequency-division multiplexing (OFDM) is an effective technique for combating frequency-selective wideband communication channels. It is common practice for MIMO-OFDM detectors to implement the same detector at each subcarrier, in which case the overall performance is dominated by the weakest subcarrier. We propose a hard-output list detection receiver strategy for MIMO-OFDM channels called nonuniform computational complexity allocation, whereby the receiver adapts the computational resources of the MIMO detector at each subcarrier to match a metric of the corresponding channel quality. The proposed nonuniform algorithm improves performance over uniform allocation.
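For reference, this is how a list detector typically turns its candidate list into clipped bit LLRs under the max-log approximation: for each bit, take the best metric among candidates with that bit at 0 and at 1, clip the difference, and substitute the clipping level when one hypothesis is absent from the list. A generic sketch; the dissertation's contribution is choosing the level lmax from channel state rather than fixing it.

```python
import numpy as np

def list_llr(cand_bits, cand_metrics, lmax):
    """cand_bits: (L, nbits) 0/1 candidate vectors from a list detector.
    cand_metrics: (L,) Euclidean-distance metrics (smaller = more likely).
    Returns clipped max-log LLRs; positive values favor bit 0."""
    cand_bits = np.asarray(cand_bits)
    m = np.asarray(cand_metrics, float)
    llr = np.zeros(cand_bits.shape[1])
    for b in range(cand_bits.shape[1]):
        m0 = m[cand_bits[:, b] == 0]
        m1 = m[cand_bits[:, b] == 1]
        if len(m0) == 0:
            llr[b] = -lmax                  # only bit=1 hypotheses in the list
        elif len(m1) == 0:
            llr[b] = lmax                   # only bit=0 hypotheses in the list
        else:
            llr[b] = np.clip(m1.min() - m0.min(), -lmax, lmax)
    return llr
```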
25

Powell, David Richard. "Algorithms for sequence alignment." Monash University, School of Computer Science and Software Engineering, 2001. http://arrow.monash.edu.au/hdl/1959.1/8051.

Full text
26

Oliveira, Igor Carboni. "Complexidade computacional e o problema P vs NP." [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275804.

Full text
Abstract:
Advisor: Arnaldo Vieira Moura
Master's dissertation, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Computational complexity theory is the field of theoretical computer science that aims to establish limits on the efficiency of algorithms. The main open question in computational complexity is the P vs NP problem. Intuitively, it states that, for several important computational problems, there is no algorithm that performs better than a trivial exhaustive search. We present an introduction to the subject, followed by more recent and advanced results. In particular, the diagonalization method is discussed in detail. Although it is a classical technique in computational complexity, it is so far the only method that has been able to separate strong complexity classes. Some of the most important results in computational complexity theory have been proven by diagonalization. In particular, Hartmanis and Stearns [54, 104] proved that, given more resources, one can solve more computational problems. These results are known as hierarchy theorems. We present a generalization of the deterministic hierarchy theorems, recovering the classical results proved by Hartmanis and Stearns as corollaries. This is the first time that such a unified treatment has appeared in the literature.
Master's degree
Theory of Computation
Master in Computer Science
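For context, the deterministic time hierarchy theorem of Hartmanis and Stearns, which the dissertation generalizes, is usually stated as follows (a standard formulation, not quoted from the dissertation itself):

```latex
\textbf{Time Hierarchy Theorem.} Let $f, g : \mathbb{N} \to \mathbb{N}$ with
$g$ time-constructible. If $f(n)\,\log f(n) = o(g(n))$, then
\[
  \mathsf{DTIME}\big(f(n)\big) \subsetneq \mathsf{DTIME}\big(g(n)\big),
\]
i.e., there is a language decidable in time $O(g(n))$ but not in time $O(f(n))$.
```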
27

Chen, Jinghu. "Reduced complexity decoding algorithms for low-density parity check codes and turbo codes." Thesis, University of Hawaii at Manoa, 2003. http://hdl.handle.net/10125/6885.

Full text
Abstract:
Iterative decoding techniques have been receiving more and more attention since the invention of turbo codes and the rediscovery of low-density parity-check (LDPC) codes. An important aspect in the study of iterative decoding is the tradeoff between decoding performance and complexity. For both LDPC codes and turbo codes, optimum decoding algorithms can provide very good performance. However, complicated operations are involved in the optimum decoding and prohibit the wide application of LDPC codes and turbo codes in next-generation digital communication and storage systems. Although there exist sub-optimum decoding algorithms for both LDPC codes and turbo codes, decoding performance is degraded with the sub-optimum algorithms, and under some circumstances the gap is very large. This research investigates reduced-complexity decoding algorithms for LDPC codes and turbo codes.

For decoding LDPC codes, new algorithms, namely the normalized BP-based algorithm and the offset BP-based algorithm, are proposed. For these two reduced-complexity algorithms, density evolution algorithms are derived and used to determine the best decoder parameters associated with each of the algorithms. Numerical results show that the new algorithms can achieve near-optimum decoding performance for infinite code lengths, and simulation results reveal the same conclusion for short to medium code lengths. In addition to the advantage of low computational complexity, the two new algorithms are less subject to quantization errors and correlation effects than the optimum BP algorithm, and consequently are more suitable for hardware implementation. For a special kind of LDPC codes - the geometric LDPC codes - we propose the normalized APP-based algorithm, which is even more simplified yet can still achieve near-optimum performance.

For decoding turbo codes, two new sub-optimum decoding algorithms are proposed. The first is the bi-directional soft-output Viterbi algorithm (bi-SOVA), which is based on utilizing a backward SOVA decoding in addition to the conventional forward one, and can achieve better performance than the uni-directional SOVA. The second is the normalized Max-Log-MAP algorithm, which improves the performance of Max-Log-MAP decoding by scaling the soft outputs with some predetermined factors.
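The normalized BP-based algorithm is widely known today as normalized min-sum: the check-node update replaces BP's tanh rule with a minimum, scaled by a normalization factor to compensate for the min-sum overestimate. A sketch of that check-node update (the factor value is an assumed example; the thesis derives it via density evolution):

```python
import numpy as np

def normalized_min_sum_check(msgs, alpha=0.8):
    """Check-node update of normalized min-sum LDPC decoding.
    msgs: incoming variable-to-check LLRs (at least two); returns extrinsic outputs."""
    m = np.asarray(msgs, float)
    sgn = np.where(m < 0, -1.0, 1.0)
    total_sign = np.prod(sgn)
    mag = np.abs(m)
    order = np.argsort(mag)
    min1, min2 = mag[order[0]], mag[order[1]]
    out = np.empty_like(mag)
    for i in range(len(mag)):
        # excluding message i, the minimum magnitude of the others
        other_min = min2 if i == order[0] else min1
        out[i] = alpha * (total_sign * sgn[i]) * other_min
    return out
```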
28

Castura, Jeff. "Performance analysis and optimization of reduced complexity Low Density Parity Check decoding algorithms." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0017/MQ53426.pdf.

Full text
29

Goldar, Davila Alejandro. "Low-complexity algorithms for the fast and safe charge of Li-ion batteries." Doctoral thesis, Universite Libre de Bruxelles, 2021. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/320077.

Full text
Abstract:
This thesis proposes, validates, and compares low-complexity algorithms for the fast and safe charge and balance of Li-ion batteries, both for the single-cell case and for the case of a serially-connected string of battery cells. The proposed algorithms are based on a reduced-order electrochemical model (the Equivalent Hydraulic Model, EHM) and make use of constrained-control strategies to limit the main electrochemical degradation phenomena that may accelerate aging, namely lithium plating in the anode and solvent oxidation in the cathode. To avoid the computational intensiveness of solving an online optimization, as in the Model Predictive Control (MPC) framework, this thesis proposes the use of Reference Governor (RG) schemes. Variants of both Scalar Reference Governors (SRG) and Explicit Reference Governors (ERG) are developed to deal with the non-convex admissible region for the charge of a battery cell, while keeping a low computational burden. To evaluate the performance of the proposed techniques for the single-cell case, they are experimentally validated on commercial Turnigy LCO cells of 160 mAh at four different constant temperatures (10, 20, 30 and 40 °C).

In the second part of this thesis, the proposed charging strategies are extended to take into account the balancing of a serially-connected string of cells. To equalize possible mismatches, a centralized policy based on a shunting grid (active balancing) connects or disconnects the cells during the charge. After a preliminary analysis, a simple mixed-integer algorithm was proposed. Since this method is computationally inefficient due to the high number of scenarios to be evaluated, this thesis proposes a ratio-based algorithm built on a Pulse-Width Modulation (PWM) approach, which can be used within both MPC and RG schemes. The numerical validations of the proposed algorithms for the case of a string of four battery cells are carried out using a simulator based on a full-order electrochemical model. They show that the PWM-like approach charges all the cells within the pack in parallel, whereas the mixed-integer approach charges the battery cells sequentially, from the cell with the lowest state of charge to the ones with the highest states of charge. On the basis of the simulations, an algorithm based on a mixed logic that allows charging in a "sequential parallel" fashion is proposed. Conclusions and future directions of research are given at the end of the thesis.
Doctorate in Engineering Sciences and Technology
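The reference-governor principle used above can be sketched on a toy model: at each step, move the applied current reference toward the desired fast-charge current only as far as a model-based prediction keeps the constrained variable inside its admissible set. Everything numerical below (the two-state linear cell model, gains, limits) is invented for illustration; the thesis uses the EHM and electrochemically motivated constraints.

```python
import numpy as np

def rg_charge_step(v_prev, r_des, x, step_model, admissible, horizon=200):
    """Scalar reference-governor update: v = v_prev + kappa * (r_des - v_prev),
    with the largest kappa in [0, 1] whose predicted trajectory stays admissible."""
    for kappa in np.linspace(1.0, 0.0, 21):          # coarse line search
        v = v_prev + kappa * (r_des - v_prev)
        xp, ok = x.copy(), True
        for _ in range(horizon):                     # constant-input prediction
            xp = step_model(xp, v)
            if not admissible(xp):
                ok = False
                break
        if ok:
            return v
    return v_prev                                     # keep the last safe reference

# toy two-state "bulk/surface" charge model (hypothetical parameters)
A = np.array([[0.999, 0.001], [0.02, 0.97]])
B = np.array([0.0005, 0.004])
step_model = lambda x, i: A @ x + B * i
admissible = lambda x: x[1] <= 0.8                   # surface-SOC proxy limit
```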
30

Golovins, Eugene. "Low-Complexity Algorithms for Channel Estimation in Optimised Pilot-Assisted Wireless OFDM Systems." Doctoral thesis, University of Cape Town, 2009. http://hdl.handle.net/11427/5213.

Full text
Abstract:
Orthogonal frequency division multiplexing (OFDM) has recently become a dominant transmission technology considered for the next generation fixed and mobile broadband wireless communication systems. OFDM has an advantage of lessening the severe effects of the frequency-selective (multipath) fading due to the band splitting into relatively flat fading subchannels, and allows for low-complexity transceiver implementation based on the fast Fourier transform algorithms. Combining OFDM modulation with multilevel frequency-domain symbol mapping (e.g., QAM) and spatial multiplexing (SM) over the multiple-input multiple-output (MIMO) channels, can theoretically achieve near Shannon capacity of the communication link. However, the high-rate and spectrumefficient system implementation requires coherent detection at the receiving end that is possible only when accurate channel state information (CSI) is available. Since in practice, the response of the wireless channel is unknown and is subject to random variation with time, the receiver typically employs a channel estimator for CSI acquisition. The channel response information retrieved by the estimator is then used by the data detector and can also be fed back to the transmitter by means of in-band or out-of-band signalling, so the latter could adapt power loading, modulation and coding parameters according to the channel conditions. Thus, design of an accurate and robust channel estimator is a crucial requirement for reliable communication through the channel, which is selective in time and frequency. In a MIMO configuration, a separate channel estimator has to be associated with each transmit/receive antenna pair, making the estimation algorithm complexity a primary concern. Pilot-assisted methods, relying on the insertion of reference symbols in certain frequencies and time slots, have been found attractive for identification of the doubly-selective radio channels from both the complexity and performance standpoint. In this dissertation, a family of the reduced-complexity estimators for the single and multiple-antenna OFDM systems is developed. The estimators are based on the transform-domain processing and have the same order of computational complexity, irrespective of the number of pilot subcarriers and their positioning. The common estimator structure represents a cascade of successive small-dimension filtering modules. The number of modules, as well as their order inside the cascade, is determined by the class of the estimator (one or two-dimensional) and availability of the channel statistics (correlation and signal-to-noise power ratio). For fine precision estimation in the multipath channels with statistics not known a priori, we propose recursive design of the filtering modules. Simulation results show that in the steady state, performance of the recursive estimators approaches that of their theoretical counterparts, which are optimal in the minimum mean square error (MMSE) sense. In contrast to the majority of the channel estimators developed so far, our modular-type architectures are suitable for the reconfigurable OFDM transceivers where the actual channel conditions influence the decision of what class of filtering algorithm to use, and how to allot pilot subcarrier positions in the band. In the pilot-assisted transmissions, channel estimation and detection are performed separately from each other over the distinct subcarrier sets. The estimator output is used only to construct the detector transform, but not as the detector input. 
Since the performance of both channel estimation and detection depends on the signal-to-noise power ratio (SNR) at the corresponding subcarriers, there is a dilemma of optimal power allocation between the data and the pilot symbols, as these are conflicting requirements under the total transmit power constraint. The problem is exacerbated by the variety of channel estimators: each kind of estimation algorithm is characterised by its own SNR gain, which in general can vary depending on the channel correlation. In this dissertation, we optimise pilot-data power allocation for the developed low-complexity one- and two-dimensional MMSE channel estimators. The resultant contribution is a set of closed-form analytical expressions for the upper bound (a suboptimal approximate value) on the optimal pilot-to-data power ratio (PDR) as a function of a number of design parameters (number of subcarriers, number of pilots, number of transmit antennas, effective order of the channel model, maximum Doppler shift, SNR, etc.). The resultant PDR equations can be applied to MIMO-OFDM systems with an arbitrary arrangement of pilot subcarriers, operating in an arbitrary multipath fading channel. These properties, and the relatively simple functional form of the derived analytical PDR expressions, are intended to alleviate the challenging task of on-the-fly optimisation of an adaptive SM-MIMO-OFDM system that is capable of adjusting its transmit signal configuration (e.g., block length, number of pilot subcarriers or antennas) according to the established channel conditions.
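For readers unfamiliar with pilot-assisted MMSE estimation, the following hedged sketch shows the core linear-algebra step for a single OFDM symbol (one-dimensional, frequency-only filtering). The subcarrier count, pilot grid and exponential power-delay profile are illustrative assumptions; the snippet does not reproduce the dissertation's modular cascade or its recursive design.

```python
import numpy as np

# Minimal LMMSE channel-estimation sketch for one OFDM symbol.
# Assumed parameters (illustrative only, not from the dissertation):
N = 64                          # subcarriers
pilots = np.arange(0, N, 8)     # comb-type pilot positions
snr_db = 20.0
sigma2 = 10 ** (-snr_db / 10)

# Exponential power-delay profile -> frequency-domain correlation matrix:
# E[H_i H_j*] = sum_l pdp[l] * exp(-2j*pi*(i-j)*l/N)
L = 8
pdp = np.exp(-np.arange(L) / 2.0)
pdp /= pdp.sum()
k = np.arange(N)
Rf = np.array([[np.sum(pdp * np.exp(-2j * np.pi * (i - j) * np.arange(L) / N))
                for j in k] for i in k])

# Simulate a channel realization and noisy pilot observations (unit pilots).
h_time = (np.random.randn(L) + 1j * np.random.randn(L)) * np.sqrt(pdp / 2)
H = np.fft.fft(h_time, N)
Y_p = H[pilots] + np.sqrt(sigma2 / 2) * (np.random.randn(len(pilots))
                                         + 1j * np.random.randn(len(pilots)))

# LMMSE interpolation from the pilot subcarriers to the full band:
# W = R_{f,p} (R_{p,p} + sigma^2 I)^{-1}
R_fp = Rf[:, pilots]
R_pp = Rf[np.ix_(pilots, pilots)]
W = R_fp @ np.linalg.inv(R_pp + sigma2 * np.eye(len(pilots)))
H_hat = W @ Y_p

print("NMSE:", np.linalg.norm(H_hat - H) ** 2 / np.linalg.norm(H) ** 2)
```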
APA, Harvard, Vancouver, ISO, and other styles
31

Sinnokrot, Mohanned Omar. "Space-time block codes with low maximum-likelihood decoding complexity." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31752.

Full text
Abstract:
Thesis (Ph.D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Barry, John; Committee Co-Chair: Madisetti, Vijay; Committee Member: Andrew, Alfred; Committee Member: Li, Ye; Committee Member: Ma, Xiaoli; Committee Member: Stuber, Gordon. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
32

Islam, Mohammad Tauhidul, and University of Lethbridge Faculty of Arts and Science. "Approximation algorithms for minimum knapsack problem." Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Mathematics and Computer Science, c2009, 2009. http://hdl.handle.net/10133/1304.

Full text
Abstract:
The knapsack problem has been widely studied in computer science for years. There exist several variants of the problem, with zero-one maximum knapsack in one dimension being the simplest. In this thesis we study several existing approximation algorithms for the minimization version of the problem and propose a scaling-based fully polynomial time approximation scheme for the minimum knapsack problem. We compare the performance of this algorithm with existing algorithms. Our experiments show that the proposed algorithm runs fast and has a good performance ratio in practice. We also conduct extensive experiments on the data provided by Canadian Pacific Logistics Solutions during the MITACS internship program. We propose a scaling-based ε-approximation scheme for the multidimensional (d-dimensional) minimum knapsack problem and compare its performance with a generalization of a greedy algorithm for minimum knapsack in d dimensions. Our experiments show that the ε-approximation scheme exhibits a good performance ratio in practice.
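For orientation, here is a minimal, hedged sketch of a greedy heuristic for the minimum knapsack problem (cover a demand at small total cost); the function name and the tie-breaking rule are illustrative assumptions, not the thesis's scaling-based approximation scheme.

```python
def min_knapsack_greedy(costs, sizes, demand):
    """Greedy heuristic for minimum knapsack: pick items by cost/size ratio
    until the total size covers `demand`.  Illustration only -- not the
    scaling-based FPTAS proposed in the thesis.  Assumes the items can
    cover the demand at all."""
    items = sorted(range(len(costs)), key=lambda i: costs[i] / sizes[i])
    chosen, covered, total = [], 0, 0
    for i in items:
        if covered >= demand:
            break
        chosen.append(i)
        covered += sizes[i]
        total += costs[i]
    # Also consider the single cheapest item that covers the demand alone.
    singles = [i for i in range(len(costs)) if sizes[i] >= demand]
    if singles:
        best = min(singles, key=lambda i: costs[i])
        if costs[best] < total:
            return [best], costs[best]
    return chosen, total

print(min_knapsack_greedy([4, 3, 5, 6], [3, 2, 4, 6], 7))   # ([3, 2], 11)
```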
x, 85 leaves ; 29 cm
APA, Harvard, Vancouver, ISO, and other styles
33

Liang, Ying, and 梁瑩. "A study on low complexity near-maximum likelihood spherical MIMO decoders." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B43703987.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Liang, Ying. "A study on low complexity near-maximum likelihood spherical MIMO decoders." Click to view the E-thesis via HKUTO, 2010. http://sunzi.lib.hku.hk/hkuto/record/B43703987.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Krug, Marcus [Verfasser], and D. [Akademischer Betreuer] Wagner. "Combinatorial and Geometric Aspects of Computational Network Construction : Algorithms and Complexity / Marcus Krug. Betreuer: D. Wagner." Karlsruhe : KIT-Bibliothek, 2012. http://d-nb.info/1019361921/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Vance, Pamela H. "Crew scheduling, cutting stock, and column generation : Solving huge integer programs." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/23333.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Carlsson, Per. "Algorithms for Electronic Power Markets." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-4668.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Kim, Eun Jung. "Parameterized algorithms on digraph and constraint satisfaction problems." Thesis, Royal Holloway, University of London, 2010. http://repository.royalholloway.ac.uk/items/4e3a1971-6e98-97a9-8e4f-9e1fdc76066a/9/.

Full text
Abstract:
While polynomial-time approximation algorithms remain a dominant notion in tackling computationally hard problems, the framework of parameterized complexity has been emerging rapidly in recent years. Roughly speaking, the analytic framework of parameterized complexity attempts to grasp the difference between problems which admit O(c^k · poly(n))-time algorithms, such as Vertex Cover, and problems like Dominating Set, for which essentially brute-force O(n^k)-time algorithms are the best known so far. Problems of the former type are said to be fixed-parameter tractable (FPT), and those of the latter type are regarded as intractable. In this thesis, we investigate some problems on directed graphs and a number of constraint satisfaction problems (CSPs) from the parameterized perspective. We develop fixed-parameter algorithms for some digraph problems. In particular, we focus on the basic problem of finding a tree with a certain property embedded in a given digraph. New or improved fpt-algorithms are presented for finding an out-branching with many or few leaves (the Directed Maximum Leaf and Directed Minimum Leaf problems). For acyclic digraphs, Directed Maximum Leaf is shown to admit a kernel with a linear number of vertices. We suggest a kernel for Directed Minimum Leaf with a quadratic number of vertices. An improved fpt-algorithm for finding a k-Out-Tree is presented, and this algorithm is incorporated as a subroutine to obtain a better algorithm for Directed Minimum Leaf. In the second part of this thesis, we concentrate on several CSPs in which we want to maximize the number of satisfied constraints, and consider parameterization "above tight lower bound" for these problems. To deal with this type of parameterization, we present a new method called SABEM, using a probabilistic approach and applying harmonic analysis on pseudo-boolean functions. Using SABEM we show that a number of CSPs admit polynomial kernels, thus being fixed-parameter tractable. Moreover, we suggest some problem-specific combinatorial approaches to Max-2-Sat and a wide special class of Max-Lin2, which lead to kernels of smaller size than what can be obtained using SABEM for the respective problems.
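As a concrete illustration of the O(c^k · poly(n)) running times that define fixed-parameter tractability, here is the textbook bounded-search-tree algorithm for Vertex Cover (the abstract's own example of an FPT problem); this is generic material, not an algorithm from the thesis.

```python
def vertex_cover_at_most_k(edges, k):
    """Decide whether the graph has a vertex cover of size <= k in
    O(2^k * m) time: every edge must have an endpoint in the cover,
    so branch on the two endpoints of some uncovered edge."""
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    rest_u = [e for e in edges if u not in e]   # commit u to the cover
    rest_v = [e for e in edges if v not in e]   # commit v to the cover
    return (vertex_cover_at_most_k(rest_u, k - 1)
            or vertex_cover_at_most_k(rest_v, k - 1))

# A 4-cycle has a vertex cover of size 2 but not of size 1.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(vertex_cover_at_most_k(edges, 2), vertex_cover_at_most_k(edges, 1))
```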
APA, Harvard, Vancouver, ISO, and other styles
39

Ringh, Emil. "Low complexity algorithms for faster-than-Nyquist signaling : Using coding to avoid an NP-hard problem." Thesis, KTH, Optimeringslära och systemteori, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-136936.

Full text
Abstract:
This thesis is an investigation of what happens when communication links are pushed towards their limits and the data-bearing pulses are packed tighter in time than previously done. This is called faster-than-Nyquist (FTN) signaling, and it violates the Nyquist inter-symbol interference criterion, implying that the data pulses are no longer orthogonal and thus that the samples at the receiver depend on more than one of the transmitted symbols. Inter-symbol interference (ISI) has occurred, and its consequences are studied for the AWGN channel model. It is shown that in order to do maximum likelihood estimation on these samples the receiver faces an NP-hard problem. The standard algorithm for making good estimates in the ISI case is the Viterbi algorithm, but applied to a block of N bits with interference among K bits the complexity is O(N · 2^K), which limits its practical applicability. Here, a precoding scheme is proposed, together with a decoding, that reduces the estimation complexity. By applying the proposed precoding/decoding to a data block of length N, the estimation can be done in O(N²) operations, preceded by a single off-line O(N³) calculation. The precoding itself is also done in O(N²) operations, with a single off-line operation of O(N³) complexity. The strength of the precoding is shown in simulations. In the first, it was tested together with turbo codes of code rate 2/3 and block length 6000 bits. When sending 25% more data (FTN), the non-precoded case needed about 2.5 dB higher signal-to-noise ratio (SNR) to have the same error rate as the precoded case. When the precoded case performed without any block errors, the non-precoded case still had a block error rate of almost 1. We also studied the scenario of transmission with low latency and high reliability. Here, 600 bits were transmitted with a code rate of 2/3, and hence the target was to communicate 400 bits of data. Applying FTN with double packing, that is, transmitting 1200 bits during the same amount of time, it was possible to lower the code rate to 1/3, since only 400 bits of data were to be communicated. This technique greatly improves the robustness. When the FTN case performed error-free, the classical Nyquist case still had a block error rate of 0.19. To reach error-free performance, the Nyquist case needed 1.25 dB higher SNR compared to the precoded FTN case with the lower code rate.
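The thesis's scheme precodes at the transmitter; the sketch below is not that scheme but a hedged stand-in showing the complexity pattern the abstract describes: one O(N³) matrix computation off-line, then O(N²) linear operations per block, in place of the O(N · 2^K) trellis search. The block length, ISI taps and noise level are assumed values.

```python
import numpy as np

N, K = 64, 4                       # block length and ISI span (assumed)
g = np.exp(-1.0 * np.arange(K))    # assumed pulse-autocorrelation taps

# Symmetric banded Toeplitz Gram matrix of the non-orthogonal FTN pulses.
G = sum(np.diag(np.full(N - i, g[i]), k=i) for i in range(K))
G = G + G.T - np.diag(np.full(N, g[0]))

# Off-line, once: O(N^3) inversion of the Gram matrix.
G_inv = np.linalg.inv(G)

a = np.random.choice([-1.0, 1.0], size=N)    # BPSK data block
y = G @ a + 0.05 * np.random.randn(N)        # matched-filter samples with ISI

# Per block: O(N^2) linear estimate (zero-forcing) instead of the
# O(N * 2^K) Viterbi search over the ISI trellis.
a_hat = np.sign(G_inv @ y)
print("bit errors:", int(np.sum(a_hat != a)))
```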
APA, Harvard, Vancouver, ISO, and other styles
40

Prasad, Kintali Shiva. "Realizable paths and the NL vs L problem." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42770.

Full text
Abstract:
A celebrated theorem of Savitch [Savitch'70] states that NSPACE(S) is contained in DSPACE(S²). In particular, Savitch gave a deterministic algorithm to solve ST-Connectivity (an NL-complete problem) using O(log² n) space, implying that NL (non-deterministic logspace) is contained in DSPACE(log² n). While Savitch's theorem itself has not been improved in the last four decades, several graph connectivity problems have been shown to lie between L and NL, providing new insights into the space-bounded complexity classes. All the connectivity problems considered in the literature so far are essentially special cases of ST-Connectivity. In this dissertation, we initiate the study of auxiliary PDAs as graph connectivity problems, define sixteen different "graph realizability problems" and study their relationships. The complexity of these connectivity problems lies between L (logspace) and P (polynomial time). ST-Realizability, the most general graph realizability problem, is P-complete. 1DSTREAL(poly), the most specific graph realizability problem, is L-complete. As special cases of our graph realizability problems we define two natural problems, Balanced ST-Connectivity and Positive Balanced ST-Connectivity, that lie between L and NL. We study the space complexity of SGSLOGCFL, a graph realizability problem lying between L and LOGCFL. We define generalizations of graph squaring and transitive closure, present efficient parallel algorithms for SGSLOGCFL, and use the techniques of Trifonov to show that SGSLOGCFL is contained in DSPACE(log n · log log n). This implies that Balanced ST-Connectivity is contained in DSPACE(log n · log log n). We conclude with several interesting new research directions.
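The space-efficient algorithm behind Savitch's theorem can be sketched in a few lines: test reachability within a step budget by guessing a midpoint and recursing on both halves. This is the textbook construction the abstract cites, rendered (hedged) as plain Python for readability rather than as a space-bounded machine.

```python
def reachable(adj, u, v, steps):
    """Savitch's midpoint recursion: is there a path from u to v of length
    <= steps?  Recursion depth is O(log steps) and each frame stores O(1)
    vertices, giving the O(log^2 n) space bound behind NL ⊆ DSPACE(log² n)."""
    if steps == 0:
        return u == v
    if steps == 1:
        return u == v or v in adj[u]
    half = (steps + 1) // 2
    # Guess the midpoint w and verify both halves recursively.
    return any(reachable(adj, u, w, half) and reachable(adj, w, v, steps - half)
               for w in adj)

adj = {0: {1}, 1: {2}, 2: {3}, 3: set()}
print(reachable(adj, 0, 3, len(adj)))   # True: 0 -> 1 -> 2 -> 3
```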
APA, Harvard, Vancouver, ISO, and other styles
41

Xiang, Gang. "Fast algorithms for computing statistics under interval uncertainty with applications to computer science and to electrical and computer engineering /." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2007. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Starrett, Dean. "Optimal Alignment of Multiple Sequence Alignments." Diss., The University of Arizona, 2008. http://hdl.handle.net/10150/194840.

Full text
Abstract:
An essential tool in biology is the alignment of multiple sequences. Biologists use multiple sequence alignments for tasks such as predicting protein structure and function, reconstructing phylogenetic trees, and finding motifs. Constructing high-quality multiple alignments is computationally hard, both in theory and in practice, and is typically done using heuristic methods. The majority of state-of-the-art multiple alignment programs employ a form-and-polish strategy: in the construction phase, an initial multiple alignment is formed by progressively merging smaller alignments, starting with single sequences; then, in a local-search phase, the resulting alignment is polished by repeatedly splitting it into smaller alignments and re-merging. This merging of alignments, the basic computational problem in the construction and local-search phases of the best multiple alignment heuristics, is called the Aligning Alignments problem. Under the sum-of-pairs objective for scoring multiple alignments, this problem may seem to be a simple extension of two-sequence alignment. It is proven here, however, that with affine gap costs (which are recognized as necessary to get biologically informative alignments) the problem is NP-complete when gaps are counted exactly. Interestingly, this form of multiple alignment is polynomial-time solvable when we relax the exact count, showing that exact gap counts themselves are inherently hard in multiple sequence alignment. Unlike general multiple alignment, however, we show that Aligning Alignments with affine gap costs and exact counts is tractable in practice, by demonstrating an effective algorithm and a fast implementation. Our software AlignAlign is both time- and space-efficient on biological data. Computational experiments show that instances derived from standard benchmark suites can be optimally aligned with surprising efficiency, and experiments on simulated data show that time and space usage both scale well.
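To make the affine gap model concrete (a gap of length k costs gap_open + k·gap_extend), here is a hedged sketch of Gotoh's classic two-sequence DP. The parameter values are illustrative, and this is the easy pairwise case, not the NP-complete Aligning Alignments problem itself.

```python
def affine_align_cost(a, b, mismatch=1, gap_open=2, gap_extend=1):
    """Gotoh's O(len(a)*len(b)) pairwise alignment DP under affine gap
    costs -- the gap model the abstract identifies as essential for
    biologically informative alignments."""
    INF = float("inf")
    n, m = len(a), len(b)
    # M: a[i] aligned to b[j]; X: gap in b; Y: gap in a.
    M = [[INF] * (m + 1) for _ in range(n + 1)]
    X = [[INF] * (m + 1) for _ in range(n + 1)]
    Y = [[INF] * (m + 1) for _ in range(n + 1)]
    M[0][0] = 0
    for i in range(1, n + 1):
        X[i][0] = gap_open + i * gap_extend
    for j in range(1, m + 1):
        Y[0][j] = gap_open + j * gap_extend
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if a[i - 1] == b[j - 1] else mismatch
            M[i][j] = sub + min(M[i-1][j-1], X[i-1][j-1], Y[i-1][j-1])
            X[i][j] = min(M[i-1][j] + gap_open + gap_extend,   # open a gap
                          X[i-1][j] + gap_extend)              # extend it
            Y[i][j] = min(M[i][j-1] + gap_open + gap_extend,
                          Y[i][j-1] + gap_extend)
    return min(M[n][m], X[n][m], Y[n][m])

print(affine_align_cost("ACGT", "AGT"))   # one 1-symbol gap: cost 3
```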
APA, Harvard, Vancouver, ISO, and other styles
43

Lichtenstein, Joseph. "Low computational complexity bit error rate simulation for personal communications systems in multipath and fading environments." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-06102009-063138/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ji, Bo. "Design of Efficient Resource Allocation Algorithms for Wireless Networks: High Throughput, Small Delay, and Low Complexity." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354641556.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Mattsson, Per. "The Asymmetric Traveling Salesman Problem." Thesis, Uppsala universitet, Matematiska institutionen, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-132624.

Full text
Abstract:
This thesis is a survey on the approximability of the asymmetric traveling salesman problem with triangle inequality (ATSP). In the ATSP we are given a set of cities and a function that gives the cost of traveling between any pair of cities. The cost function must satisfy the triangle inequality, i.e. the cost of traveling from city A to city B cannot be larger than the cost of traveling from A to some other city C and then to B. However, we allow the cost function to be asymmetric, i.e. the cost of traveling from city A to city B may not equal the cost of traveling from B to A. The problem is then to find the cheapest tour that visits each city exactly once. This problem is NP-hard, and thus we are mainly interested in approximation algorithms. We study the repeated cycle cover heuristic by Frieze et al. We also study the Held-Karp heuristic, including the recent result by Asadpour et al. that gives a new upper bound on the integrality gap. Finally, we present the result of Papadimitriou and Vempala which shows that it is NP-hard to approximate the ATSP with a ratio better than 117/116.
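The first step of the Frieze et al. heuristic, computing a minimum-cost cycle cover of the complete digraph, reduces exactly to the assignment problem. A minimal, hedged sketch of that single step (with random illustrative costs, not data from the thesis):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# One round of cycle-cover computation: assign each city a successor at
# minimum total cost.  The heuristic would then contract the cycles and
# repeat; only this first round is shown.
rng = np.random.default_rng(0)
n = 6
C = rng.integers(1, 10, size=(n, n)).astype(float)
np.fill_diagonal(C, 1e9)                 # large cost forbids self-loops

rows, cols = linear_sum_assignment(C)    # cols[i] = successor of city i
print("cycle-cover cost:", C[rows, cols].sum())
print("successor map:", dict(zip(rows.tolist(), cols.tolist())))
```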
APA, Harvard, Vancouver, ISO, and other styles
46

Su, Chorng-Yann, and 蘇崇彥. "The Study of Low Computational Complexity Enhanced Zerotree Coding Algorithm." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/18205327675769242225.

Full text
Abstract:
Doctoral dissertation
National Chiao Tung University
Department of Electrical and Control Engineering
Academic year 88 (ROC calendar)
In this study, we present a low computational complexity enhanced zerotree coding algorithm and apply it to the coding of arbitrarily shaped images. The algorithm contains three main parts. The first part concerns transformation, including extending the shape-adaptive discrete wavelet transform (SA-DWT) to a translation-invariant SA-DWT (TI-SA-DWT) and presenting a fast convolution algorithm for filtering with odd-length symmetric filters. The second part concerns compression performance, including proposing a computationally simple adaptive multi-subband decomposition (AMSD) to elevate the peak signal-to-noise ratio (PSNR) and designing a band flag scheme (BFS) to speed up the execution of zerotree coding. The third part concerns low-memory implementation, including using recursive programming and using the top bits of coefficients as flags, so that the working memory can be reduced significantly and the cost of hardware implementation can be decreased. Experimental results show that, in transform time, the fast convolution algorithm reduces DWT time by at least 13% and inverse DWT time by 37%; in PSNR, AMSD further elevates the value by 0.1–4 dB; in the number of comparisons for searching zerotrees, BFS reduces the number of comparisons to about 60% of the original; and in memory requirements, for a 768×512 color image, the proposed method reduces the amount of memory by up to 5.3 MBytes compared with the state-of-the-art zerotree coder, set partitioning in hierarchical trees (SPIHT). The experimental results verify that the proposed algorithm is indeed practical and outperforms the other related techniques. Therefore, it can be extensively applied to the coding of various images.
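As a hedged illustration of the memory trick mentioned above (storing a zerotree flag in the top bit of each coefficient word instead of keeping a separate flag array), with the word width and field layout assumed rather than taken from the thesis:

```python
# Assumed layout: 16-bit words, top bit = flag, low 15 bits = |coefficient|.
FLAG = 1 << 15          # top bit of a 16-bit magnitude word
MAG = FLAG - 1          # low 15 bits hold the coefficient magnitude

def set_flag(word):
    return word | FLAG          # mark this coefficient (e.g., significant)

def has_flag(word):
    return bool(word & FLAG)    # test the flag without extra storage

def magnitude(word):
    return word & MAG           # recover the magnitude, flag stripped

coeff = set_flag(1234)
print(has_flag(coeff), magnitude(coeff))   # True 1234
```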
APA, Harvard, Vancouver, ISO, and other styles
47

Lin, Cheng, and 林成. "Low Computational Complexity Algorithm for H.264/AVC Video Frame-skipping Transcoding." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/61705566894631845556.

Full text
Abstract:
Master's thesis
National Central University
Institute of Communication Engineering
Academic year 100 (ROC calendar)
H.264/AVC supports many advanced compression techniques that achieve better coding performance than the previous standards, and applications of H.264/AVC have become popular and important in network multimedia services. However, because different networks might have different limitations on channel bandwidth, it is often necessary to reduce the bit-rate of the coded video bit streams. Frame-skipping transcoding is one way to solve the problem. The most straightforward method for frame-skipping transcoding is to decode the video stream and re-encode the reconstructed video sequences after frame skipping. Because H.264/AVC supports many advanced compression techniques, the computational complexity of this full decode-and-re-encode process is extremely high. Reusing information present in the original incoming video stream can reduce the computational complexity, but the coding performance then decreases significantly. Therefore, we propose hierarchical mode decision, motion vector composition, and multiple-reference-frame selection algorithms, and combine them for frame-skipping transcoding. The experimental results show that our proposed algorithms maintain the coding performance and speed up the transcoding process simultaneously.
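To illustrate motion vector composition across a skipped frame, here is a hedged sketch of dominant-vector selection, a commonly used composition rule; the function names, block size and selection rule are illustrative assumptions, not necessarily the thesis's algorithm.

```python
BLOCK = 16   # macroblock size in pixels (assumed)

def compose_mv(mv_in, block_x, block_y, skipped_frame_mvs):
    """Compose a motion vector across one skipped frame: add the MV of the
    block in the skipped frame that the displaced block overlaps most.
    mv_in points from the current frame into the skipped frame;
    skipped_frame_mvs[(bx, by)] maps block coordinates to that block's MV."""
    # Position the motion-compensated block lands on in the skipped frame.
    x = block_x * BLOCK + mv_in[0]
    y = block_y * BLOCK + mv_in[1]
    # Dominant block = the grid block containing the displaced block's centre.
    bx = (x + BLOCK // 2) // BLOCK
    by = (y + BLOCK // 2) // BLOCK
    mv_skip = skipped_frame_mvs.get((bx, by), (0, 0))
    return (mv_in[0] + mv_skip[0], mv_in[1] + mv_skip[1])

skipped = {(1, 0): (-2, 1)}
print(compose_mv((14, 3), 0, 0, skipped))   # -> (12, 4)
```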
APA, Harvard, Vancouver, ISO, and other styles
48

Huang, Shang-en, and 黃上恩. "Low Computational Complexity Algorithm for HEVC Intra Prediction with Support Vector Machine." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/09461317827186011784.

Full text
Abstract:
Master's thesis
National Central University
Department of Communication Engineering
Academic year 103 (ROC calendar)
High efficiency video coding (HEVC) is the state-of-the-art video coding standard, achieving better coding efficiency than the previous standard, H.264/AVC. HEVC organises coding units (CUs) in a quadtree-based structure to increase coding performance, and its intra prediction uses 35 prediction modes to predict more accurately, which also increases the coding complexity. Hence, in this thesis, an SVM-based fast intra CU depth decision algorithm is proposed to reduce the computational complexity. The criteria for early CU splitting and early termination are developed by applying an SVM to features extracted from the spatial and frequency domains, including the variance, the low-frequency AC value, and the neighboring CU depths. After feature extraction, the SVM predicts whether the current CU will be split or terminated. In intra prediction, we use the Sobel operator to estimate the direction of each prediction unit (PU) and take the neighboring modes as candidates for the rough mode decision; we also reduce the number of candidates for rate-distortion optimization. The experimental results show that our proposed algorithm achieves 48.3% encoding time saving on average without significant degradation of coding performance.
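A hedged sketch of how such an SVM-based split/terminate decision can be wired up with the features named above (variance, low-frequency AC value, neighboring CU depth); the training data and labelling rule here are synthetic placeholders, not the thesis's training procedure.

```python
import numpy as np
from sklearn import svm

# Synthetic training set: one row per CU, columns = the abstract's features.
rng = np.random.default_rng(1)
n = 200
variance = rng.uniform(0, 100, n)
low_freq_ac = rng.uniform(0, 50, n)
neighbor_depth = rng.integers(0, 4, n)
X = np.column_stack([variance, low_freq_ac, neighbor_depth])

# Toy labelling rule (assumed): textured blocks with deep neighbours split.
y = (0.5 * variance + low_freq_ac + 10 * neighbor_depth > 80).astype(int)

clf = svm.SVC(kernel="rbf").fit(X, y)
# Query one CU: high variance, strong low-frequency AC, deep neighbours.
print("predicted split?", clf.predict([[90.0, 40.0, 3]]))   # likely 1
```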
APA, Harvard, Vancouver, ISO, and other styles
49

Gorur, Pushkar. "Bitrate Reduction Techniques for Low-Complexity Surveillance Video Coding." Thesis, 2016. http://etd.iisc.ernet.in/handle/2005/2681.

Full text
Abstract:
High resolution surveillance video cameras are invaluable resources for effective crime prevention and forensic investigations. However, the increasing communication bandwidth requirements of high definition surveillance videos severely limit the number of cameras that can be deployed. Higher bitrates also increase operating expenses due to higher data communication and storage costs. Hence, it is essential to develop low complexity algorithms which reduce the data rate of the compressed video stream without affecting image fidelity. In this thesis, a computer vision aided H.264 surveillance video encoder and four associated algorithms are proposed to reduce the bitrate. The proposed techniques are (I) speeded-up foreground segmentation, (II) skip decision, (III) reference frame selection and (IV) face region-of-interest (ROI) coding. In the first part of the thesis, a modification to the adaptive Gaussian Mixture Model (GMM) based foreground segmentation algorithm is proposed to reduce computational complexity. This is achieved by replacing expensive floating point computations with low cost integer operations. To maintain accuracy, we compute periodic floating point updates for the GMM weight parameter using the value of an integer counter. Experiments show speedups in the range of 1.33–1.44 on standard video datasets where a large fraction of pixels are multimodal. In the second part, we propose a skip decision technique that uses a spatial sampler to sample pixels. The sampled pixels are segmented using the speeded-up GMM algorithm. The storage pattern of the GMM parameters in memory is also modified to improve cache performance. Skip selection is performed using the segmentation results of the sampled pixels. In the third part, a reference frame selection algorithm is proposed to maximize the number of background macroblocks (MBs, i.e. MBs that contain background image content) in the Decoded Picture Buffer. This reduces the cost of coding uncovered background regions. Distortion over foreground pixels is measured to quantify the performance of the skip decision and reference frame selection techniques. Experimental results show bitrate savings of up to 94.5% over methods proposed in the literature on video surveillance datasets. The proposed techniques also provide up to a 74.5% reduction in compression complexity without increasing the distortion over the foreground regions in the video sequence. In the final part of the thesis, face and shadow region detection is combined with the skip decision algorithm to perform ROI coding for pedestrian surveillance videos. Since person identification requires high quality face images, MBs containing face image content are encoded with a low quantization parameter setting (i.e. high quality). Other regions of the body in the image are considered RORI (regions of reduced interest) and are encoded at low quality. The shadow regions are marked as skip. Techniques that use only facial features to detect faces (e.g. the Viola-Jones face detector) are not robust in real world scenarios. Hence, we propose to initially detect pedestrians using deformable part models. The face region is determined using the deformed part locations. Detected pedestrians are tracked using an optical flow based tracker combined with a Kalman filter. The tracker improves accuracy and also avoids the need to run the object detector on already detected pedestrians. Shadow and skin detector scores are computed over superpixels. Bilattice-based logic inference is used to combine multiple likelihood scores and classify the superpixels as ROI, RORI or RONI. The coding mode and QP values of the MBs are determined using the superpixel labels. The proposed techniques provide a further reduction in bitrate of up to 50.2%.
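The integer-counter weight update from the first part can be sketched as follows; this is a hedged reconstruction of the idea as described in the abstract, with the refresh period, learning rate and update rule assumed for illustration.

```python
ALPHA, T = 0.01, 16   # assumed learning rate and refresh period (frames)

class ModeWeight:
    """One GMM mode's weight, updated with cheap integer operations per
    frame and an expensive floating-point refresh only every T frames."""
    def __init__(self, w=0.05):
        self.w = w          # floating-point weight, refreshed every T frames
        self.matches = 0    # integer counter, incremented per frame
        self.frames = 0

    def observe(self, matched):
        self.matches += int(matched)
        self.frames += 1
        if self.frames == T:
            # Periodic float update driven by the integer counter; this
            # approximates T per-frame updates w <- (1-ALPHA)*w + ALPHA*m.
            decay = (1 - ALPHA) ** T
            self.w = decay * self.w + (1 - decay) * (self.matches / T)
            self.matches = self.frames = 0

m = ModeWeight()
for _ in range(64):
    m.observe(True)
print(round(m.w, 4))   # weight drifts toward 1 for a consistently matched mode
```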
APA, Harvard, Vancouver, ISO, and other styles
50

Chang, Kai-Hsiung, and 張凱雄. "A Low Computational-Complexity QRS Complex Detection Algorithm Realized in an MCU-Based System and Tested with a Three-Lead Synthetic ECG Generator." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/17473511076822651106.

Full text
Abstract:
Doctoral dissertation
National Cheng Kung University
Department of Electrical Engineering (Master's and Doctoral Program)
Academic year 97 (ROC calendar)
A normal electrocardiogram (ECG) cycle basically consists of the P-wave, the QRS complex and the T-wave, which reflect the different electrical activities within the heart. An electrocardiogram is a non-invasive tool used for recording the electrical activity within the heart, and from the observation of ECG morphology doctors are able to diagnose different types of heart disease. Therefore, this dissertation first presents the design of an electrocardiogram measurement circuit module, which consists of an instrumentation amplifier, a Sallen-Key high-pass filter, a Sallen-Key low-pass filter, and a back-stage amplifier. The ECG measurement circuit module is designed with the PSpice circuit simulation CAD to measure the three lead electrocardiograms of Einthoven's triangle and filter the noise within the measured waveforms, and operational amplifiers are then used to implement the circuits. Afterward, this dissertation simplifies and adopts a synthetic electrocardiogram model composed of three coupled differential equations proposed by McSharry et al. (2003), and utilizes an improved fourth-order Taylor series to quickly approximate the exponential function within the differential equations of the simplified synthetic ECG model, so that, when studying ECG analytic algorithms, the graphical user interface (GUI) program "Synthetic Waveforms Control Panel", developed by the author using LabVIEW, can be used to attain the desired electrocardiogram morphology as well as the time intervals between the peaks (P, Q, R, S, T) and the heart rate. Finally, this dissertation presents a low computational-complexity QRS complex detection algorithm, which can easily be implemented in a general-purpose processor or micro-controller. The algorithm uses the derivative of the electrocardiogram waveform to accurately identify the peaks (the Q-, R-, and S-peaks) and calculate the heart rate and the duration of the QRS complex in real time. We then apply the QRS complex detection algorithm to design an MCU-based ECG measurement instrument and test it with 12 electrocardiogram samples (with heart rates from 49.59 to 155.44 BPM). The experimental results show that the maximum relative error of the measured R-R interval with the new QRS complex detection algorithm is only 1.58%; more detailed experimental data are discussed at the end of this dissertation. The low computational-complexity QRS complex detection algorithm and the 8-bit MCU-based ECG measurement system developed in this work have good potential for use in portable, affordable measurement instruments.
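A hedged sketch of the derivative-based detection idea (differentiate, square, threshold, apply a refractory period); the sampling rate, thresholds and the crude synthetic input below are assumptions for illustration, not the thesis's MCU implementation.

```python
import numpy as np

fs = 250                                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)
ecg[(t % 1.0) < 0.02] = 1.0                 # crude 60-BPM spike train as ECG

deriv = np.diff(ecg) * fs                   # first derivative
energy = deriv ** 2                         # emphasise steep QRS slopes
thresh = 0.5 * energy.max()                 # assumed threshold rule

refractory = int(0.2 * fs)                  # 200 ms lockout between beats
peaks, last = [], -refractory
for i, e in enumerate(energy):
    if e > thresh and i - last > refractory:
        peaks.append(i)
        last = i

rr = np.diff(peaks) / fs                    # R-R intervals in seconds
print("beats:", len(peaks), "mean HR:", round(60 / rr.mean(), 1), "BPM")
```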
APA, Harvard, Vancouver, ISO, and other styles