Doctoral dissertations on the topic "Low-complexity"

Click this link to see other types of publications on this topic: Low-complexity.

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Choose a source type:

Check the top 50 doctoral dissertations on the topic "Low-complexity".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication in ".pdf" format and read its abstract online, if the relevant parameters are provided in the metadata.

Browse doctoral dissertations from a wide variety of disciplines and compile an appropriate bibliography.

1

Cooper, D. "Classes of low complexity". Thesis, University of Oxford, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375251.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
2

Williams, Richard G. C. "Low complexity block coded modulation". Thesis, University of Manchester, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329600.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
3

Khattak, Shadan. "Low complexity multiview video coding". Thesis, De Montfort University, 2014. http://hdl.handle.net/2086/10511.

Full text source
Abstract:
3D video is a technology that has seen tremendous attention in recent years. Multiview Video Coding (MVC) is an extension of the popular H.264 video coding standard and is commonly used to compress 3D videos. It offers an improvement of 20% to 50% in compression efficiency over simulcast encoding of multiview videos using the conventional H.264 video coding standard. However, there are two important problems associated with it: (i) its superior compression performance comes at the cost of significantly higher computational complexity, which hampers the real-world realization of the MVC encoder in applications such as 3D live broadcasting and interactive Free Viewpoint Television (FTV), and (ii) compressed 3D videos can suffer from packet loss during transmission, which can degrade the viewing quality of the 3D video at the decoder. This thesis aims to solve these problems by presenting techniques to reduce the computational complexity of the MVC encoder and by proposing a consistent error concealment technique for frame losses in 3D video transmission. The thesis first analyses the complexity of the MVC encoder. It then proposes two novel techniques to reduce the complexity of motion and disparity estimation. The first method achieves complexity reduction in the disparity estimation process by exploiting the relationship between temporal levels, type of macroblocks and search ranges, while the second method achieves it by exploiting the geometrical relationship between motion and disparity vectors in stereo frames. These two methods are then combined with other state-of-the-art methods in a unique framework where gains add up. Experimental results show that the proposed low-complexity framework can reduce the encoding time of the standard MVC encoder by over 93% while maintaining similar compression efficiency. The addition of new View Synthesis Prediction (VSP) modes to the MVC encoding framework improves the compression efficiency of MVC. However, testing additional modes comes at the cost of increased encoding complexity. In order to reduce the encoding complexity, the thesis next proposes a Bayesian early mode decision technique for a VSP-enhanced MVC coder. It exploits the statistical similarities between the RD costs of the VSP SKIP mode in neighbouring views to terminate the mode decision process early. Results indicate that the proposed technique can reduce the encoding time of the enhanced MVC coder by over 33% at similar compression efficiency levels. Finally, compressed 3D videos are usually required to be broadcast to a large number of users, where transmission errors can lead to frame losses which degrade the video quality at the decoder. A simple reconstruction of the lost frames can lead to an inconsistent reconstruction of the 3D scene, which may negatively affect the viewing experience of a user. To solve this problem, the thesis ends by proposing a consistency model for recovering frames lost during transmission. The proposed consistency model is used to evaluate inter-view and temporal consistencies while selecting candidate blocks for concealment. Experimental results show that the proposed technique is able to recover the lost frames with high consistency and better quality than two standard error concealment methods and a baseline technique based on the boundary matching algorithm.
APA, Harvard, Vancouver, ISO, and other styles
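A minimal sketch of the Bayesian early mode decision idea described in the abstract above: the RD costs of blocks whose best mode was (and was not) SKIP in the already-coded neighbouring view are modelled as Gaussians, and the mode search stops when the posterior probability of SKIP is high. The function names, the Gaussian assumption and the 0.9 threshold are this sketch's own choices, not the procedure used in the thesis.

    import numpy as np

    def skip_early_termination(rd_cost_skip, skip_costs, nonskip_costs, prior=0.5):
        """Stop the mode search early if P(best mode = SKIP | RD cost) is high.

        skip_costs / nonskip_costs: RD costs collected from the already-coded
        neighbouring view for blocks whose best mode was / was not SKIP.
        """
        def gauss(x, samples):  # Gaussian likelihood model (an assumption)
            mu, sigma = np.mean(samples), max(np.std(samples), 1e-9)
            return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

        like_skip = gauss(rd_cost_skip, skip_costs)
        like_other = gauss(rd_cost_skip, nonskip_costs)
        posterior = prior * like_skip / (
            prior * like_skip + (1 - prior) * like_other + 1e-30)
        return posterior > 0.9  # illustrative confidence threshold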
4

Johansson, Kenny. "Low Complexity and Low Power Bit-Serial Multipliers". Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1751.

Full text source
Abstract:

Bit-serial multiplication with a fixed coefficient is commonly used in integrated circuits, such as digital filters and FFTs. These multiplications can be implemented using basic components such as adders, subtractors and D flip-flops. Multiplication with the same coefficient can be implemented in many ways, using different structures. Other studies in this area have focused on minimizing the number of adders/subtractors, often assuming that the cost of D flip-flops is negligible. That simplification has proved far too coarse, and is moreover unnecessary. In digital devices, low power consumption is always desirable; how to attain it in bit-serial multipliers is a complex problem.

The aim of this thesis was to find a strategy for implementing bit-serial multipliers at as low a cost as possible. An important step was achieved by deriving formulas that can be used to calculate the carry switch probability in the adders/subtractors. It has also been established that it is possible to design a power model that can be applied to all possible structures of bit-serial multipliers.

APA, Harvard, Vancouver, ISO, and other styles
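The adder/flip-flop structures discussed above can be mimicked in software. In the Python sketch below, the coefficient is assumed to be written as a sum/difference of powers of two (e.g. 7 = 2^3 - 2^0); each shift by s stands for a chain of s D flip-flops, and each term for a serial adder/subtractor whose only state is a one-bit carry register. The names and bit conventions are illustrative, not the thesis's.

    def bit_serial_const_mult(x_bits, add_shifts, sub_shifts=()):
        """Serially compute y = x * c, with c = sum(2**s for s in add_shifts)
        minus sum(2**s for s in sub_shifts).

        x_bits is the input word, least-significant bit first. A shift by s is
        a chain of s D flip-flops; the carry/borrow is the single bit of state
        kept between clock cycles.
        """
        n_out = len(x_bits) + max(list(add_shifts) + list(sub_shifts)) + 1
        stream = list(x_bits) + [0] * (n_out - len(x_bits))  # zero-padded input
        carry, out = 0, []
        for t in range(n_out):
            acc = sum(stream[t - s] for s in add_shifts if t >= s)
            acc -= sum(stream[t - s] for s in sub_shifts if t >= s)
            acc += carry
            out.append(acc & 1)   # sum bit emitted this clock cycle
            carry = acc >> 1      # carry register (negative while borrowing)
        return out

    # 3 * 7 with 7 = 2**3 - 2**0; the input 3 is [1, 1], LSB first.
    print(bit_serial_const_mult([1, 1], add_shifts=(3,), sub_shifts=(0,)))
    # -> [1, 0, 1, 0, 1, 0], i.e. 21 read LSB first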
5

McKay, Thomas Gregory. "Soft, low complexity dielectric elastomer generators". Thesis, University of Auckland, 2010. http://hdl.handle.net/2292/7661.

Full text source
Abstract:
The global demand for renewable energy is forecast to grow rapidly over the next 25 years [1]. A class of variable-capacitor power generators called Dielectric Elastomer Generators (DEG) shows considerable promise for harvesting from previously unviable sources. This is because they can be directly coupled to large broadband motions without gearing while maintaining a high energy density, have few moving parts, and are highly flexible. At the system level, DEG cannot currently realize their full potential for flexibility, simplicity and low mass because they require rigid and bulky external circuitry and often need a secondary high-voltage energy source; there is a lack of modelling tools for system-level optimization; and there is a lack of experimental data from explicitly described DEG systems. The objective of this thesis is to provide solutions to these issues. First, system-level DEG models including the dynamic electrical systems were developed and experimentally validated. Then the generalized Self-Priming Circuit (SPC) was introduced, an external circuit that enables DEG to operate in a wide range of conditions without a secondary high-voltage energy source. These systems do not require an external high-voltage energy source because they can boost their voltage from consumer battery levels up to kilovolt levels by accumulating generated energy. Generalized SPCs were then optimized for voltage boosting and energy production using the system-level models. Finally, integrated self-priming systems were introduced. The SPC in these systems was integrated onto the DEG membrane using a flexible electronics technology called Dielectric Elastomer Switches (DES), so that no rigid and bulky external electronics were required. A highly explicit experimental analysis of each system was carried out. These contributions resulted in the publication of three journal articles, two provisional patents, and three conference articles. This work presents the opportunity to produce DEG systems with greater autonomy, softness, simplicity and energy density, and at lower cost, than previously possible. With further development of DES, more complex systems with these benefits will be possible.
APA, Harvard, Vancouver, ISO, and other styles
6

Gustafsson, Oscar. "Contributions to low-complexity digital filters". Linköping: Univ., 2003. http://www.bibl.liu.se/liupubl/disp/disp2003/tek837s.pdf.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
7

Harsha, Prahladh 1976. "Small PCPs with low query complexity". Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86448.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
8

Ho, Hai Pang. "Low complexity decoding of cyclic codes". Thesis, University of Surrey, 1998. http://epubs.surrey.ac.uk/844222/.

Full text source
Abstract:
This thesis presents three novel low-complexity decoding algorithms for cyclic codes: the Extended Kasami Algorithm (EKA), Permutation Error Trapping (PET) and the Modified Dorsch Algorithm (MDA). The Extended Kasami Algorithm is a novel decoding algorithm combining the Error Trapping Algorithm with cover polynomial techniques. With a revised search method to locate the best combination of cover positions, the Extended Kasami Algorithm can achieve bounded-distance performance with complexity many times lower than other efficient decoding algorithms. In comparison with the Minimum Weight Decoding (MWD) Algorithm on the (31,16) BCH code, the complexity of EKA is only 5% of MWD at 0 dB Eb/No. Comparing EKA with the Kasami Algorithm on the (23,12) Golay code, EKA reduces the complexity consistently for all values of Eb/No. When dealing with Reed-Solomon codes, it is found that the additional complexity incurred by finding the error values is a function that increases exponentially with the number of bits in each symbol. To eliminate the problem of finding the error values, Permutation Error Trapping uses a specific cyclic code property to re-shuffle symbol positions. This complements the Error Trapping approach well, and most decodable error patterns can be trapped using this simple approach. PET achieves performance close to that of MWD on the (15,9) RS code with much lower complexity. For more complex codes, such as the four-symbol-error-correcting (15,7) RS code, Modified Permutation Error Trapping combines part of the cover polynomial approach of EKA with PET, retaining good performance with low complexity. For decoding Reed-Solomon codes using soft-decision values, the application of a modified Dorsch Algorithm to Reed-Solomon codes has been evaluated on various issues. Using a binary form of Reed-Solomon codes has been found to achieve near-maximum-likelihood performance with very few decodings.
APA, Harvard, Vancouver, ISO, and other styles
9

Sridharan, Arun. "Low Complexity Scheduling in Wireless Networks". The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1366072589.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
10

Detert, Thorben. "Low complexity co-channel interference cancellation". Göttingen: Sierke, 2008. http://d-nb.info/991223314/04.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
11

Næss, Hallvard. "A programmable DSP for low-power, low-complexity baseband processing". Thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9439.

Full text source
Abstract:

Software defined radio (SDR) is an emerging trend in radio technology. The idea is basically to move software as close to the antenna of a radio system as possible, to improve flexibility, adaptability and time-to-market. This thesis describes a DSP architecture especially optimized for the modulation/demodulation algorithms of low-complexity, low-power radio standards. The DSP allows software processing of these algorithms, making SDR possible. To make the DSP competitive with traditional ASIC modems, tough constraints are set on area and power consumption. Estimates of the power consumption, area and computational power of the DSP show that a software implementation of the studied physical layer should be possible within the given constraints.

APA, Harvard, Vancouver, ISO, and other styles
12

Johansson, Kenny. "Low Power and Low complexity Constant Multiplication using Serial Arithmetic". Licentiate thesis, Linköping : Department of Electrical Engineering, Linköpings universitet, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7965.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
13

Altrabalsi, Hana. "Low-complexity low-rate residential non-intrusive appliance load monitoring". Thesis, University of Strathclyde, 2017. http://digitool.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=28506.

Full text source
Abstract:
Large-scale smart metering deployments and energy saving targets across the world have ignited renewed interest in residential non-intrusive appliance load monitoring (NALM), that is, disaggregating a household's total energy consumption down to individual appliances using purely analytical tools. Despite increased research efforts, NALM techniques that can disaggregate power loads at low sampling rates are still not accurate and/or practical enough, requiring substantial customer input and long training periods. In this thesis, we address these challenges via practical low-complexity, low-rate NALM, by proposing two approaches based on a combination of the following machine learning techniques: k-means clustering and Support Vector Machines, exploiting their strengths and addressing their individual weaknesses. The first, supervised approach is a low-complexity method that requires a very short training period and is robust to labelling errors. The second, unsupervised approach relies on a database of appliance signatures that we designed using publicly available datasets. The database compactly represents over 100 appliances using statistical modelling of measured active power. Experimental results on three datasets from the US (REDD), Italy and Austria (GREEND), and the UK (REFIT) demonstrate the reliability and practicality of the proposed approaches.
APA, Harvard, Vancouver, ISO, and other styles
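A toy sketch of the k-means plus SVM combination described in this abstract, on synthetic data: k-means quantises the aggregate active-power samples into steady-state levels, and an SVM then classifies the transition events by their power step. The appliance names, power levels and single-feature design are placeholders for illustration, not the thesis's actual feature set.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Synthetic aggregate signal (watts): base load, fridge on, kettle on/off.
    power = np.concatenate([rng.normal(60, 2, 100), rng.normal(160, 2, 100),
                            rng.normal(2160, 2, 50), rng.normal(160, 2, 100)])

    # Stage 1: k-means clusters samples into steady-state power levels.
    states = KMeans(n_clusters=3, n_init=10).fit_predict(power.reshape(-1, 1))

    # Stage 2: an SVM labels transition events, using the power step as the
    # (single, toy) feature; the labels here are synthetic placeholders.
    edges = np.flatnonzero(np.diff(states) != 0) + 1
    feats = np.array([[power[i] - power[i - 1]] for i in edges])
    labels = ["fridge" if abs(f[0]) < 1000 else "kettle" for f in feats]
    clf = SVC(kernel="rbf").fit(feats, labels)
    print(clf.predict([[2000.0], [100.0]]))  # expected: ['kettle' 'fridge']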
14

Host, Nicholas K. "Realization of a Low Cost Low Complexity Traveling Wave Antenna". The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1420038529.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
15

Johansson, Kenny. "Low Power and Low Complexity Shift-and-Add Based Computations". Doctoral thesis, Linköping : Department of Electrical Engineering, Linköping University, 2008. http://www.bibl.liu.se/liupubl/disp/disp2008/tek1201s.pdf.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
16

Zeffer, Håkan. "Towards Low-Complexity Scalable Shared-Memory Architectures". Doctoral thesis, Uppsala University, Department of Information Technology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7135.

Full text source
Abstract:

Plentiful research has addressed low-complexity software-based shared-memory systems since the idea was first introduced more than two decades ago. However, software-coherent systems have not been very successful in the commercial marketplace. We believe there are two main reasons for this: lack of performance and/or lack of binary compatibility.

This thesis studies multiple aspects of how to design future binary-compatible high-performance scalable shared-memory servers while keeping the hardware complexity at a minimum. It starts with a software-based distributed shared-memory system relying on no specific hardware support and gradually moves towards architectures with simple hardware support.

The evaluation is made in a modern chip-multiprocessor environment with both high-performance compute workloads and commercial applications. It shows that implementing the coherence-violation detection in hardware while solving the interchip coherence in software allows for high-performing binary-compatible systems with very low hardware complexity. Our second-generation hardware-software hybrid performs on par with, and often better than, traditional hardware-only designs.

Based on our results, we conclude that it is not only possible to design simple systems while maintaining performance and the binary-compatibility envelope, but often possible to get better performance than in traditional and more complex designs.

We also explore two new techniques for evaluating a new shared-memory design throughout this work: adjustable simulation fidelity and statistical multiprocessor cache modeling.

APA, Harvard, Vancouver, ISO, and other styles
17

List, Nancy Brown. "Low-Complexity Interleaver Design for Turbo Codes". Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5096.

Full text source
Abstract:
A low-complexity method of interleaver design, sub-vector interleaving, for both parallel and serially concatenated convolutional codes (PCCCs and SCCCs, respectively) is presented here. Since the method is low-complexity, it is uniquely suitable for designing long interleavers. Sub-vector interleaving is based on a dynamical system representation of the constituent encoders employed by PCCCs and SCCCs. Simultaneous trellis termination can be achieved with a single tail sequence using sub-vector interleaving for both PCCCs and SCCCs. In the case of PCCCs, the error floor can be lowered by sub-vector interleaving, which allows for an increase in the weight of the free-distance codeword and the elimination of the lowest-weight codewords generated by weight-2 terminating input sequences that determine the error floor at low signal-to-noise ratios (SNRs). In the case of SCCCs, sub-vector interleaving lowers the error floor by increasing the weight of the free-distance codewords. Interleaver gain can also be increased for SCCCs by interleaving the lowest-weight codewords from the outer code into non-terminating input sequences to the inner encoder. Sub-vector constrained S-random interleaving, a method for incorporating S-random interleaving into sub-vector interleavers, is also proposed. Simulations show that short interleavers incorporating S-random interleaving into sub-vector interleavers perform as well as or better than those designed by the best and most complex methods for designing short interleavers. A method for randomly generating sub-vector constrained S-random interleavers that maximizes the spreading factor, S, is also examined. The convergence of the turbo decoding algorithm to maximum-likelihood decisions on the decoded input sequence is required to demonstrate the improvement in BER performance brought by sub-vector interleavers. Convergence to maximum-likelihood decisions by the decoder does not always occur in the regions where it is feasible to generate the statistically significant numbers of error events required to approximate the BER performance of a particular coding scheme employing a sub-vector interleaver. Therefore, a technique for classifying error events by the mode of convergence of the decoder is used to illuminate the effect of the sub-vector interleaver at SNRs where it is possible to simulate the BER performance of the coding scheme.
APA, Harvard, Vancouver, ISO, and other styles
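For background, a plain S-random interleaver (without the sub-vector constraint this thesis adds) can be generated with the classical draw-and-check procedure sketched below: an index is accepted only if it differs by more than S from each of the last S accepted indices. The restart strategy and names are illustrative.

    import random

    def s_random_interleaver(n, s, seed=0, max_restarts=1000):
        """Classical S-random permutation of range(n): entries that end up
        within S output positions of each other differ by more than S."""
        rng = random.Random(seed)
        for _ in range(max_restarts):
            remaining = list(range(n))
            rng.shuffle(remaining)
            perm = []
            while remaining:
                for k, cand in enumerate(remaining):
                    # Check the candidate against the last S placed indices.
                    if all(abs(cand - p) > s for p in perm[-s:]):
                        perm.append(remaining.pop(k))
                        break
                else:        # dead end: no candidate fits, restart from scratch
                    break
            if not remaining:
                return perm
        raise RuntimeError("no S-random interleaver found; reduce S")

    # S is usually chosen near sqrt(n/2), e.g. a length-256 interleaver, S = 11.
    perm = s_random_interleaver(256, 11)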
18

Rabbachin, A. (Alberto). "Low complexity UWB receivers with ranging capabilities". Doctoral thesis, University of Oulu, 2008. http://urn.fi/urn:isbn:9789514288005.

Full text source
Abstract:
This thesis examines low-complexity receiver structures for impulse-radio (IR) ultra-wideband (UWB) systems to be used in wireless sensor network applications. Such applications require radio communication solutions characterized by low cost, low-complexity hardware and low power consumption to provide very long battery life. An analysis of several auto-correlation receiver (AcR) structures is performed in the presence of additive white Gaussian noise to identify receiver structures that offer a good compromise between implementation complexity and data communication performance. The classes of receiver that demonstrate the best complexity/performance trade-off are shown to be the AcR utilising transmitted-reference with binary pulse amplitude modulation signaling, and the energy detector (ED) utilising binary pulse position modulation. The analysis of these two schemes is extended to multipath fading channels. Numerically integrable bit error probability (BEP) expressions are derived in order to evaluate the receivers' performance in the presence of fading distributions characterized by closed-form characteristic functions. Simulations utilising widely accepted UWB channel models are then used to evaluate the BEP in different indoor environments. Since UWB systems share frequency spectrum with many narrowband (NB) systems, and need to coexist with other UWB systems, the performance of low-complexity receivers can be seriously affected by interference. In the presence of NB interference, two cases are considered: 1) single NB interference, where the interfering node is located at a fixed distance from the receiver, and 2) multiple NB interference, where the interfering nodes are scattered according to a spatial Poisson process. For UWB interference, the case of multiple sources of interference is considered. For both the multiple NB and the multiple UWB interference cases, the derived model considers several interference parameters, which can be integrated into BEP formulations for quick performance evaluation. The framework is sufficiently simple to allow tractable analysis and can serve as a guideline for the design of heterogeneous networks where coexistence between UWB systems and NB systems is of importance. The very large bandwidth of UWB signals offers an unprecedented possibility for accurate ranging operations. Signal leading-edge estimation algorithms based on average maximum-likelihood estimators are derived considering different multipath channel fading distributions. Suboptimal solutions are proposed and investigated in order to support ranging capabilities in low-complexity receiver structures. The ability to identify line-of-sight and non-line-of-sight conditions with the ED-based receiver is also addressed. An example of an IR-UWB low-complexity transceiver based on ED for sensor network applications is proposed in this thesis. Ad-hoc solutions for pulse transmission, synchronization and data detection are developed.
APA, Harvard, Vancouver, ISO, and other styles
19

Ferdinand, N. S. (Nuwan Suresh). "Low complexity lattice codes for communication networks". Doctoral thesis, Oulun yliopisto, 2016. http://urn.fi/urn:isbn:9789526210964.

Full text source
Abstract:
Lattice codes achieve AWGN capacity and naturally fit many multi-terminal networks because of their inherent structure. Although extensive information-theoretic research has proved the importance of lattice codes for these networks, progress in finding practical low-complexity lattice schemes is limited. Hence, the motivation of this thesis is to develop several methods to make lattice codes practical for communication networks. First, we propose an efficient lattice coding scheme for real-valued, full-duplex one- and two-way relay channels. Lattice decomposition, superposition, and block Markov encoding are used to propose simple, yet near-capacity-achieving encoding/decoding schemes for these relay channels. Using information-theoretic tools, we prove the achievable rates of these schemes, which equal the best known rates. We then construct practical, low-complexity implementations of the proposed relay schemes using low-density lattice codes. Numerical evaluations are presented, showing that our schemes achieve performance as close as 2.5 dB away from the theoretical limits. The effect of shaping/coding loss on the performance of relay channels is studied. Next, we propose a low-complexity lattice code construction that provides high shaping and coding gains. First, integer information is encoded to shaped integers. Two methods are proposed for this task: "Voronoi integers" and "non-uniform integers". These shaped integers have shaping gains over the integer lattice. In the second step, we present a general framework to systematically encode these integers, using any high-dimensional lattice with lower-triangular generator or parity-check matrices, retaining the same shaping gain. The proposed scheme can be used to shape high-dimensional lattices such as low-density lattice codes, LDA lattices, etc. A comprehensive analysis is presented using low-density lattice codes. Using E8 and BW16 as shaping lattices, we numerically show that the Voronoi integers realize the shaping gains of these lattices, that is, as much as 0.65 dB and 0.86 dB. It is numerically observed that non-uniform integers have shaping gains of up to 1.25 dB. These shaping operations can be implemented with less complexity than previous low-density lattice code shaping approaches, and the shaping gains are higher than in previously reported cases, which are of the order of 0.4 dB. Lastly, we propose a low-complexity practical code construction for compute-and-forward. A novel code construction called the "mixed nested lattice code construction" is developed. It uses a pair of distinct nested lattices to encode the integers, where shaping is provided by a small-dimensional lattice with high shaping gain and coding is performed using a high-dimensional lattice with high coding gain. This construction retains the shaping and coding gains of the respective shaping and coding lattices. Further, we prove the existence of an isomorphism in this construction such that a linear combination of lattice codewords can be mapped to a linear combination of integers over a finite field. Hence, this construction can be readily used for any compute-and-forward application. A modified LDLC decoder is proposed to estimate a linear combination of messages. Performance is numerically evaluated.
APA, Harvard, Vancouver, ISO, and other styles
20

Hung, Kevin Chun-Yeung. "A novel low-complexity IF-sampling receiver". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq20872.pdf.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
21

Zeffer, Håkan. "Towards low-complexity scalable shared-memory architectures". Uppsala: Acta Universitatis Upsaliensis, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7135.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
22

Dempster, Andrew. "Digital filter design for low-complexity implementation". Thesis, University of Cambridge, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362967.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
23

Perry, Russell. "Low complexity adaptive equalisation for wireless applications". Thesis, University of Bristol, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389138.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
24

Coleman, Todd P. (Todd Prentice) 1977. "Low-complexity approaches to distributed data dissemination". Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/35286.

Full text source
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 145-153).
In this thesis we consider practical ways of disseminating information from multiple senders to multiple receivers in an optimal or provably close-to-optimal fashion. The basis for our discussion of optimal transmission of information is mostly information theoretic - but the methods that we apply to do so in a low-complexity fashion draw from a number of different engineering disciplines. The three canonical multiple-input, multiple-output problems we focus our attention upon are: * The Slepian-Wolf problem where multiple correlated sources must be distributedly compressed and recovered with a common receiver. * The discrete memoryless multiple access problem where multiple senders communicate across a common channel to a single receiver. * The deterministic broadcast channel problem where multiple messages are sent from a common sender to multiple receivers through a deterministic medium. Chapter 1 serves as an introduction and provides models, definitions, and a discussion of barriers between theory and practice for the three canonical data dissemination problems we will discuss. Here we also discuss how these three problems are all in different senses 'dual' to each other, and use this as a motivating force to attack them with unifying themes.
Chapter 2 discusses the Slepian-Wolf problem of distributed near-lossless compression of correlated sources. Here we consider embedding any achievable rate in an M-source problem to a corner point in a 2M - 1-source problem. This allows us to employ practical iterative decoding techniques and achieve rates near the boundary with legitimate empirical performance. Both synthetic data and real correlated data from sensors at the International Space Station are used to successfully test our approach. Chapter 3 generalizes the investigation of practical and provably good decoding algorithms for multiterminal systems to the case where the statistical distribution of the memoryless system is unknown. It has been well-established in the theoretical literature that such 'universal' decoders exist and do not suffer a performance penalty, but their proposed structure is highly nonlinear and therefore believed to be complex. For this reason, most discussion of such decoders has been limited to the realm of ontology and proof of existence. By exploiting recently derived results in other engineering disciplines (i.e. expander graphs, linear programming relaxations, etc), we discuss a code construction and two decoding algorithms that have polynomial complexity and admit provably good performance (exponential error probability decay).
Because there is no need for a priori statistical knowledge in decoding (which in many settings - for instance a sensor network - might be difficult to repeatedly acquire without significant cost), this approach has very attractive robustness, energy efficiency, and stand-alone practical implications. Finally, Chapter 4 walks away from the multiple-sender, single-receiver setting and steps into the single-sender-multiple receiver setting. We focus our attention here on the deterministic broadcast channel, which is dual to the Slepian-Wolf and multiple access problems in a number of ways - including how the difficulty of practical implementation lies in the encoding rather than decoding. Here we illustrate how again a splitting approach can be applied, and how the same properties from the Slepian-Wolf and multiple access splitting settings remain. We also discuss practical coding strategies for some problems motivated by wireless, and show how by properly 'dualizing' provably good decoding strategies for some channel coding problems, we admit provably good encoding for this setting.
by Todd Prentice Coleman.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
25

Lombardi, Alex (Alex John). "Low complexity pseudorandom generators and indistinguishability obfuscation". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118056.

Full text source
Abstract:
Thesis: S.M. in Computer Science and Engineering, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 61-64).
In the study of cryptography in NC0, it was previously known that Goldreich's candidate pseudorandom generator (PRG) is insecure when instantiated with a predicate P in 4 or fewer variables, if one wants to achieve polynomial stretch. On the other hand, there is a standard candidate PRG with locality 5 based on the "tri-sum-and" predicate P(x1, ..., x5) = x1 XOR x2 XOR x3 XOR (x4 AND x5). However, locality is only one complexity measure of a PRG that one could hope to minimize. In this work, we consider the problem of minimizing three other complexity measures of a (local) PRG: decision tree (DT-)complexity, Q-degree (i.e., the degree of P as a polynomial over Q), and the recent notion of blockwise locality (due to Lin and Tessaro). These three complexity measures are all of interest for their possible applications to constructing indistinguishability obfuscation (IO) schemes based on low-degree multilinear maps. Indeed, Lin and Tessaro recently proposed an intriguing candidate IO scheme based on bilinear maps and a non-standard assumption on "Goldreich-like" pseudorandom generators. We obtain both positive and negative results on the existence of low-complexity PRGs. First, we give a candidate predicate for Goldreich's PRG with DT-complexity 4 and Q-degree 3. We also show that all predicates with either DT-complexity less than 4 or Q-degree less than 3 yield insecure PRGs, so our candidate predicate simultaneously achieves the best possible locality, DT-complexity, Q-degree, and F2-degree according to all known attacks. Finally, we show polynomial-time attacks on the blockwise 2-local PRGs required in the Lin-Tessaro work, invalidating the security of their IO and FE candidates based on bilinear maps. Our attack uses tools from the literature on two-source extractors (Chor and Goldreich, SICOMP 1988) and efficient refutation of random 2-XOR instances (Charikar and Wirth, FOCS 2004).
by Alex Lombardi.
S.M. in Computer Science and Engineering
APA, Harvard, Vancouver, ISO, and other styles
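As context for this entry, a Goldreich-style local PRG instantiated with the locality-5 "tri-sum-and" predicate can be sketched in a few lines: every output bit applies the predicate to a fixed, public 5-subset of the secret seed. This toy version illustrates the construction only; it is not the candidate analysed in the thesis.

    import random

    def tri_sum_and(b):
        # P(x1..x5) = x1 XOR x2 XOR x3 XOR (x4 AND x5), the locality-5 predicate.
        return b[0] ^ b[1] ^ b[2] ^ (b[3] & b[4])

    def goldreich_prg(seed_bits, m, graph_seed=0):
        """Map an n-bit seed to m output bits. The 5-subsets (the bipartite
        input-output graph) are public and fixed once; only the seed is secret."""
        rng = random.Random(graph_seed)
        subsets = [rng.sample(range(len(seed_bits)), 5) for _ in range(m)]
        return [tri_sum_and([seed_bits[i] for i in s]) for s in subsets]

    seed = [random.getrandbits(1) for _ in range(64)]
    out = goldreich_prg(seed, 4096)  # polynomial stretch: m = n**2 output bits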
26

Joshi, Yetish. "Low complexity in-loop perceptual video coding". Thesis, Middlesex University, 2016. http://eprints.mdx.ac.uk/21278/.

Full text source
Abstract:
The tradition of broadcast video is today complemented by user-generated content, as portable devices support video coding. Similarly, computing is becoming ubiquitous, with the Internet of Things (IoT) incorporating heterogeneous networks to communicate with personal and/or infrastructure devices. In both cases, the emphasis is on bandwidth and processor efficiency, which means increasing the signalling options in video encoding. Consequently, the assessment of pixel differences applies a uniform cost in order to be processor-efficient; in contrast, the Human Visual System (HVS) has non-uniform sensitivity based upon lighting, edges and textures. Existing perceptual assessments are natively incompatible and processor-demanding, making perceptual video coding (PVC) unsuitable for these environments. This research enables existing perceptual assessment at the native level using low-complexity techniques, before producing new pixel-based image quality assessments (IQAs). To manage these IQAs, a framework was developed and implemented in the High Efficiency Video Coding (HEVC) encoder. This resulted in bit redistribution, where more bits and smaller partitioning were allocated to perceptually significant regions. Using an HEVC-optimised processor, the timing increase was < +4% and < +6% for video streaming and recording applications respectively, one third of an existing low-complexity PVC solution. Future work should be directed towards perceptual quantisation, which offers the potential for perceptual coding gain.
APA, Harvard, Vancouver, ISO, and other styles
27

Bontu, Chandra Sekhar. "Low complexity equalization for indoor wireless communications". Carleton University dissertation (Systems and Computer Engineering), Ottawa, 1996.

Find the full text source
APA, Harvard, Vancouver, ISO, and other styles
28

Tang, Xiangguo. "Low complexity equalization approaches in wireless communications". For electronic version search Digital dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2002. http://uclibs.org/PID/11984.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
29

Mercat, Alexandre. "Complexity Control for Low-Power HEVC Encoding". Thesis, Rennes, INSA, 2018. http://www.theses.fr/2018ISAR0035/document.

Full text source
Abstract:
The Internet of Things (IoT) is now a reality, and forthcoming applications will boost mobile video demand to an unprecedented level. The induced increase in computational complexity is a challenge when executing new video coding standards in real time on embedded platforms that are limited in computing, memory, and energy. New 4K UHD and 360-degree video contents, coming with high spatial (8K, 16K) and temporal (120 frames per second) resolutions, further complicate the problem. In this context, codecs such as HEVC (High Efficiency Video Coding) must be reworked to reduce their complexity while preserving bitrate and image quality. The bounded energy density of embedded systems' batteries requires designers to propose new methods for scaling and controlling the complexity and energy consumption of HEVC codecs. This document presents a set of studies aiming at scaling and controlling the complexity, and therefore the energy consumption, of HEVC Intra encoding. Two methods of one-shot quad-tree partitioning prediction are proposed: one based on a variance-aware statistical approach and one based on machine learning using data-mining classifiers. From the obtained prediction, a generic tunable complexity scheme for HEVC encoding is introduced. It expands the search space around the original partitioning prediction and allocates complexity within a frame while minimizing performance loss in terms of bitrate and visual quality. Finally, a real-time control system is created that dynamically manages the encoding process to keep the encoding complexity under a specific target, demonstrating the applicability of the main contributions of this document.
APA, Harvard, Vancouver, ISO, and other styles
30

Planjery, Shiva Kumar. "Low-Complexity Finite Precision Decoders for Low-Density Parity-Check Codes". International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605947.

Full text source
Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
We present a new class of finite-precision decoders for low-density parity-check (LDPC) codes. These decoders are much lower in complexity compared to conventional floating-point decoders such as the belief propagation (BP) decoder, but they have the potential to outperform BP. The messages utilized by the decoders assume values (or levels) from a finite discrete set. We discuss the implementation aspects as well as describe the underlying philosophy in designing these decoders. We also provide results to show that in some cases, only 3 bits are required in the proposed decoders to outperform floating-point BP.
APA, Harvard, Vancouver, ISO, and other styles
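To give a flavour of message passing over a finite alphabet, the sketch below restricts messages to the 3-bit integer alphabet {-3, ..., 3} and uses quantised min-sum node updates. The decoders proposed in the paper use specially designed finite-alphabet update rules rather than plain quantised min-sum, so this illustrates the message format only, not their method.

    ALPHABET_MAX = 3  # 3-bit messages: integer levels -3 .. 3

    def quantize(x):
        # Clip-and-round a real value onto the finite message alphabet.
        return max(-ALPHABET_MAX, min(ALPHABET_MAX, int(round(x))))

    def check_node(incoming):
        # Min-sum check update: product of signs times the minimum magnitude.
        sign = -1 if sum(m < 0 for m in incoming) % 2 else 1
        return sign * min(abs(m) for m in incoming)

    def variable_node(channel_level, incoming):
        # Variable update: quantised sum of channel and extrinsic messages.
        return quantize(channel_level + sum(incoming))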
31

Huhn, Alexander. "Low-complexity multiple-antenna systems for low-rate wireless personal area networks". Thesis, University of Reading, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.558802.

Full text source
Abstract:
Wireless personal area networks are increasingly widespread in their application, ranging from simple remote controls to highly complex networks that allow heterogeneous data-collecting devices and actor devices to communicate via routing nodes and gateways with networks such as wired IP networks. Key performance aspects for wireless personal area networks are the large number of nodes capable of working within the same reception area and the energy consumption of those nodes. This work shows how equalisation for wireless personal area networks can be realised with a small increase in hardware, in order to achieve performance improvements in highly dispersive environments. The proposed ideas are extended to a multiple-antenna solution which can be made downwards-compatible with the appropriate IEEE standard. In this manner, the performance in terms of either bit rate or robustness can be improved. For both the equalisation and multiple-antenna procedures, it is shown how the existing preamble can be used to estimate the channel impulse response. This processing is performed in the time domain. Equalisation as well as multi-antenna channel separation is achieved by one set of cyclic filters operating at the chip rate, which allows a simple hardware design for integration. Selected methods are tested in real time in a testbed for wireless personal area networks and are shown to offer real performance improvements.
APA, Harvard, Vancouver, ISO, and other styles
32

Osman, Ammar. "Low-complexity OFDM transceiver design for UMTS-LTE". Thesis, Blekinge Tekniska Högskola, Avdelningen för signalbehandling, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3716.

Full text source
Abstract:
Over the past two decades, mobile wireless communication systems have been growing fast and continuously. Therefore, the standardization bodies, together with wireless researchers and mobile operators around the globe, have been constantly working on new technical specifications in order to meet the demand for this rapid growth. The 3rd Generation Partnership Project (3GPP), one of the largest such standardization bodies, works on developing the current third-generation (3G) mobile telecommunication systems towards the future 4th generation. The demand for higher data rates was the main driver for an evolution technology towards 4th-generation mobile systems. This evolution of the current 3rd-generation UMTS systems was named E-UTRA/UTRAN Long Term Evolution (LTE) by the 3GPP. This thesis research was carried out at the Telecommunications Research Center (ftw.) in Vienna, in the framework of the C10 project "Wireless Evolution Beyond 3G". One field of research within this project focuses on the OFDM modulation schemes discussed under the new evolution technology (LTE) of the UMTS mobile networks. Therefore, this thesis focuses mainly on analyzing the new requirements and evaluating them by designing a low-complexity UMTS-LTE OFDM-based transceiver. It aims mainly at studying the feasibility of this technology by means of simulation.
APA, Harvard, Vancouver, ISO, and other styles
33

Morawski, Robert. "Low-complexity structures for digital symbol timing recovery". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0017/MQ47829.pdf.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
34

Wong, Wai William. "Low complexity IF-sampling receiver for DS/SS". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0019/MQ49693.pdf.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
35

Weisenhorn, Martin. "Low-complexity techniques for ultra-wideband communication systems". Freely available, 2007. http://mediatum2.ub.tum.de/doc/625801/document.pdf.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
36

Ismail, Amr. "Low Complexity Space-Time coding for MIMO systems". PhD thesis, Supélec, 2011. http://tel.archives-ouvertes.fr/tel-00771982.

Full text source
Abstract:
The last few years have witnessed a dramatic increase in the demand for high-rate, reliable wireless communications. In order to meet these new requirements, resorting to Multiple-Input Multiple-Output (MIMO) techniques was inevitable, as they may offer high-rate, reliable wireless communications without any additional bandwidth. In the case where the transmitter does not have any prior knowledge of the channel state information, space-time coding techniques have proved to efficiently exploit the MIMO channel degrees of freedom while taking advantage of the maximum diversity gain. On the other hand, the ML decoding complexity of Space-Time Codes (STCs) generally increases exponentially with the rate, which poses a significant challenge to their incorporation in recent communications standards. Recognizing the importance of the low-complexity criterion in STC design for practical considerations, this thesis focuses on the design of new low-complexity Space-Time Block Codes (STBCs) where the transmitted code matrix can be expressed as a weighted linear combination of information symbols, and we propose new codes that are decoded with lower complexity than that of their rivals in the literature while providing better or slightly lower performance.
APA, Harvard, Vancouver, ISO, and other styles
37

Coria, Mendoza Lino Evgueni. "Low-complexity methods for image and video watermarking". Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/571.

Full text source
Abstract:
For digital media, the risk of piracy is aggravated by the ease of copying and distributing content. Watermarking has become the technology of choice for discouraging people from creating illegal copies of digital content. Watermarking is the practice of imperceptibly altering media content by embedding a message which can be used to identify the owner of that content. A watermark message can also be a set of instructions for the display equipment, providing information about the content's usage restrictions. Several applications are considered and three watermarking solutions are provided. First, applications such as owner identification, proof of ownership, and digital fingerprinting are considered, and a fast content-dependent image watermarking method is proposed. The scheme offers a high degree of robustness against distortions, mainly additive noise, scaling, low-pass filtering, and lossy compression, while requiring only a small amount of computation. The method generates a set of evenly distributed codewords constructed via an iterative algorithm. Every message bit is represented by one of these codewords and is then embedded in one of the image's 8 × 8 pixel blocks. The information in that particular block is used in the embedding so as to ensure robustness and image fidelity. Two watermarking schemes designed to prevent theatre camcorder piracy are also presented. In these methods, the video is watermarked so that its display is not permitted if a compliant video player detects the watermark. A watermark that is robust to geometric distortions (rotation, scaling, cropping) and lossy compression is required in order to block access to media content that has been recorded with a camera inside a movie theatre. The proposed algorithms take advantage of the properties of the dual-tree complex wavelet transform (DT CWT). This transform offers the advantages of both the regular and the complex wavelets (perfect reconstruction, approximate shift invariance and good directional selectivity). Our methods use these characteristics to create watermarks that are robust to geometric distortions and lossy compression. The proposed schemes are simple to implement and outperform comparable methods when tested against geometric distortions.
APA, Harvard, Vancouver, ISO, and other styles
38

Nava, Alain Coletta. "Characterisation of low-complexity regions in protein sequences". Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.493949.

Full text source
Abstract:
Low-complexity regions appear to be common in protein sequences, and although functions remain to be elucidated for most of them, they have increasingly been found to play crucial biological roles. Because most of these regions show poor conservation across protein families, it is difficult to compare them using common sequence analysis techniques. This study asks whether low-complexity regions play important roles in protein binding and whether they are crucial to the overall functions of the proteins to which they belong.
APA, Harvard, Vancouver, ISO, and other styles
39

Pandya, Nishit. "Switchable-rate LDPC Codes with Low Complexity Decoding". Thesis, Lancaster University, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.518140.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
40

Nelson, Christopher J. "Low complexity capacity-approaching codes for data transmission". Thesis, Lancaster University, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.556663.

Full text source
Abstract:
This thesis analyzes the design of low-complexity capacity-approaching codes suitable for data transmission. The research documented here describes new design methods for three well-known error-control coding techniques: Turbo codes, LDPC block codes and LDPC convolutional codes, all suitable for implementation in a number of modern digital communication systems.

Firstly, we present Partial Unit Memory (PUM) based Turbo codes, a variant of Turbo codes that combines the advantages of both block and convolutional codes. Partial Unit Memory codes are a class of low-complexity, non-binary convolutional codes and have been shown to outperform equivalent convolutional codes. The design methods for PUM Turbo codes are presented, and their performance is illustrated through Bit Error Rate (BER) simulations and Extrinsic Information Transfer (EXIT) chart analysis. We present the EXIT charts of parallel concatenated PUM codes and PUM Woven Turbo codes and analyse them against standard Turbo code designs. The resulting EXIT charts indicate that the proposed PUM-based codes attain higher mutual information during iterative decoding than equivalent Recursive Systematic Convolutional Turbo codes (RSC-TC) at the same Eb/No, i.e. the output of the decoders provides a better approximation of the decoded bits. The EXIT chart analysis is supported by BER plots, which confirm the behaviour predicted by the charts: the concatenated PUM codes outperform the well-known Turbo codes in the waterfall region, with comparable performance in the error-floor region.

In the second section we present Low Density Generator Matrix (LDGM) codes, a variant of LDPC codes with low-complexity encoding and decoding techniques. We present results for three construction methods, covering random, structured and semi-random, semi-structured designs, and describe how LDGM codes can be modified to improve the error-floor region: by replacing the identity matrix with a staircase matrix, LDGM codes show significant improvements in the error floor. Furthermore, we analyse serially concatenated LDGM codes and how they benefit from using the modified codes as either the outer or the inner code. The results indicate that concatenated schemes incorporating an LDGM staircase code as the inner code improve error-floor performance while maintaining near-capacity performance, whereas using LDGM staircase codes as the outer code brings no significant improvement in the waterfall or error-floor regions over a scheme with an LDGM identity outer code.

Finally, we propose a new design of LDPC convolutional code, which we term time-invariant Low Density Parity Check Unit Memory (LDPC-UM) codes; these are the convolutional counterparts of LDPC block codes. The performance of LDPC block codes and LDPC-UM codes is compared, and in each case the LDPC-UM codes perform at least as well as the LDPC block codes from which they are derived. We describe techniques for designing low-complexity time-invariant LDPC-UM codes by unwrapping the Tanner graph of algebraically constructed quasi-cyclic LDPC codes. The Tanner graph is then used to describe a pipelined, message-passing iterative decoder for LDPC-UM codes and standard LDPC convolutional codes that outputs decoding results continuously.
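A minimal Python sketch of the staircase idea described above: in an LDGM code the parity bits satisfy p = Ps (mod 2) for a sparse matrix P (the "identity" form), and replacing the identity part of the parity-check matrix with a bidiagonal staircase turns encoding into a simple accumulator, p_i = p_{i-1} + (Ps)_i. The code sizes, row weight and random P below are hypothetical placeholders, not the constructions used in the thesis.

    import numpy as np

    rng = np.random.default_rng(0)
    k, m, row_w = 16, 16, 3   # hypothetical: info bits, parity bits, ones per row

    # Sparse binary matrix P: each parity bit checks row_w random info bits.
    P = np.zeros((m, k), dtype=np.uint8)
    for i in range(m):
        P[i, rng.choice(k, size=row_w, replace=False)] = 1

    s = rng.integers(0, 2, size=k, dtype=np.uint8)   # information bits
    t = (P @ s) % 2                                  # sparse products

    p_identity = t                     # "identity" LDGM parity: p = P s (mod 2)
    p_staircase = np.cumsum(t) % 2     # staircase parity: running XOR (accumulator)

Both encoders cost O(m * row_w) bit operations, which is what makes LDGM codes low-complexity; under this reading, the staircase variant changes the error-floor behaviour without changing the encoding cost.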
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Sellars, Malcolm Paul. "Low complexity equalization techniques for broadband indoor radio". Thesis, University of Cambridge, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.621727.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Danjean, Ludovic. "Low-Complexity Iterative Reconstruction Algorithms in Compressed Sensing". International Foundation for Telemetering, 2013. http://hdl.handle.net/10150/579661.

Full text source
Abstract:
ITC/USA 2013 Conference Proceedings / The Forty-Ninth Annual International Telemetering Conference and Technical Exhibition / October 21-24, 2013 / Bally's Hotel & Convention Center, Las Vegas, NV
In this paper we focus on two low-complexity iterative reconstruction algorithms in compressed sensing. These algorithms, the approximate message-passing algorithm and the interval-passing algorithm, are suitable for recovering sparse signals from a small set of measurements. Depending on the type of measurement matrix (sparse or random) used to acquire the samples of the signal, one or the other reconstruction algorithm can be used. We present reconstruction results for both algorithms in terms of the proportion of correct reconstructions in the noise-free case. We also report possible practical applications of compressed sensing, where the choice of measurement matrix and reconstruction algorithm is often governed by the constraints of the application considered.
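For the flavour of the first algorithm, here is a minimal Python sketch of an approximate message-passing iteration with a soft-thresholding denoiser, assuming a dense i.i.d. Gaussian measurement matrix; the threshold rule (theta), the iteration count and the toy problem sizes are common illustrative choices, not necessarily those used in the paper.

    import numpy as np

    def soft(u, t):
        # Soft-thresholding denoiser used in basic AMP for sparse signals.
        return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

    def amp(A, y, iters=30, theta=1.5):
        m, n = A.shape
        x, z = np.zeros(n), y.copy()
        for _ in range(iters):
            tau = theta * np.linalg.norm(z) / np.sqrt(m)  # empirical threshold
            r = x + A.T @ z                               # "pseudo-data"
            x = soft(r, tau)
            # Onsager correction: (n/m) * average derivative of the denoiser.
            z = y - A @ x + z * (np.count_nonzero(x) / m)
        return x

    # Toy noise-free problem: 10-sparse signal, 80 Gaussian measurements.
    rng = np.random.default_rng(1)
    n, m, s = 200, 80, 10
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x0 = np.zeros(n)
    x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    y = A @ x0
    print(np.linalg.norm(amp(A, y) - x0))   # small residual: correct reconstruction

The interval-passing algorithm plays the analogous role for sparse measurement matrices, exchanging interval bounds instead of point estimates along the edges of the matrix's bipartite graph.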
Styles: APA, Harvard, Vancouver, ISO, etc.
43

Oh, Han, i Yookyung Kim. "Low-Complexity Perceptual JPEG2000 Encoder for Aerial Images". International Foundation for Telemetering, 2011. http://hdl.handle.net/10150/595684.

Full text source
Abstract:
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada
A highly compressed image inevitably has visible compression artifacts. To minimize these artifacts, many compression algorithms exploit the varying sensitivity of the human visual system (HVS) to different frequencies. However, this sensitivity has typically been measured at the near-threshold level, where distortion is just noticeable, so it is unclear whether the same sensitivity applies at the supra-threshold level, where distortion is highly visible. In this paper, we measure the sensitivity of the HVS at several supra-threshold distortion levels based on our JPEG2000 distortion model. A low-complexity JPEG2000 encoder using the measured sensitivity is then described. For aerial images, the proposed encoder significantly reduces encoding time while maintaining superior visual quality compared with a conventional JPEG2000 encoder.
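The underlying mechanism, weighting each wavelet subband's distortion by a visual sensitivity value before rate allocation, can be sketched in Python as follows; the one-level Haar transform and the weight values are illustrative placeholders (JPEG2000 itself uses the 9/7 or 5/3 wavelets, and the paper's measured supra-threshold sensitivities are not reproduced here).

    import numpy as np

    def haar2d(img):
        # One-level 2-D Haar transform (image dimensions assumed even).
        a = (img[::2, :] + img[1::2, :]) / 2.0
        d = (img[::2, :] - img[1::2, :]) / 2.0
        return {"LL": (a[:, ::2] + a[:, 1::2]) / 2.0,
                "LH": (a[:, ::2] - a[:, 1::2]) / 2.0,
                "HL": (d[:, ::2] + d[:, 1::2]) / 2.0,
                "HH": (d[:, ::2] - d[:, 1::2]) / 2.0}

    # Hypothetical HVS weights: a lower weight means the eye is less
    # sensitive there, so distortion in that subband costs less.
    W = {"LL": 1.00, "LH": 0.60, "HL": 0.60, "HH": 0.35}

    def weighted_distortion(orig, recon):
        s0, s1 = haar2d(orig), haar2d(recon)
        return sum(W[b] * np.mean((s0[b] - s1[b]) ** 2) for b in W)

    img = np.arange(64.0).reshape(8, 8)
    print(weighted_distortion(img, img + 0.5))

A rate allocator that minimizes this weighted distortion rather than plain MSE can truncate or skip coding passes that would not be visually noticeable, which is one way perceptual weighting can reduce encoding work.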
Styles: APA, Harvard, Vancouver, ISO, etc.
44

Vorhies, John T. "Low-complexity Algorithms for Light Field Image Processing". University of Akron / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=akron1590771210097321.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
45

Salman, Naveed. "Optimised low complexity localisation in wireless sensor networks". Thesis, University of Leeds, 2013. http://etheses.whiterose.ac.uk/5823/.

Full text source
Abstract:
Wireless sensor networks (WSNs) consist of many small (up to several hundred) low-powered sensing nodes. These nodes may be capable of sensing temperature, humidity, light intensity, etc. In location-aware WSNs, the nodes can, in addition to sensing environmental conditions, also locate themselves, enabling many new applications in the wireless communications industry, such as firefighter tracking, cattle and wildlife monitoring, and logistics. One way to locate the nodes is to use the global positioning system (GPS); however, deploying a GPS chip on every sensor node is expensive, the chips are power-hungry, and GPS-assisted nodes can only be located when a line of sight (LoS) to the navigational satellites is guaranteed. Alternatively, nodes can be located using low-complexity, cheap local positioning systems (LPS). Various techniques for locating wireless sensor nodes can be found in the literature. Algorithms based on the absolute distance between nodes are known as range-based algorithms, while algorithms that do not require the inter-node distance are called range-free algorithms; the latter use the number of hops between two communicating nodes as a distance metric. Range-based algorithms are, however, more accurate than range-free ones. In range-based algorithms, distance can be estimated from the angle of the impinging signal, a technique commonly known as angle of arrival (AoA). Apart from being very sensitive to multipath errors, AoA is not favoured for low-complexity WSN localisation because an array of antennas or microphones is required on each node to estimate the angle of the incoming signal, which increases the complexity and cost of the system. Absolute distance can instead be estimated from either the delay or the attenuation of the signal. Systems exploiting the delay are known as time of arrival (ToA) systems; ToA localisation, although more accurate, requires highly accurate clocks and is therefore high in complexity. Received signal strength (RSS) based systems, on the other hand, require no additional hardware and are thus more suitable for WSNs. For location estimation via RSS (and ToA), the so-called trilateration technique is used: a number of nodes with known locations, usually rich in resources and known as anchor nodes (ANs), are used to estimate the locations of target nodes (TNs). The locations of the ANs can be determined using GPS, or the ANs can be placed at predetermined positions. Readings from the TN are received at the ANs and transmitted to a central station for processing. Owing to its straightforward implementation, RSS is an attractive approach for low-cost localisation systems such as WSNs, so a major part of this thesis focuses on RSS-based localisation. The accuracy of RSS location estimates depends strongly on knowledge of the distance-power gradient, the so-called path-loss exponent (PLE), so degraded system performance is expected when the assumed PLE is inaccurate. Although the propagation model is difficult to characterize in uncertain environments, the majority of current studies assume exact knowledge of the PLE.

This is a gross oversimplification, and this thesis therefore investigates methods that treat the PLE as an unknown variable in addition to the location coordinates of the target node. The first part of the thesis deals with joint estimation of the PLE and the location based on maximum likelihood (ML) and linear least squares (LLS) methods, respectively; error analysis of location estimates under incorrect PLE assumptions is carried out for both the ML and LLS techniques in their respective chapters. Furthermore, novel ideas such as modelling the PLE as an unknown random variable and developing a maximum a posteriori (MAP) estimator are discussed, and the hybrid Cramer-Rao bound (CRB) is derived as a benchmark for the MAP estimator. To further optimise the LLS technique, optimal AN selection and weighted least squares (WLS) methods are also proposed, and a new linear CRB is derived as a benchmark for the performance of the LLS estimator.

The second part examines another aspect of localisation that affects accuracy: the AN/TN geometry. It is well known that the accuracy of TN location estimation depends on the TN's relative angles to the ANs, so the placement of ANs has an impact on location accuracy. AN positions that guarantee the best accuracy over the entire network area are obtained via extensive simulation, by choosing the placement of ANs that minimises the mean CRB. Finally, the impact of localisation error on upper-layer applications, i.e. the routing of packets, is studied. Until recently, the fundamental assumption in location-based routing was absolute knowledge of the locations of the forwarding nodes; this is unrealistic in localised networks, and hence algorithms resilient to location error need to be developed. The first step is to quantify the impact of location error on geographic routing parameters such as the packet delivery ratio (PDR) and loss rate (LR). Error analysis is therefore performed via simulation for location errors induced by ToA and RSS localisation, and an algorithm is developed that reduces the performance degradation due to location error; its superiority is demonstrated via simulation.
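The two ingredients at the core of the RSS approach, the log-distance path-loss model and linear least squares trilateration, can be sketched in Python as follows; the calibration constants (P0, d0) and the PLE value are hypothetical, and the PLE is treated as known here, whereas the thesis's point is precisely to estimate it jointly with the location.

    import numpy as np

    # Log-distance path-loss model: RSS(d) = P0 - 10*alpha*log10(d/d0).
    P0, d0, alpha = -40.0, 1.0, 3.0   # hypothetical calibration; alpha is the PLE

    def rss_to_distance(rss):
        return d0 * 10 ** ((P0 - rss) / (10 * alpha))

    def lls_locate(anchors, dists):
        # Subtract the last anchor's circle equation from the others to
        # cancel the quadratic terms, then solve the linear system.
        (xN, yN), dN = anchors[-1], dists[-1]
        A = np.array([[2 * (x - xN), 2 * (y - yN)] for x, y in anchors[:-1]])
        b = np.array([dN**2 - d**2 + x**2 - xN**2 + y**2 - yN**2
                      for (x, y), d in zip(anchors[:-1], dists[:-1])])
        return np.linalg.lstsq(A, b, rcond=None)[0]

    anchors = [(0.0, 0.0), (20.0, 0.0), (0.0, 20.0), (20.0, 20.0)]
    target = np.array([7.0, 12.0])
    rss = [P0 - 10 * alpha * np.log10(np.hypot(*(target - a)) / d0) for a in anchors]
    print(lls_locate(anchors, [rss_to_distance(r) for r in rss]))  # ~ [7. 12.]

With an inaccurate alpha, rss_to_distance is biased for every anchor at once, which is why a PLE error propagates so strongly into the final location estimate.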
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Drummond, Blair. "Limits of Rauzy Graphs of Low-Complexity Words". Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39594.

Full text source
Abstract:
We consider Benjamini-Schramm limits of Rauzy graphs of low-complexity words. Low-complexity words are infinite words (over a finite alphabet) for which the number of subwords of length n is bounded by some Kn; examples of such words include the Thue-Morse word 01101001... and the Fibonacci word. The Rauzy graph Rn(omega) has the length-n subwords of omega as vertices, and an oriented edge between two vertices indicates that the two words appear immediately adjacent to each other in omega (with overlap); each edge also carries a label indicating which "new letter" was appended to the end of the edge's terminal vertex. In a natural way, the labels of consecutive edges in a Rauzy graph encode subwords of omega. The Benjamini-Schramm limit of a sequence of graphs is a distribution on (possibly infinite) rooted graphs governed by the convergence in distribution of random neighbourhoods of the sequence of finite graphs. For Rauzy graphs without edge labellings, we establish that the Rauzy graphs of aperiodic low-complexity words converge to the line graph in the Benjamini-Schramm sense. In the same setting, but for edge-labelled Rauzy graphs, we prove that the limit exists when the frequencies of all subwords of the infinite word omega are well defined (that is, when the subshift of omega is uniquely ergodic), and we show that the limit can be identified with the unique ergodic measure associated with the subshift generated by the word. The eventually periodic (i.e. finite) cases are also treated. Finally, we show that for non-uniquely-ergodic systems the Benjamini-Schramm limit need not exist (though it can in some instances), and we provide examples demonstrating the variety of possible behaviours.
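As a concrete illustration of the objects involved (not of the limit theorems themselves), the following Python sketch builds the labelled Rauzy graph Rn of a finite prefix of the Fibonacci word; the prefix length is an arbitrary choice, and the factors of a long prefix stand in for the factors of the infinite word.

    def fibonacci_word(iterations=12):
        # Fibonacci word via the substitution 0 -> 01, 1 -> 0.
        w = "0"
        for _ in range(iterations):
            w = "".join("01" if c == "0" else "0" for c in w)
        return w

    def rauzy_graph(w, n):
        # Vertices are the length-n factors of w; every occurrence of a
        # length-(n+1) factor u yields the labelled edge u[:-1] -> u[1:],
        # the label being the appended letter u[-1].
        return {(u[:-1], u[1:], u[-1])
                for u in (w[i:i + n + 1] for i in range(len(w) - n))}

    w = fibonacci_word()
    for edge in sorted(rauzy_graph(w, 2)):
        print(edge)   # Sturmian complexity n+1: 3 vertices, 4 edges for n = 2

Reading the labels along a walk in this graph recovers a subword of omega, which is exactly the encoding the labelled limit result above concerns.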
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Kong, Jia Hao. "Low-complexity, low-area computer architectures for cryptographic application in resource constrained environments". Thesis, University of Nottingham, 2018. http://eprints.nottingham.ac.uk/45394/.

Full text source
Abstract:
RCE (Resource Constrained Environment) is known for its stringent hardware design requirements. With the rise of the Internet of Things (IoT), low-complexity and low-area designs are becoming prominent in the face of complex security threats. Two low-complexity, low-area cryptographic processors based on the ultimate reduced instruction set computer (URISC) are created to provide security features for wireless visual sensor networks (WVSN), using the field-programmable gate array (FPGA) based visual processors typically used in RCEs. The first processor is the Two Instruction Set Computer (TISC) running the Skipjack cipher. To improve security, a Compact Instruction Set Architecture (CISA) processor running the full AES with a modified S-Box was created. The modified S-Box achieved a gate-count reduction of 23% with no functional compromise compared to Boyar's. Using the Spartan-3L XC3S1500L-4-FG320 FPGA, the implementation of the TISC occupies 71 slices and 1 block RAM and achieves a throughput of 46.38 kbps at a stable 24 MHz clock. The CISA, which occupies 157 slices and 1 block RAM, achieves a throughput of 119.3 kbps at a stable 24 MHz clock. The CISA processor is demonstrated in two main applications. The first is a multilevel, multi-cipher architecture (MMA) with two modes of operation: (1) selecting cipher programs (primitives) and sharing crypto-blocks, and (2) using simple authentication and key renewal schemes, showing perceptual improvements over direct AES on images. The second application demonstrates the use of the CISA processor as part of a selective encryption architecture (SEA) in combination with the millions-of-instructions-per-second set partitioning in hierarchical trees (MIPS SPIHT) visual processor. The SEA is implemented on a Celoxica RC203 Virtex XC2V3000 FPGA, occupying 6251 slices, and a visual sensor is used to capture real-world images. Four image frames were captured from a camera sensor, compressed, selectively encrypted, and sent to a PC environment for decryption. The final design emulates a working visual sensor, from on-node processing and encryption to back-end data processing on a server computer.
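For context on what is being optimised: the standard (unmodified) AES S-Box is the composition of a GF(2^8) inversion and an affine map, and can be generated in a few lines of Python as below; the thesis's 23%-smaller S-Box is a hardware-level reformulation of this same function and is not reproduced here.

    def gmul(a, b):
        # Multiplication in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1.
        p = 0
        while b:
            if b & 1:
                p ^= a
            carry = a & 0x80
            a = (a << 1) & 0xFF
            if carry:
                a ^= 0x1B
            b >>= 1
        return p

    def rotl8(x, n):
        return ((x << n) | (x >> (8 - n))) & 0xFF

    # Multiplicative inverse table (the inverse of 0 is defined as 0).
    inv = [0] * 256
    for x in range(1, 256):
        inv[x] = next(y for y in range(1, 256) if gmul(x, y) == 1)

    # Affine transformation over GF(2), expressed with byte rotations.
    sbox = [b ^ rotl8(b, 1) ^ rotl8(b, 2) ^ rotl8(b, 3) ^ rotl8(b, 4) ^ 0x63
            for b in inv]
    assert sbox[0x00] == 0x63 and sbox[0x53] == 0xED   # FIPS-197 test values

Compact hardware implementations avoid the 256-entry lookup by computing the inversion in a composite field, which is the general route that gate-count optimisations such as Boyar's take.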
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Lin, Chien-Kuang, i 林建光. "Low-Complexity CodeBook Searching". Thesis, 2002. http://ndltd.ncl.edu.tw/handle/12850529071114363385.

Full text source
Abstract:
Master's thesis
National Chiao Tung University
Department of Communications Engineering
Academic year 90 (ROC calendar)
With the increasing demand for packet-voice transmission, low bit-rate speech coders have gradually become a research trend. In February 1991, the General Services Administration published Federal Standard 1016 (FS1016), which specifies the requirements for converting analog voice to digital data using 4.8 kbps CELP. However, the major obstacle to applying it in real-time applications is the massive computational complexity of the codebook search. In this thesis, we reduce the complexity of FS1016 by presenting a new codebook search algorithm. The improvement of the new algorithm over the original is illustrated not only by the reduction in the number of computations required in principle, but also demonstrated through an FS1016 ACM driver implemented under the Windows operating system. To conclude our lab's research in the area of packet-voice transmission, an ACM driver combining the fast search algorithm with a previously developed packet-loss recovery scheme has also been implemented.
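The cost the thesis attacks comes from the analysis-by-synthesis criterion: each candidate codevector must be filtered through the synthesis filter and scored before the best one is chosen. A naive exhaustive search of this kind is sketched below in Python; this is the baseline being accelerated, not the thesis's reduced-complexity algorithm, and the filter H and codebook here are random stand-ins, though the sizes match FS1016's 512-entry codebook and 60-sample subframes.

    import numpy as np

    def search_codebook(x, H, codebook):
        # Analysis-by-synthesis search: maximize correlation^2 / energy,
        # which accounts for the optimal gain of each codevector implicitly.
        best_j, best_score = -1, -np.inf
        for j, c in enumerate(codebook):
            y = H @ c                          # synthesize the candidate
            score = (x @ y) ** 2 / (y @ y + 1e-12)
            if score > best_score:
                best_j, best_score = j, score
        return best_j

    rng = np.random.default_rng(2)
    L, N = 60, 512                             # FS1016 subframe and codebook sizes
    H = np.tril(0.1 * rng.standard_normal((L, L)) + np.eye(L))  # toy synthesis filter
    codebook = rng.choice([-1.0, 0.0, 1.0], size=(N, L))        # ternary, as in FS1016
    x = rng.standard_normal(L)                 # target signal for the subframe
    print(search_codebook(x, H, codebook))

Each candidate costs a length-60 filtering plus two inner products, repeated 512 times per subframe, which is why fast search methods that prune candidates or reuse computations pay off.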
Styles: APA, Harvard, Vancouver, ISO, etc.
49

方耀諄. "Low Complexity Lossless Video Compression". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/dtv85n.

Full text source
Abstract:
Master's thesis
National Chiao Tung University
Department of Electronics Engineering
Academic year 92 (ROC calendar)
We present a line-based adaptive lossless video compression (LALVC) algorithm for interactive multimedia applications that demand low complexity and low latency. Communications between high-resolution display and storage devices require high bandwidth for exchanging raw data; to reduce the cost of video transmission without losing data accuracy, lossless video compression is necessary. For low complexity and low delay, the proposed LALVC adopts a simple yet efficient architecture consisting of a one-pass, raster-scan, transform-free coding process with a simple predictor. For low latency, zero-motion prediction and a single frame buffer are used to reduce temporal redundancy. In addition, to maximize coding efficiency for both natural and computer-generated video sequences, LALVC adaptively selects the best coding mode for each line in a frame. The entropy coding of each line is based on Golomb codes, which enhance coding efficiency with a low computational load and are easy to realize in hardware. Simulation results show that the temporal preprocessing and line-based mode decision of LALVC increase the compression ratio with a modest increase in complexity compared with JPEG-LS. Finally, to demonstrate easy hardware realization and real-time video compression, we realize the hardware architecture of LALVC for an ASIC design.
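Golomb coding of prediction residuals, the entropy-coding step mentioned above, can be sketched in Python as follows; the power-of-two (Rice) special case and the fixed parameter k are illustrative simplifications, since an adaptive coder in the JPEG-LS family derives k from running statistics.

    def zigzag(e):
        # Fold signed residuals onto non-negative integers: 0,-1,1,-2,2 -> 0,1,2,3,4.
        return 2 * e if e >= 0 else -2 * e - 1

    def rice_encode(u, k):
        # Golomb code with divisor 2^k: unary quotient (q ones then a zero)
        # followed by the k-bit binary remainder.
        q = u >> k
        bits = "1" * q + "0"
        if k:
            bits += format(u & ((1 << k) - 1), "0{}b".format(k))
        return bits

    residuals = [0, -1, 3, 2, -5, 1]   # hypothetical per-pixel prediction errors
    k = 2                              # fixed here; adaptive in real coders
    print([rice_encode(zigzag(e), k) for e in residuals])
    # ['000', '001', '1010', '1000', '11001', '010']

This is why such coders suit hardware: encoding needs only shifts, masks and a counter, with no multiplications and no code tables.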
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Chu, Che-yi, i 朱哲毅. "Low complexity tone detection method". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/38376136075098911857.

Full text source
Abstract:
Master's thesis
National Kaohsiung First University of Science and Technology
Institute of Computer and Communication Engineering
Academic year 101 (ROC calendar)
Audio analysis is widely used in many fields, such as speech recognition and dial-tone detection for carrier detect; whether the technology is voice recognition or carrier detection, tone detection is an essential part of audio analysis. The most common method for analysing the frequency dimension is the fast Fourier transform (FFT). The strength of the FFT is that it can analyse a number of frequencies simultaneously; its drawback is that it requires large amounts of memory and computing resources. If tone detection is performed on an embedded system and the number of tones to detect is limited, doing the tone analysis with the FFT is too costly. In this thesis, we propose a low-complexity algorithm for tone detection; the method has a low-complexity hardware implementation, low computational complexity and low memory usage, and is therefore suitable for most embedded systems. The tone detection method used in this thesis is a variation of the autocorrelation function, which gives it these low computational complexity and low memory usage characteristics. However, autocorrelation-based tone detection becomes worse the closer the detected frequency gets to the sampling frequency. This thesis introduces a concept called the "multiplier"; using it, the degradation near the sampling frequency is mitigated and the measurable frequency range is increased. Another problem with using the autocorrelation function for tone detection is interference from harmonics; this thesis shows how the interference caused by harmonics can be eliminated, thereby alleviating the problems caused by harmonic interference.
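For reference, the baseline the thesis builds on, estimating a tone's frequency from the peak of the autocorrelation function, looks roughly like the Python sketch below; the search band and signal parameters are arbitrary, and the thesis's "multiplier" refinement for frequencies near the sampling rate is not reproduced here.

    import numpy as np

    def detect_tone(x, fs, min_f=100.0, max_f=2000.0):
        # A tone at frequency f makes r(k) = sum_n x[n]*x[n+k] peak at the
        # lag nearest fs/f, so the peak lag gives the frequency estimate
        # using only multiply-accumulate operations and O(hi) memory.
        lo, hi = max(1, int(fs / max_f)), int(fs / min_f)
        r = [np.dot(x[:-k], x[k:]) for k in range(lo, hi + 1)]
        return fs / (lo + int(np.argmax(r)))

    fs = 8000.0
    t = np.arange(1024) / fs
    x = np.sin(2 * np.pi * 440.0 * t)
    print(detect_tone(x, fs))   # ~444 Hz: integer-lag resolution limits accuracy

The coarse integer-lag grid is exactly the weakness described above: the higher the tone frequency relative to fs, the fewer lags per period, hence the need to extend the measurable range.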
Styles: APA, Harvard, Vancouver, ISO, etc.