Dissertations / Theses on the topic 'Bit-level'

To see the other types of publications on this topic, follow the link: Bit-level.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 49 dissertations / theses for your research on the topic 'Bit-level.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Lin, Wenjing. "Bit level diversity combining for D-MIMO." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=106499.

Full text
Abstract:
Multiple-Input Multiple-Output (MIMO) transmission techniques have been shown to be a powerful performance-enhancing technology in wireless communications. However, in realistic systems, when the number of antennas is increased in a restricted space, the capacity gain of MIMO is limited. Furthermore, co-located MIMO (C-MIMO) systems cannot improve link quality when affected by shadowing. This motivates us to investigate Distributed MIMO (D-MIMO) systems. This work considers a bit-level combining scheme, aided by bit reliability information, for an uplink D-MIMO system over a composite Rayleigh-lognormal fading channel. Bit reliability information is derived from the log-likelihood ratio (LLR) and further modified for two MIMO detection schemes: SD-ML (Sphere Decoding - Maximum Likelihood) and MMSE-OSIC (Minimum Mean Square Error - Ordered Successive Interference Cancellation). Computer simulation results demonstrate that this bit-level combining scheme provides significant performance improvements for D-MIMO with M transmit antennas and L receive antennas on each of its N geographically dispersed receive nodes, over conventional C-MIMO with M transmit and L receive antennas, even in the presence of channel estimation errors or channel spatial correlation. Such a D-MIMO system is found to provide performance comparable to a C-MIMO system with M transmit and NL receive antennas, especially when spatial correlation becomes significant. Furthermore, an analytical BER evaluation technique is proposed for a C-MIMO system with SD-ML detection over a composite Rayleigh-lognormal fading channel with and without spatial correlation.
Numerical results show that our technique provides tight approximations for C-MIMO over spatially uncorrelated, semi-correlated, and correlated channels. We also provide a theoretical BER approximation technique for a D-MIMO system with SD-ML detection over a composite Rayleigh-lognormal fading channel with and without spatial correlation. Numerical results show that, by optimizing two parameters, the BER approximation technique provides a good approximation for an uncorrelated D-MIMO when the number of transmit antennas equals the number of receive antennas on each of its N geographically dispersed receive nodes. We further note that the parameters optimized for an uncorrelated D-MIMO with equal numbers of transmit and receive antennas on each receive node cannot provide a good approximation for an uncorrelated D-MIMO when the number of transmit antennas is less than the number of receive antennas on each node. These analytical results confirm the significant performance improvement provided by D-MIMO with bit-level combining.
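For readers unfamiliar with LLR-based combining, the idea behind the scheme above can be sketched for the simplest case, BPSK over AWGN, where per-node bit LLRs are simply summed. This is an illustrative toy under those assumptions, not the thesis's SD-ML/MMSE-OSIC derivation:

```python
def bit_llr(y, noise_var):
    """LLR of one BPSK bit (+1 maps to bit 0) observed in AWGN."""
    return 2.0 * y / noise_var

def combine_llrs(observations, noise_var):
    """Bit-level combining across dispersed receive nodes: sum the LLRs."""
    return sum(bit_llr(y, noise_var) for y in observations)

# Two nodes observe the same transmitted bit; one sees it reliably,
# the other barely.  The combined LLR is dominated by the reliable node.
total = combine_llrs([0.9, -0.2], noise_var=0.5)
decision = 0 if total > 0 else 1
```

The sign of the combined LLR gives the hard decision and its magnitude the overall reliability, which is what makes reliability-aided combining outperform hard-decision voting.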
APA, Harvard, Vancouver, ISO, and other styles
2

Jiang, Jing. "Advanced channel coding techniques using bit-level soft information." College Station, Tex.: Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1923.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Srinivasan, Sudarshan Kumar. "Efficient Verification of Bit-Level Pipelined Machines Using Refinement." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19815.

Full text
Abstract:
Functional verification is a critical problem facing the semiconductor industry: hardware designs are extremely complex and highly optimized, and even a single bug in deployed systems can cost more than $10 billion. We focus on the verification of pipelining, a key optimization that appears extensively in hardware systems such as microprocessors, multicore systems, and cache coherence protocols. Existing techniques for verifying pipelined machines either consume excessive amounts of time, effort, and resources, or are not applicable at the bit level, the level of abstraction at which commercial systems are designed and functionally verified. We present a highly automated, efficient, compositional, and scalable refinement-based approach for the verification of bit-level pipelined machines. Our contributions include: (1) A complete compositional reasoning framework based on refinement. Our notion of refinement guarantees that pipelined machines satisfy the same safety and liveness properties as their instruction set architectures. In addition, our compositional framework can be used to decompose correctness proofs into smaller, more manageable pieces, leading to drastic reductions in verification times and a high degree of scalability. (2) The development of ACL2-SMT, a verification system that integrates the popular ACL2 theorem prover (winner of the 2005 ACM Software System Award) with decision procedures. ACL2-SMT allows us to seamlessly take advantage of the two main approaches to hardware verification: theorem proving and decision procedures. (3) A proof methodology based on our compositional reasoning framework and ACL2-SMT that allows us to reduce the bit-level verification problem to a sequence of highly automated proof steps. (4) A collection of general-purpose refinement maps, functions that relate pipelined machine states to instruction set architecture states. These refinement maps provide more flexibility and lead to increased verification efficiency.
The effectiveness of our approach is demonstrated by verifying various pipelined machine models, including a bit-level, Intel XScale inspired processor that implements 593 instructions and includes features such as branch prediction, precise exceptions, and predicated instruction execution.
APA, Harvard, Vancouver, ISO, and other styles
4

Wentzlaff, David 1979. "Architectural implications of bit-level computation in communication applications." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87324.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lai, Jiunn-Yiing 1958. "RTL AND SWITCH-LEVEL SIMULATION COMPARISON ON EIGHT BIT MICROPROCESSOR." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276532.

Full text
Abstract:
In this research, an AHPL (A Hardware Programming Language) based automation system is used to design and verify the Intel 8080 microprocessor, from the RTL (Register Transfer Level) hardware description down to the network list of transistors. HPSIM is used as the RTL simulator: it interprets the AHPL description, executes the connections, branches, and register transfers, and prints line or register values for each circuit clock period. After the AHPL description has been translated into a switch-level link list, ESIM is applied for more detailed simulation to ensure that the digital behavior of the microprocessor design is correct. ESIM is an event-driven switch-level simulator which accepts commands from the user and executes each command before reading the next one. After performing these different levels of simulation, a comparison is discussed at the end.
APA, Harvard, Vancouver, ISO, and other styles
6

Stierstorfer, Clemens [Verfasser]. "A Bit-Level-Based Approach to Coded Multicarrier Transmission / Clemens Stierstorfer." Aachen : Shaker, 2010. http://d-nb.info/1084535459/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Oh, Min-seok. "Low complexity bit-level soft-decision decoding for Reed-Solomon codes." Thesis, University of Surrey, 1999. http://epubs.surrey.ac.uk/842687/.

Full text
Abstract:
Reed-Solomon codes (RS codes) are an important method for achieving error correction in communication and storage systems. However, it has proved difficult to find a soft-decision decoding method of low complexity. Moreover, some previous soft-decision decoding approaches could not fully exploit bit-level soft-decision information. Even though RS codes have powerful error correction capability, this is a critical shortcoming. This thesis presents bit-level soft-decision decoding schemes for RS codes. The aim is to design a low-complexity sequential decoding method, based on bit-level soft-decision information, that approaches maximum-likelihood performance. Firstly, a trellis decoding scheme which permits easy implementation is introduced, since the soft-decision information can be used directly. To allow bit-level soft decisions, a binary equivalent code is introduced and Wolf's method is used to construct the binary trellis from a systematic parity-check matrix. Secondly, the Fano sequential decoding method is chosen, which is sub-optimal and adaptable to channel conditions. This method does not need a large amount of storage to perform an efficient trellis search. The Fano algorithm is then modified to improve the error-correcting performance. Finally, further methods of complexity reduction are presented without loss of decoding performance, based on reliability-first search decoding using permutation groups for RS codes. Compared with the decoder without permutation, those schemes give a large complexity reduction and a performance improvement approaching near maximum-likelihood performance. In this thesis, three types of permutation, cyclic, squaring, and hybrid permutation, are presented and the decoding methods using them are implemented.
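Wolf's construction mentioned above builds a trellis whose states are partial syndromes of the parity-check matrix. A minimal sketch, exhaustive and therefore only for tiny codes (a real decoder would run a Viterbi or Fano search over the same trellis rather than enumerate paths):

```python
def trellis_codewords(H):
    """Wolf-style syndrome trellis for a binary parity-check matrix H:
    states are partial syndromes; each bit either keeps the state (bit 0)
    or XORs in the corresponding column of H (bit 1).  Paths that end in
    the all-zero syndrome are exactly the codewords."""
    n = len(H[0])
    cols = [tuple(row[j] for row in H) for j in range(n)]
    zero = (0,) * len(H)
    paths = {zero: [[]]}              # state -> list of bit prefixes reaching it
    for col in cols:
        nxt = {}
        for state, prefixes in paths.items():
            for bit in (0, 1):
                s = tuple(a ^ (bit & c) for a, c in zip(state, col))
                nxt.setdefault(s, []).extend(p + [bit] for p in prefixes)
        paths = nxt
    return sorted(paths.get(zero, []))

# Single parity-check code of length 3: exactly the even-weight words.
codewords = trellis_codewords([[1, 1, 1]])
```

Because every branch corresponds to one bit, per-bit soft information (LLRs) can be attached directly to branch metrics, which is what makes this trellis attractive for bit-level soft-decision decoding.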
APA, Harvard, Vancouver, ISO, and other styles
8

Shah, Milap. "Parallel AES: inter block diffusion at bit level and compression." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-42449.

Full text
Abstract:
Information is intelligent data from which knowledge can be derived and properly interpreted. With the advancement of technology, transmitting information over networks has become routine, and this information must be transmitted securely. Data security would not be a problem if a secure channel could be provided for every transmission; in practice, information must be converted into an unintelligible form before being sent over an unsecured channel. Encryption is the technique by which original information is converted into such an unintelligible form. Over time, various encryption algorithms have been employed so that information can be transmitted securely over unsecured channels: unless an intruder can decrypt the ciphertext, he or she cannot gain any information from it. As new algorithms are designed, however, they are challenged and their cryptanalysis becomes available. The Advanced Encryption Standard (AES), proposed in 1998, was later widely accepted as a highly secure encryption algorithm for protecting information sent over unsecured channels. To make AES more secure, a new scheme called Parallel AES was employed: it takes four 16-byte blocks of plaintext at a time and generates four 16-byte blocks of ciphertext, thereby providing diffusion across blocks. By doing this, Parallel AES proved to be considerably stronger than sequential AES.
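The inter-block diffusion idea can be illustrated with a toy XOR mixing step over four 16-byte blocks. This is only a sketch of the diffusion concept, not the thesis's Parallel AES construction:

```python
def diffuse_blocks(blocks):
    """Toy inter-block diffusion: XOR the four blocks together and fold
    the result back into each block, so every output block depends on
    all four inputs.  The step is its own inverse."""
    assert len(blocks) == 4 and all(len(b) == 16 for b in blocks)
    mix = bytes(a ^ b ^ c ^ d for a, b, c, d in zip(*blocks))
    return [bytes(x ^ m for x, m in zip(blk, mix)) for blk in blocks]

blocks = [bytes([i] * 16) for i in (1, 2, 3, 4)]
out = diffuse_blocks(blocks)
```

In a real scheme such a mixing layer would sit alongside the AES rounds themselves; here it only shows how a change in one input block propagates into the other output blocks.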
APA, Harvard, Vancouver, ISO, and other styles
9

González, Julio E. "A study of gray level recording capability for a reflective six bit desktop scanner /." Online version of thesis, 1994. http://hdl.handle.net/1850/11289.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Satheesh, Varma Nikhil. "Design and implementation of an approximate full adder and its use in FIR filters." Thesis, Linköpings universitet, Elektroniksystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-89430.

Full text
Abstract:
Implementation of the polyphase decomposed FIR filter structure involves two steps: the generation of the partial products and the efficient reduction of the generated partial products. The partial products are generated by constant multiplication of the filter coefficients with the input data, and the reduction of the partial products is done by building a pipelined adder tree using FAs and HAs. To improve the speed and reduce the complexity of the reduction tree, a 4:2 counter is introduced into the tree. The reduction tree is designed using a bit-level optimized ILP problem whose objective function minimizes the overall cost of the hardware used. For this purpose, a layout design for a 4:2 counter has been developed and its cost function derived by comparing the complexity of the design against a standard FA design. The layout of the 4:2 counter is implemented in a 65 nm process using both static CMOS and DPL logic styles. The average power consumption drawn from a 1 V power supply was 16.8 μW for the static CMOS design and 12.51 μW for the DPL design. The worst-case rise or fall time was 350 ps for the DPL logic and 260 ps for the static CMOS design. Using the 4:2 counter in the reduction tree introduced errors into the filter response, but it helped reduce the number of pipeline stages and improve the speed of the partial product reduction.
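The 4:2 counter (compressor) discussed above can be modeled behaviorally as two cascaded full adders; a minimal sketch that verifies the defining arithmetic identity of the cell:

```python
def full_adder(a, b, c):
    """1-bit full adder: sum and carry of three input bits."""
    s = a ^ b ^ c
    carry = (a & b) | (a & c) | (b & c)
    return s, carry

def compressor_4_2(a, b, c, d, cin):
    """4:2 compressor as two cascaded full adders; it satisfies
    a + b + c + d + cin == s + 2 * (carry + cout)."""
    s1, cout = full_adder(a, b, c)
    s, carry = full_adder(s1, d, cin)
    return s, carry, cout

# Exhaustively verify the defining identity over all 32 input patterns.
ok = all(
    a + b + c + d + cin == s + 2 * (carry + cout)
    for a in (0, 1) for b in (0, 1) for c in (0, 1)
    for d in (0, 1) for cin in (0, 1)
    for s, carry, cout in [compressor_4_2(a, b, c, d, cin)]
)
```

The thesis's transistor-level cell is of course more elaborate; this only captures the arithmetic contract that lets a reduction tree replace pairs of full adders with 4:2 counters.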
APA, Harvard, Vancouver, ISO, and other styles
11

Fang, Gloria(Gloria Yu Liang). "Instruction-level power consumption simulator for modeling simple timing and power side channels in a 32-bit RISC-V micro-processor." Thesis, Massachusetts Institute of Technology, 2021. https://hdl.handle.net/1721.1/130686.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 139-140).
We create a Python-based RISC-V simulator capable of simulating any assembly code written in RISC-V, and even of performing simple power analysis of RISC-V designs. The power consumption of non-privileged RISC-V RV32IM instructions is measured experimentally, forming the basis for our simulator. These instructions include memory loads and stores, PC jumps and branches, as well as arithmetic instructions on register values. The object-oriented simulator also supports stepping and debugging. In the context of designing software for hardware use, the simulator helps assess vulnerability to side-channel attacks by accepting input power consumption values. The power consumption graph of any disassembled RISC-V code can be obtained if the power consumption of each instruction is given as an input; then, from the output power consumption waveforms, we can assess how vulnerable a system is to side-channel attacks. Because the power values can be customized based on what is experimentally measured, our simulator can be applied to any disassembled code and to any system, as long as the input power consumption of each instruction is supplied. Finally, we demonstrate an example application of the simulator on a pseudorandom function for simple side-channel power analysis.
by Gloria (Yu Liang) Fang.
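The instruction-to-power mapping at the core of such a simulator can be sketched as a simple lookup. The per-instruction energy figures below are made up for illustration; they are not the thesis's measurements:

```python
# Hypothetical per-instruction power figures (illustrative, not measured).
POWER = {"lw": 5.1, "sw": 4.8, "add": 2.0, "mul": 6.3, "beq": 2.4}

def power_trace(opcodes, table=POWER):
    """Turn a disassembled opcode stream into a per-instruction power
    waveform, the way an instruction-level power simulator would."""
    return [table[op] for op in opcodes]

trace = power_trace(["lw", "add", "mul", "sw"])
```

A data-dependent branch then shows up as a visibly different waveform for each path, which is exactly the leakage a timing/power side-channel assessment looks for.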
APA, Harvard, Vancouver, ISO, and other styles
12

Taylor, Elvin Lattis Jr. "Modeling and Simulation of a Video-on-Demand Network Implementing Adaptive Source-Level Control and Relative Rate Marking Flow Control for the Available Bit Rate Service." Thesis, Virginia Tech, 1997. http://hdl.handle.net/10919/31097.

Full text
Abstract:
The Available Bit Rate (ABR) service class for the Asynchronous Transfer Mode (ATM) protocol was originally designed to manage data traffic. ABR flow control makes no guarantees concerning cell transfer delay or cell delay variation. A closed-loop feedback mechanism is used for traffic management. To use this class of service for video transport, the video source will accept feedback from the network and adapt its source rate based on this status information. The objective of this research is to assess the ability of the ATM ABR service class to deliver Moving Picture Experts Group version 1 (MPEG-1) video. Three approaches to source-level control are compared: (i) arbitrary loss or no control method, (ii) selective discard of MPEG B-pictures, and (iii) selective discard of MPEG B- and P-pictures. Performance is evaluated based on end-to-end delay, congested queue occupancy levels, network utilization, and jitter. A description of the investigation, assumptions, limitations, and results of the simulation study are included.
Master of Science
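The selective-discard policies compared above can be sketched as a filter over an MPEG-1 group of pictures; an illustrative toy, not the study's simulation model:

```python
def discard(gop, drop_types):
    """Source-level control by selective discard: drop MPEG-1 pictures
    of the given types (e.g. {'B'} or {'B', 'P'}) before transmission."""
    return [p for p in gop if p not in drop_types]

gop = list("IBBPBBPBBPBB")           # a common MPEG-1 group-of-pictures pattern
only_ip = discard(gop, {"B"})        # policy (ii): drop B-pictures
only_i = discard(gop, {"B", "P"})    # policy (iii): drop B- and P-pictures
```

B-pictures are dropped first because no other picture is predicted from them, so their loss does not propagate; dropping P-pictures saves more rate but degrades every picture predicted from them.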
APA, Harvard, Vancouver, ISO, and other styles
13

Law, Eugene. "OPTIMIZED LOW BIT RATE PCM/FM TELEMETRY WITH WIDE IF BANDWIDTHS." International Foundation for Telemetering, 2002. http://hdl.handle.net/10150/605602.

Full text
Abstract:
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California
This paper will present the results of some experiments with non-coherent, single symbol detection of pulse code modulation (PCM)/frequency modulation (FM) where the receiver intermediate frequency (IF) bandwidth is much wider than the bit rate. The experiments involved varying the peak deviation and measuring the bit error probability (BEP) at various signal energy per bit to noise power spectral density ratios (E(b)/N(o)). The experiments showed that the optimum peak-to-peak deviation was about 0.7 to 0.8 times the –3 dB IF bandwidth and that the E(b)/N(o) required for a given BEP increased as the ratio of IF bandwidth to bit rate increased. Further, bi-phase-level/FM performed slightly better than non-return-to-zero-level (NRZ-L)/FM with an ac coupled RF signal generator and IF bandwidths much wider than the bit rate.
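As a point of reference for such measurements, the textbook bit error probability of idealized noncoherent orthogonal binary FSK with single-symbol detection can be computed directly; the wide-IF PCM/FM results above require more E(b)/N(o) than this idealized case:

```python
import math

def bep_noncoherent_fsk(ebno_db):
    """Textbook BEP of noncoherent orthogonal binary FSK with
    single-symbol detection: Pb = 0.5 * exp(-(Eb/No) / 2)."""
    ebno = 10.0 ** (ebno_db / 10.0)       # dB -> linear ratio
    return 0.5 * math.exp(-ebno / 2.0)

p = bep_noncoherent_fsk(10.0)             # BEP at Eb/No = 10 dB
```

Comparing measured BEP curves against this reference makes the penalty of widening the IF bandwidth relative to the bit rate easy to quantify.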
APA, Harvard, Vancouver, ISO, and other styles
14

Susanto, Misfa. "Network Coding for Multihop Wireless Networks: Joint Random Linear Network Coding and Forward Error Correction with Interleaving for Multihop Wireless Networks." Thesis, University of Bradford, 2015. http://hdl.handle.net/10454/14864.

Full text
Abstract:
Optimising the throughput of wireless networks is one of the challenging tasks in communication engineering, since wireless channels are prone to errors due to path loss, random noise, and fading. Transmission errors are worse in a multihop scenario because of their accumulative effect. Network Coding (NC) is an elegant technique for improving the throughput of a communication network. The bit error rates over one modulation symbol of 16- and higher-order Quadrature Amplitude Modulation (QAM) schemes follow a certain pattern. The Scattered Random Network Coding (SRNC) system was proposed in the literature to exploit the error pattern of 16-QAM, using bit-scattering to improve the throughput of a multihop network to which Random Linear Network Coding (RLNC) is applied. This thesis aims to further improve the SRNC system by using Forward Error Correction (FEC) codes; the proposed system is called Joint RLNC and FEC with interleaving. The first proposed system (System-I) uses a Convolutional Code (CC) FEC. The performance of System-I with CC rates of 1/2, 1/3, 1/4, 1/6, and 1/8 was analysed using simulation tools developed in MATLAB and compared to two benchmark systems: the SRNC system (System-II) and the RLNC system (System-III). The second proposed system (System-IV) uses a Reed-Solomon (RS) FEC code. The performance of System-IV was evaluated and compared to three systems: System-I with a 1/2 CC rate, System-II, and System-III. All simulations were carried out over three channel environments: 1) an AWGN channel, 2) a Rayleigh fading channel, and 3) a Rician fading channel, where both fading channels are in series with the AWGN channel. The simulation results show that the proposed systems improve on the SRNC system; how much gain can be achieved depends on the FEC type used and the channel environment.
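The RLNC building block used throughout can be sketched over GF(2), where linear combination reduces to XOR. This is a minimal sketch of the encoder only; the thesis's systems add FEC and interleaving on top:

```python
import random

def rlnc_encode(packets, n_coded, rng):
    """Random linear network coding over GF(2): each coded packet is a
    random XOR combination of the source packets, and its coefficient
    vector travels with it so receivers can solve for the sources."""
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in packets]
        if not any(coeffs):
            coeffs[0] = 1              # avoid a useless all-zero combination
        payload = bytes(len(packets[0]))
        for c, pkt in zip(coeffs, packets):
            if c:
                payload = bytes(x ^ y for x, y in zip(payload, pkt))
        coded.append((coeffs, payload))
    return coded

pkts = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
coded = rlnc_encode(pkts, 4, random.Random(7))
```

A receiver that collects any set of coded packets whose coefficient vectors are linearly independent can recover the sources by Gaussian elimination over GF(2).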
APA, Harvard, Vancouver, ISO, and other styles
15

Raimondo, Gianluca. "Particle mixing, two level systems and gauge theory." Doctoral thesis, Universita degli studi di Salerno, 2012. http://hdl.handle.net/10556/1304.

Full text
Abstract:
2010 - 2011
In this thesis I discuss the theory of two-level systems and the theory of oscillating particles in quantum field theory. In the first chapter I consider the time evolution of a two-level system, a qubit, to show that it obeys a gauge-invariant evolution equation that is local in time. I construct the covariant derivative operator and show that it is related to the free energy. The gauge invariance of the time evolution of the two-level system is analogous to the phenomenon of birefringence. I also show that two-level systems present a Berry-like and an Anandan-Aharonov phase. Finally, I discuss entropy and environment effects and the distance in projective Hilbert space between two-level states, to show that the latter is properly related to the Aharonov-Anandan phase. In the second chapter I review the results obtained in QFT for particle mixing, analyzing the theoretical construction and the oscillation formula in the fermion case. I emphasize the differences between the quantum-mechanical formulas and the QFT formulas. The unitary inequivalence between the flavor and the mass eigenstates is also shown, and the structure of the current for charged fields is discussed. I found a non-perturbative vacuum structure for the mixing particles that, among other things, leads to a non-zero contribution to the value of the cosmological constant (chapter 3). Several links between the first and second chapters arise from this thesis and shed light on the fact that it is possible to construct a generic two-level quantum field theory, that is, an extension of the quantum-mechanical bit theory in a quantum field theory framework. [edited by author]
APA, Harvard, Vancouver, ISO, and other styles
16

Clementson, Martin, and John Augustsson. "User Study of Quantized MIP Level Data In Normal Mapping Techniques." Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14919.

Full text
Abstract:
The standard MIP mapping technique halves the resolution of textures at each level of the MIP chain. In this thesis the bits per pixel (bpp) are reduced as well. Normal maps are generally used with MIP maps, and today's industry standard for these is usually 24 bpp. The reduction is simulated, as there is currently no support for lower bpp in GPU hardware. Objectives: to render images of normal-mapped objects with decreasing bpp at each level of a MIP chain and to evaluate these against the standard MIP mapping technique using a subjective user study and an objective image comparison method. Methods: a custom software renderer is implemented to produce the images, with quantized normal maps manually placed in a MIP chain. The subjective experiment uses a 2AFC test, and the objective part consists of a PDIFF comparison of the images. Results: the results indicate that as the MIP level increases and the bpp is lowered, users can increasingly see a difference. Conclusions: participants can see a difference as the bpp is reduced, which indicates that normal mapping is not suitable for this method; however, further study is required before the technique can be dismissed as an applicable method.
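The bpp reduction being simulated can be sketched as uniform requantization of each texel component per MIP level. This is an assumed, simplified model for illustration, not necessarily the authors' pipeline:

```python
def requantize(value, bits):
    """Requantize an 8-bit component down to `bits` bits, then expand it
    back to 8 bits, emulating storage of a lower-bpp MIP level."""
    levels = (1 << bits) - 1
    return round(round(value / 255 * levels) * 255 / levels)

def quantized_mip_chain(texel, bpp_per_channel):
    """E.g. keep 8 bits per channel at MIP level 0 and progressively
    fewer bits at each coarser level."""
    return [tuple(requantize(c, b) for c in texel) for b in bpp_per_channel]

# One normal-map texel stored at 8, 6, 4, and 2 bits per channel.
chain = quantized_mip_chain((200, 100, 50), [8, 6, 4, 2])
```

The growing quantization error at coarser levels is what the 2AFC study probes: whether viewers can tell the requantized chain from the full-precision one.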
APA, Harvard, Vancouver, ISO, and other styles
17

Weichselbaum, Andreas. "Nanoscale Quantum Dynamics and Electrostatic Coupling." Ohio University / OhioLINK, 2004. http://www.ohiolink.edu/etd/view.cgi?ohiou1091115085.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Zhang, Liang. "Development of a CMOS pixel sensor for the outer layers of the ILC vertex detector." PhD thesis, Université de Strasbourg, 2013. http://tel.archives-ouvertes.fr/tel-01068494.

Full text
Abstract:
This work deals with the design of a CMOS pixel sensor prototype (called MIMOSA 31) for the outer layers of the International Linear Collider (ILC) vertex detector. CMOS pixel sensors (CPS), also called monolithic active pixel sensors (MAPS), have demonstrated attractive performance with respect to the requirements of the vertex detector of the future linear collider. MIMOSA 31, developed at IPHC-Strasbourg, is the first pixel sensor integrated with a 4-bit column-level ADC for the outer layers. It is composed of a matrix of 64 rows and 48 columns. The pixel concept combines in-pixel amplification with a correlated double sampling (CDS) operation in order to reduce the temporal and fixed pattern noise (FPN). At the bottom of the pixel array, each column is terminated with an analog-to-digital converter (ADC). The self-triggered ADC, accommodating the pixel readout in a rolling-shutter mode, completes the conversion by performing a multi-bit/step approximation. The ADC design was optimized for power saving at the sampling frequency. Given that the hit density in the outer layers of the ILC vertex detector is on the order of a few per thousand, the ADC works in two modes: active mode and inactive mode. This thesis presents the details of the prototype chip and its laboratory test results.
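The correlated double sampling (CDS) step mentioned above can be sketched in a few lines: subtracting each pixel's reset-level sample from its signal-level sample cancels the pixel's fixed offset. A minimal numerical illustration, not the sensor's analog implementation:

```python
def correlated_double_sampling(reset, signal):
    """CDS: subtract each pixel's reset-level sample from its
    signal-level sample, cancelling the pixel's fixed offset (FPN)."""
    return [s - r for r, s in zip(reset, signal)]

# Each pixel carries a different static offset; CDS removes it.
offsets = [10.0, 12.5, 9.0]          # per-pixel fixed pattern noise
photo = [3.0, 3.0, 3.0]              # true photo-generated signal
raw = [o + p for o, p in zip(offsets, photo)]
cds = correlated_double_sampling(offsets, raw)
```

Because reset noise is sampled in both measurements, the subtraction also suppresses correlated temporal noise, which is why CDS precedes the column-level ADC in the readout chain.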
APA, Harvard, Vancouver, ISO, and other styles
19

Nguyen, Trung-Hiên. "Theoretical and experimental study of optical solutions for analog-to-digital conversion of high bit-rate signals." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S110/document.

Full text
Abstract:
Two-dimensional modulation formats (i.e., based on the amplitude and phase of the carrier wave) have recently entered the field of optical fiber transmission thanks to joint progress in high-speed electronics and signal processing, both essential to build opto-electronic receivers using coherent detection of optical signals. To overcome the current switching-speed limits of electronic integrated circuits, a research direction was opened a few years ago that consists in using optical technologies to facilitate the parallelization of signal processing, notably in the ultra-fast signal sampling stage made possible by very high-performance optical clocks. The main theme of this thesis is the theoretical and experimental study of the analog-to-digital conversion (ADC) of optical signals by a coherent opto-electronic receiver, combining the stages of linear optical sampling, analog-to-digital conversion and signal processing. A prototype, using an original solution for the sampling source, is modeled, built and characterized, enabling the time-domain reconstruction of optical signals modulated with various formats: NRZ, QPSK, 16-QAM. The optical and electronic limitations of the system are analyzed, in particular the impact on signal reconstruction of various parameters: the extinction ratio of the optical source and the ADC parameters (bandwidth BW, integration time and effective number of bits ENOB).
In addition, new signal-processing algorithms are proposed in the context of high-bit-rate coherent optical transmission using two-dimensional (amplitude and phase) modulation formats. Two solutions are proposed for the compensation of IQ quadrature imbalance in single-carrier transmission: an original method of maximum signal-to-noise-ratio estimation, as well as a new structure for joint compensation and equalization; both methods are validated experimentally and numerically with a 16-QAM signal. Furthermore, an improved carrier-recovery solution (frequency offset and phase estimation), based on a circular harmonic expansion of the log-likelihood function, is validated numerically for the first time in the context of optical transmission (up to 128-QAM modulation). Finally, the tools developed in this work enabled the demonstration of transmission over 100 km of a 10 Gbaud QPSK signal strongly limited by nonlinear phase noise and optically regenerated using a phase-preserving power limiter based on a photonic-crystal nanocavity.
Bi-dimensional modulation formats, based on amplitude and phase signal modulation, are now commonly used in optical communications thanks to breakthroughs in the fields of electronics and digital signal processing (DSP) required in coherent optical receivers. Photonic solutions could compensate for the current bandwidth limitations of electrical circuits by facilitating signal-processing parallelization. Photonics is particularly interesting for signal sampling thanks to the available stable optical clocks. The heart of the present work concerns analog-to-digital conversion (ADC) as a key element in coherent detection. A prototype of linear optical sampling, using an original solution for the optical sampling source, is built and validated with the successful equivalent-time reconstruction of NRZ, QPSK and 16-QAM signals. Some optical and electrical limitations of the system are experimentally and numerically analyzed, notably the extinction ratio of the optical source and the ADC parameters (bandwidth, integration time, effective number of bits ENOB). Moreover, new DSP tools are developed for optical transmission using bi-dimensional modulation formats (amplitude and phase). Two solutions are proposed for IQ quadrature imbalance compensation in single-carrier optical coherent transmission: an original method of maximum signal-to-noise ratio estimation (MSEM) and a new structure for joint compensation and equalization; these methods are experimentally and numerically validated with 16-QAM signals. Moreover, an improved solution for carrier recovery (frequency offset and phase estimation), based on a circular harmonic expansion of the maximum log-likelihood function, is studied for the first time in the context of optical telecommunications. This solution, which can operate with any kind of bi-dimensional modulation format, is numerically validated up to 128-QAM.
All the DSP tools developed in this work are finally used in a demonstration of a 10 Gbaud QPSK 100 km transmission experiment, featuring a strong nonlinear phase-noise limitation and regenerated using a phase-preserving and power-limiting function based on a photonic crystal nanocavity.
APA, Harvard, Vancouver, ISO, and other styles
20

Chang, Tian-Sheuan, and 張添烜. "VLSI ARCHITECTURE DESIGN FOR BIT-LEVEL INNER PRODUCT." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/68111050957313296182.

Full text
Abstract:
Doctoral dissertation
National Chiao Tung University
Department of Electronics Engineering
88
The inner product is an important building block in many DSP applications such as multimedia, wireless, and communication systems. Due to this wide range of applications, the study of efficient implementations that meet different application requirements becomes an important research topic. In this dissertation, we study this topic by exploring the bit-level design space of the inner product, covering both programmable and non-programmable operands. For the non-programmable inner product, we explore its design space by exploiting the constancy and the numerical properties of the fixed operands, so that the resulting multiplication is a hardwired one with common-subexpression sharing. We thus propose a new distributed arithmetic (DA) technique that expands the fixed input to the bit level, so that we can take advantage of shared partial sums of products and sparse nonzero bits in the fixed input to reduce the number of computations. The proposed DA has been applied to a 2-D IDCT chip design, a processor core design, and FPGA implementations. The processor core design, which can be used in digital still cameras and real-time H.263 encoding, pushes the sharing properties of the proposed DA to the extreme: only one word-level adder and shifter. Furthermore, it may be combined with the fast direct 2-D DCT algorithm to reduce the computation cycles. The resulting architecture is simple, regular, and easily scalable to higher-throughput applications. For FPGA implementations, due to their bit-level grain size, a design well suited to the proposed DA can offer savings in excess of two-thirds of the hardware cost compared with a design using conventional DA. Besides architecture optimization with common-subexpression sharing, we also consider algorithm reformulation, which formulates the transform equations into cyclic convolution form to enable better common-subexpression sharing.
We have proposed two efficient DFT designs that also exploit the symmetry of the DFT coefficients to increase throughput. The prime-length DFT design saves 80% of the gate area with twice the throughput for length N=61. The power-of-two-length DFT design achieves competitive area-time complexity compared with previous designs. For portable applications, we also consider low-power filter realization by using differential coefficients and inputs instead of the original values, so that fewer bits are required, thereby reducing the size of the arithmetic units and the power dissipation. We present an improved algorithm to effectively generate differential coefficients, so that differential-coefficient methods can be applied to the full bandwidth range of filters instead of only the narrow-band filters of previous approaches. Simulations with fixed-coefficient filters indicate reductions in transition activity ranging from 1% to 53% over the full range of filter bandwidths. The reduction in area can be up to 50% due to the reduced coefficient precision. The resulting design is superior to previous approaches in applicability, power consumption, and area. For programmable filters, we present a digit-serial architecture that uses the DA form at the algorithm level for accumulation-free operations, and a (p, q) compressor instead of Booth encoding for high-speed operation. The resulting design saves up to 17% of the hardware cost compared with the previous approach.
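The core shift-accumulate mechanism of distributed arithmetic described in this abstract can be illustrated with a short software model. This is only a generic sketch (the dissertation's contribution is precisely to shrink and share the table below via common subexpressions and sparse nonzero bits); `da_inner_product` and its parameters are hypothetical names.

```python
def da_inner_product(xs, coeffs, bits=8):
    """Distributed-arithmetic inner product of unsigned `bits`-wide inputs
    with fixed coefficients: precompute every partial sum of coefficients,
    then process the inputs one bit-plane at a time with shift-accumulate."""
    n = len(coeffs)
    # ROM contents: table[m] = sum of the coefficients selected by bitmask m
    table = [sum(c for i, c in enumerate(coeffs) if (m >> i) & 1)
             for m in range(1 << n)]
    acc = 0
    for j in range(bits):
        mask = 0
        for i, x in enumerate(xs):
            mask |= ((x >> j) & 1) << i   # gather bit j of every input
        acc += table[mask] << j           # one table lookup + shift per bit-plane
    return acc
```

The table grows as 2^n words for n inputs, which is exactly the memory cost that DA-based designs attack.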
APA, Harvard, Vancouver, ISO, and other styles
21

LIN, JIE-HE, and 林金禾. "A bit level systolic array for median filters." Thesis, 1989. http://ndltd.ncl.edu.tw/handle/71909654458445663453.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Liao, Yu-Li, and 廖芸立. "TMN8 Macroblock Level Quantizer Adjustment under Low Bit Rate." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/19647036560494476878.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Communication Engineering
90
The efficiency of bit allocation becomes increasingly significant in low-bit-rate video. In this work, we focus mainly on how to reduce header bit redundancy and allot bits to macroblocks that have rich luminance and chrominance information. This is achieved by adjusting the quantization parameter QP for every macroblock. The variation of QP is highly related to the parameters K and C. We find that the QP estimated from weighted-averaged K and C results in larger variation and more empty macroblocks. After studying the rate control method of TMN8, we propose four estimation methods based on simple linear regression (SLR) schemes to adjust K and C. The results show that SLR I and SLR III can achieve less QP variation and fewer empty macroblocks.
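The simple-linear-regression building block behind the proposed estimators can be sketched as below. This is the generic least-squares fit of an affine rate model with hypothetical names; the thesis's SLR variants differ in which macroblock samples are used and how they are weighted.

```python
def fit_rate_model(xs, rs):
    """Least-squares fit of r ≈ K*x + C from observed pairs of a
    complexity measure x and spent bits r for past macroblocks."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_r = sum(rs) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxr = sum((x - mean_x) * (r - mean_r) for x, r in zip(xs, rs))
    k = sxr / sxx                 # slope estimate (model parameter K)
    c = mean_r - k * mean_x       # intercept estimate (model parameter C)
    return k, c
```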
APA, Harvard, Vancouver, ISO, and other styles
23

HUANG, WEN-QIANG, and 黃文強. "Design of bit-level inner-product array and its applications." Thesis, 1989. http://ndltd.ncl.edu.tw/handle/21962736863841216678.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Chen, Lian-Ching, and 陳蓮清. "Reinforcement Learning for HEVC/H.265 Frame-level Bit Allocation." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/p75xk6.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Multimedia Engineering
107
Frame-level bit allocation is crucial to video rate control. The problem is often cast as minimizing the distortions of a group of video frames subject to a rate constraint. When these video frames are related through inter-frame prediction, the bit allocation for different frames exhibits dependency. To address such dependency, this thesis introduces reinforcement learning. We first consider frame-level texture complexity and bit balance as a state signal, define the bit allocation for each frame as an action, and compute the negative frame-level distortion as an immediate reward signal. We then train a neural network to be our agent, which observes the state to allocate bits to each frame in order to maximize the cumulative reward. Compared to the rate control scheme in x265-2.7, our method has smaller bit rate fluctuations. However, the coding performance of our model still has room for improvement.
APA, Harvard, Vancouver, ISO, and other styles
25

Halbuto��ullar��, Alper. "Fast bit-level, word-level and parallel arithmetic in finite fields for elliptic curve cryptosystems." Thesis, 1998. http://hdl.handle.net/1957/33546.

Full text
Abstract:
Computer and network security has recently become a popular subject due to the explosive growth of the Internet and the migration of commerce practices to the electronic medium. Thus the authenticity and privacy of the information transmitted and the data stored on networked computers is of utmost importance. The deployment of network security procedures requires the implementation of cryptographic functions. More specifically, these include encryption, decryption, authentication, digital signature algorithms and message-digest functions. Performance has always been the most critical characteristic of a cryptographic function, which determines its effectiveness. In this thesis, we concentrate on developing high-speed algorithms and architectures for number theoretic cryptosystems. Our work is mainly focused on implementing elliptic curve cryptosystems efficiently, which requires space- and time-efficient implementations of arithmetic operations over finite fields. We introduce new methods for arithmetic operations over finite fields. Methodologies such as precomputation, residue number system representation, and parallel computation are adopted to obtain efficient algorithms that are applicable on a variety of cryptographic systems and subsystems. Since arithmetic operations in finite fields also have applications in coding theory and computer algebra, the methods proposed in this thesis are applicable to these applications as well.
Graduation date: 1999
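The bit-level field arithmetic that such implementations optimize can be illustrated in software. The sketch below is a plain shift-and-add multiplier in GF(2^m) with polynomial reduction, not one of the thesis's optimized algorithms; the field GF(2^8) with polynomial 0x11B in the test is just a familiar example, not necessarily a field used in the thesis.

```python
def gf2m_mul(a, b, mod_poly, m):
    """Multiply a and b in GF(2^m): carry-free shift-and-add, then
    reduce the degree-(2m-2) product by the irreducible mod_poly."""
    r = 0
    for i in range(m):
        if (b >> i) & 1:
            r ^= a << i                    # XOR is addition in GF(2)
    for i in range(2 * m - 2, m - 1, -1):  # reduce high bits, top down
        if (r >> i) & 1:
            r ^= mod_poly << (i - m)
    return r
```

The bit-serial structure of the outer loop is what makes such multipliers attractive for compact hardware; word-level and parallel variants unroll it.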
APA, Harvard, Vancouver, ISO, and other styles
26

Yeh, Tien-Yu, and 葉天煜. "Research of Reducing Bit Error Rate by Gray Level Weighting Method." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/82043063228214877496.

Full text
Abstract:
Master's thesis
National Central University
Institute of Optical Sciences
97
The purpose of this study is to reduce the error bits produced in a holographic data storage system, using Reed-Solomon (RS) error-correction codes to handle random noise effectively. After applying the gray-level weighting method, the RS code can correct all error bits of the reduced coding pages and obtain a decoded image without error bits. The gray-level weighting method effectively decreases error bits when the optical system has defocus aberration or more serious lens aberration, showing that it can lower the optical requirements of holographic data storage. We use the Gaussian cumulative probability to obtain the mean value and standard deviation describing the Gaussian distribution curve matched to the gray-level distribution of the actual experiment, and calculate the theoretical bit error rate of the reduced image. Using a Himax LCoS panel and a common lens, the optimal theoretical bit error rate is 4.19×10^-10.
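The threshold model implied by the abstract, with '0' and '1' gray levels each following a Gaussian distribution, leads to a closed-form bit error rate. The sketch below is an assumed formulation with hypothetical names, not the thesis's exact notation.

```python
import math

def ber_two_gaussians(mu0, sigma0, mu1, sigma1, threshold):
    """BER for equiprobable '0'/'1' pixels whose detected gray levels are
    Gaussian, decided against a fixed threshold (mu0 < threshold < mu1)."""
    # P(level > threshold | bit 0): a dark pixel read as bright
    p01 = 0.5 * math.erfc((threshold - mu0) / (sigma0 * math.sqrt(2.0)))
    # P(level < threshold | bit 1): a bright pixel read as dark
    p10 = 0.5 * math.erfc((mu1 - threshold) / (sigma1 * math.sqrt(2.0)))
    return 0.5 * (p01 + p10)
```

With a large separation between the two fitted means relative to their spreads, this model yields BERs on the order of the 10^-10 figure quoted above.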
APA, Harvard, Vancouver, ISO, and other styles
27

Lee, Hwan-Rei, and 李桓瑞. "Efficient VLSI Architectures of Bit-Level FIR Filter Modules and Filter Banks." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/84974788889981615421.

Full text
Abstract:
Doctoral dissertation
National Chiao Tung University
Department of Electronics Engineering
85
FIR filters are important building blocks in many DSP applications. They are widely applied at various sample rates, such as audio, video, or image signal processing and signal equalization. Due to this wide range of applications, the study of efficient implementations that fit different application requirements becomes an interesting topic. In this dissertation, we study this topic by exploring the design spaces of FIR filter modules (single filters) and filter banks (multiple filters). For a single filter module, we consider bit-level reformulations and their architectural implementation issues. Two architectures are proposed: one for programmable FIR filters and the other for fixed-coefficient ones. The programmable architecture is derived from the Modified Booth encoding of the input signals. It performs filter operations digit-serially and is flexibly pipelinable without incurring extra latency. Comparison results show that the resulting architecture takes only half the cost, without sacrificing performance, compared with an architecture proposed in the literature. The fixed-coefficient architecture is memory-based. Fixed-coefficient properties allow us to pre-calculate some required operations in memory to reduce the required numerical operations. Efficient schemes to store, partition, and arrange the pre-calculated results are studied. The resulting architecture is parameterized, so that it can be tuned to fit different application specifications by adjusting the parameter values. For multiple filters, we propose hardware-sharing methods that allow DSP systems with multiple filters to share one set of hardware for the required computations. The hardware-sharing method modifies a well-designed architecture to make it ready for performing multiple filterings.
The method preserves the topology of the original architecture. Hence, if the original architecture is regular or systolic, the shared architecture will also be regular or systolic. The hardware-sharing method is applied to a typical multirate system: tree-structured QMF banks. The resulting architecture is highly shared. Two silicon implementations are also presented in this dissertation. The first is an IIR filter bank that performs feature extraction for a low-cost speech recognition system. In this implementation, we applied the proposed hardware-sharing method to a DA-based IIR filter architecture, making it capable of performing multiple levels of multirate filter-bank operations. The other is a pair of matched filters for an FSK decoder. We applied the architectural implementation techniques discussed in this dissertation and reduced the shared hardware to only two six-bit counters. Both systems have been proven functional in silicon by UMC Tech. Co., Ltd. and Sunplus Tech. Co., Ltd., respectively.
APA, Harvard, Vancouver, ISO, and other styles
28

Wentzlaff, David, and Anant Agarwal. "A Quantitative Comparison of Reconfigurable, Tiled, and Conventional Architectures on Bit-level Computation." 2004. http://hdl.handle.net/1721.1/30459.

Full text
Abstract:
General-purpose computing architectures are being called on to work on a more diverse application mix every day. This has been fueled by the need for reduced time to market and the economies of scale that are the hallmarks of software on general-purpose microprocessors. As this application mix expands, application domains such as bit-level computation, which has primarily been the domain of ASICs and FPGAs, will need to be effectively handled by general-purpose hardware. Examples of bit-level applications include Ethernet framing, forward error correction encoding/decoding, and efficient state machine implementation. In this paper we compare how differing computational structures such as ASICs, FPGAs, tiled architectures, and superscalar microprocessors compete on bit-level communication applications. A quantitative comparison in terms of absolute performance and performance per area is presented. These results show that although modest gains (2-3x) in absolute performance can be achieved when using FPGAs versus tuned microprocessor implementations, it is the significantly larger gains (2-3 orders of magnitude) achievable in performance per area that will motivate work on supporting bit-level computation in a general-purpose fashion in the future.
APA, Harvard, Vancouver, ISO, and other styles
29

Stierstorfer, Clemens [Verfasser]. "A bit level based approach to coded multicarrier transmission / vorgelegt von Clemens Stierstorfer." 2009. http://d-nb.info/995836272/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Wong, Wei-Zhe, and 翁偉哲. "New Bit-Line-Controlled Multi-Level Programming Schemes for Split-Gate Flash Memory." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/35894041424027775688.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Electrical Engineering
87
Flash memories belong to the family of non-volatile memories; the first conceptual MIMIS device structure was proposed by D. Kahng and S. M. Sze in 1967. Thanks to advances in semiconductor manufacturing technology, cost reduction has made Flash memories a superstar in the coming era. The keys to competing with other memories are high density, low power consumption, low cost, and high efficiency. The first structure that comes to mind when talking about Flash memories is the well-known stacked-gate device; due to its small size and simple process, it has become popular in this field. However, there is no denying that the split-gate structure is also influential. This thesis is dedicated to the study of split-gate Flash memories utilizing source-side injection operation. Compared with stacked-gate Flash memories, they have the advantages of high injection efficiency, radiation hardness, and over-erase immunity, though at the price of larger device area and a more complex manufacturing process. This thesis discusses multi-level programming methods to overcome the main disadvantage of split-gate Flash memory: low density. After evaluating the device characteristics and surveying the published literature, the author starts from the ramp-up voltage operation, which has attracted many researchers' attention. Based on an understanding of the operating mechanisms, possible multi-level programming schemes are proposed. Considering the difficulties encountered by each method, the author finally grades these schemes for adaptability.
APA, Harvard, Vancouver, ISO, and other styles
31

Tan, Chee Seng. "Performance of direct bit detection receivers for multiple level phase-shift-keyed modulation." Thesis, 1986. http://hdl.handle.net/10945/22029.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

陳漢臣. "A Novel Bit-level Design Approach and its Application to Discrete Sinusoidal Transforms." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/96054195254972160120.

Full text
Abstract:
Doctoral dissertation
National Chiao Tung University
Department of Electronics Engineering
94
The discrete sinusoidal transforms (DSSTs) have been widely used in many digital signal processing applications such as image processing, digital filtering, and digital communication. Although many designs of the DSSTs have been proposed in the literature, they are still not efficient enough, since they exploit only the constant property of the transform coefficients without considering the numerical property of these coefficients in the reformulated algorithms to further optimize the hardware cost. This dissertation proposes a novel bit-level hardware-efficient group distributed arithmetic (GDA) design and its applications to DSST designs. In the proposed GDA design approach, we first formulate the DSST algorithm into cyclic convolution form at the algorithm level. We then use distributed arithmetic to decompose the input data to the bit level at the architecture level. Thus, the data redundancy due to the cyclic convolution can be efficiently removed within the bit-level input context to facilitate a hardware-efficient DA realization. The proposed GDA approach rearranges the contents of the DA memory according to its cyclic property, so that redundant contents can be eliminated and only a few groups of data are needed. Compared with the conventional DA design, the memory cost of the proposed GDA design is thus substantially reduced and, accounting for the necessary overhead, the overall complexity is improved. To further extend its application to long-length designs, we combine the Agarwal-Cooley algorithm and the pseudocirculant matrix factorization algorithm. These partition a long cyclic convolution into short ones while maintaining the cyclic property, avoiding the non-cyclic problem of direct partitioning. The proposed GDA design can thus be efficiently applied to realize each of the shortened cyclic convolution blocks at low hardware cost.
The proposed GDA design approach has been applied successfully to DFT, DHT, and DCT designs. For the DFT design, we further combine the symmetry property of the DFT coefficients with the proposed GDA approach, so that only half the contents need to be stored, reducing the memory size by a further factor of two. For the DCT design, in addition to the symmetry property of the DCT coefficients, we reformulate the non-cyclic DCT kernel into two perfect cyclic forms, so that the DCT can be implemented by the GDA approach with less hardware: (N-1)/2 adders or subtractors, one much smaller memory module, a (N-1)/2-bit barrel shifter, and (N-1)/2+1 accumulators. Compared with existing systolic array designs and DA-based designs, the realizations of the 1-D DFT, DHT, and DCT with the proposed GDA approach reduce the delay-area product by more than 29% according to a 0.35 um CMOS cell library. In addition to the prime-length design, we also apply the GDA approach to the long-length power-of-two DFT design commonly used in communication systems. We combine the proposed hardware-efficient GDA approach with the Cooley-Tukey algorithm for DFT decomposition and the pseudocirculant matrix factorization algorithm for cyclic convolution partitioning to facilitate long- and variable-length DFT designs with low hardware cost. The proposed design can flexibly compute the 1-D 64/128/256/512/1024/2048/4096-point DFT by cascading two short-length 1-D DFTs and summing up the partitioned short-length cyclic convolutions for each stage of the cascaded DFT. Moreover, the proposed design approach can also be adopted for lengths beyond powers of two.
Compared with existing long-length and variable-length FFT designs, in addition to the advantages of short latency and high hardware utilization efficiency, under the same throughput rate the proposed variable-length DFT is a competitive design, saving more than 9.6% of the hardware cost when the transform length is smaller than 256. In summary, the presented GDA-based design approach provides a solution to efficiently implement not only the DSSTs but also DSP applications involving convolution and correlation.
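The cyclic-convolution form at the heart of the GDA approach can be illustrated directly (a generic software model, not the hardware design): every output uses the same coefficient vector, only rotated, and this rotational redundancy is what lets the DA memory contents be grouped and shared.

```python
def cyclic_convolution(x, h):
    """N-point cyclic convolution: y[i] = sum_k x[k] * h[(i - k) mod N].
    Each output row reuses the same coefficients h in rotated order."""
    n = len(x)
    return [sum(x[k] * h[(i - k) % n] for k in range(n)) for i in range(n)]
```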
APA, Harvard, Vancouver, ISO, and other styles
33

Lin, Shin-hung, and 林信宏. "Improved Bit-Level Truncation with Joint Error Analysis for Table-Based Function Evaluation." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/14993402705353780743.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Computer Science and Engineering
100
Function evaluation is often used in many science and engineering applications. In order to reduce computation time, different hardware implementations have been proposed to accelerate function evaluation. Table-based piecewise polynomial approximation is one of the major methods used in hardware function evaluation designs that require simple hardware components to achieve a desired precision. The piecewise polynomial method approximates the original function values in each partitioned subinterval using low-degree polynomials with coefficients stored in look-up tables. Errors are introduced in the hardware implementation. Conventional error analysis in piecewise polynomial methods includes four types of error sources: polynomial approximation error, coefficient quantization error, arithmetic truncation error, and final rounding error. The typical design approach is to pre-allocate a maximum allowable error budget to each individual hardware component so that the total error induced by these individual errors satisfies the bit accuracy. In this thesis, we present a new design approach that jointly considers the error sources when designing all the hardware components, including look-up tables and arithmetic units, so that the total area cost is reduced compared to previously published designs.
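A minimal software model of the table-based method described above: degree-1 polynomials on uniform segments of [0, 1), with coefficients quantized to a fixed-point format. All names and parameter choices are illustrative assumptions; the thesis's joint truncation of the multiplier and adder datapaths is not modeled here.

```python
import math

def make_piecewise_eval(f, df, n_segments, frac_bits):
    """Build a degree-1 table-based evaluator for f on [0, 1): each ROM
    entry holds the quantized value and slope at the segment midpoint."""
    step = 1.0 / n_segments
    scale = 1 << frac_bits
    table = []
    for s in range(n_segments):
        mid = (s + 0.5) * step
        table.append((round(f(mid) * scale),    # quantized offset c0
                      round(df(mid) * scale)))  # quantized slope  c1
    def evaluate(x):
        s = min(int(x * n_segments), n_segments - 1)
        dx = x - (s + 0.5) * step               # offset from segment midpoint
        c0, c1 = table[s]
        return (c0 + c1 * dx) / scale
    return evaluate

# Example instance: sin(x) on [0, 1) with 64 segments and 16 fractional bits
evaluate_sin = make_piecewise_eval(math.sin, math.cos, 64, 16)
```

Tightening or loosening `n_segments` and `frac_bits` trades table area against the very approximation and quantization errors the thesis budgets jointly.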
APA, Harvard, Vancouver, ISO, and other styles
34

Tseng, Yu-ling, and 曾于玲. "Design of a Table-Driven Function Evaluation Generator Using Bit-Level Truncation Methods." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/36707251217938740233.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Computer Science and Engineering
99
Function evaluation is one of the key arithmetic operations in many applications, including 3D graphics and stereo vision. Among various designs of hardware-based function evaluators, piecewise polynomial approximation methods are the most popular; they interpolate the function curve in each sub-interval using polynomials, with the polynomial coefficients of each sub-interval stored in an entry of a ROM. Conventional piecewise methods usually determine the bit-widths of each ROM entry and of the multipliers and adders by analyzing the various error sources individually, including polynomial approximation errors, coefficient quantization errors, truncation errors of arithmetic operations, and the final rounding error. In this thesis, we present a new piecewise function evaluation design that considers all the error sources together. By combining all the error sources during approximation, quantization, truncation and rounding, we can efficiently reduce the area cost of the ROM and the corresponding arithmetic units. The proposed method is applied to piecewise function evaluators with both uniform and non-uniform segmentation.
APA, Harvard, Vancouver, ISO, and other styles
35

Ma, Yu-Ping, and 馬妤萍. "Half-level cells: enhancing flash read disturb management with adaptive cell-bit density." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/gcjyxw.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Computer Science and Engineering
107
Read disturbance is a kind of noise generated by hardware characteristics while executing read operations. Accumulating a large amount of read disturbance may destroy existing data. Read refreshing uses data migration to avoid data corruption; however, it also induces a large erase count under read-intensive workloads. In-place reprogramming reduces cell-bit density to enhance read-disturbance tolerance, but it can only handle a small amount of the read-hot data that causes read refreshing when stored in high-density cells. We further propose to increase block capacity by adjusting cell-bit density, to reduce the erase count induced by read refreshing.
APA, Harvard, Vancouver, ISO, and other styles
36

Wang, Qi. "Wideband multi-level QAM MODEM with programmable bit rate and programmable intermediate carrier frequency." Thesis, 2003. http://spectrum.library.concordia.ca/2408/1/MQ91134.pdf.

Full text
Abstract:
Quadrature Amplitude Modulation (QAM) is predominant in current wireless digital communications, such as satellite and microwave radio communications. Due to diverse wireless communication applications, a digital QAM modem with diverse bit rates (R) and intermediate carrier frequencies (IF) is of great interest. The main objective of this thesis is to design, analyze and implement a multi-level QAM modem with programmable bit rates and programmable IF, thus not only meeting different requirements, but also dramatically reducing the complexity of the following analog Radio Frequency (RF) block. In the thesis, the digital QAM modem is built on the fundamentals of Digital Communication Systems (DCS) and Digital Signal Processing (DSP). The underlying relationship between signal resampling and its filter impulse response is studied and analyzed. A new signed-array structure for 2's-complement multiplication is introduced in order to improve the regularity and flexibility of the hardware design. The modem analysis and comparisons are presented in terms of performance and structure. Hardware design techniques are developed in depth. The modem model is coded in VHDL (VHSIC Hardware Description Language) and synthesized using Synopsys tools, and its performance is evaluated. The regular structure and portable VHDL coding of the modem can satisfy diverse applications implemented in different FPGAs or ASICs.
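One elementary stage of such a modem, the multi-level symbol mapper, can be sketched as follows. This is an illustrative Gray-coded 16-QAM mapping, not the thesis's VHDL design, and `qam16_map` is a hypothetical name.

```python
def qam16_map(nibble):
    """Map 4 bits to a Gray-coded 16-QAM constellation point so that
    adjacent symbols differ in exactly one bit per axis."""
    gray_to_level = {0b00: -3, 0b01: -1, 0b11: 1, 0b10: 3}
    i = gray_to_level[(nibble >> 2) & 0b11]   # two MSBs -> in-phase level
    q = gray_to_level[nibble & 0b11]          # two LSBs -> quadrature level
    return complex(i, q)
```

The average symbol energy of this constellation is 10, the usual normalization constant for 16-QAM.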
APA, Harvard, Vancouver, ISO, and other styles
37

Hsieh, Gui-Bin, and 謝貴彬. "A bit-level systolic array for matrix multiplication and its application to autoassociative memory." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/06369411727618208389.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Institute of Electrical Engineering
83
In this thesis, a bit-level systolic array with a two-level pipelining method is proposed to implement a fast matrix multiplication algorithm. After studying various current algorithms, in order to improve the efficiency and computation speed of every processor, we first utilize the torus array of the standard multiplication algorithm for word-level pipelining, and then apply bit-level pipelining inside each processor. Because the number of processors depends on the dimension of the matrix and the number of bits per entry, we design two kinds of architectures for different conditions to reduce the hardware complexity: one adapted to the "larger dimension" condition, the other adapted to the "larger number of bits" condition. Finally, the above methods are applied in the field of neural networks, and a bit-level systolic array for autoassociative memory is designed.
APA, Harvard, Vancouver, ISO, and other styles
38

JianDaLu and 角政達. "A NEW Fast Image Encryption Scheme Based on Bit-Level Permutation and Lookup Table." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/39177770375077704081.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Kao, Yu-Chi, and 高昱琦. "DESIGN AND IMPLEMENTATION OF HIGH-ORDER MULTI-BIT SIGMA-DELTA MODULATORBY DIGITAL LEVEL CONTROL." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/80145978006462682670.

Full text
Abstract:
Master's thesis
Tatung University
Institute of Communication Engineering
92
Because real-world signals are analog and the digital form of analog signals can be processed using flexible digital signal processing (DSP), analog-to-digital conversion is a key component in any electronic system. Although the sigma-delta modulator was first introduced in 1962, it did not gain importance until recent developments in digital VLSI technologies provided the practical means to implement the large digital signal processing circuitry. The increasing use of digital techniques in communication and audio applications has also contributed to the recent interest in cost-effective, high-precision A/D converters. Basically, the signal-to-noise ratio of a sigma-delta modulator may be enhanced in several ways: first, by raising the oversampling rate; second, by increasing the system order; third, by using a multi-bit quantizer. Although the most effective way is to increase the system order, the single-loop architecture may have stability problems when the system order is higher than two. In this thesis we propose a design method for a stable high-order, multi-bit, single-loop sigma-delta ADC which does not require a precision multi-bit DAC in the feedback loop. Within the proposed system, local digital level (DEL) control is employed to extend the integrator output dynamic range.
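The textbook first-order, single-bit loop that such designs build on can be modeled in a few lines. This is a behavioral sketch of the baseline only; the thesis's high-order, multi-bit modulator with digital level control is considerably more involved.

```python
def sigma_delta_1st_order(samples):
    """First-order single-bit sigma-delta modulator: integrate the error
    between the input and the fed-back 1-bit quantizer output (+1/-1)."""
    integrator = 0.0
    feedback = 0.0
    bits = []
    for x in samples:
        integrator += x - feedback           # accumulate quantization error
        feedback = 1.0 if integrator >= 0 else -1.0
        bits.append(feedback)
    return bits
```

For a DC input, the running mean of the output bits converges to the input value, which is the property the downstream decimation filter exploits.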
APA, Harvard, Vancouver, ISO, and other styles
40

YE, BAO-FU, and 葉寶富. "A high-speed neural analog circuit for computing the bit-level transform image coding." Thesis, 1991. http://ndltd.ncl.edu.tw/handle/73547221210772784761.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Cheng-Ta Huang and 黃政達. "An Improved Image Encryption Method Based on Bit-level Permutation and Two-way diffusion." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/65261919309139567675.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Institute of Computer and Communication Engineering
100
A chaotic system is characterized by ergodicity, pseudo-randomness, and sensitivity to initial conditions and control parameters. These properties have a close relationship to cryptography, and as a result, secure communications and information encryption based on chaos have been widely studied in recent years. In this thesis, an improved chaos-based image encryption scheme of the Fridrich type is proposed. The proposed scheme is inspired by Zhu's and Ye's cryptosystems, with some novelties added to enhance security and efficiency: the merits of the two approaches are kept while their flaws are remedied. Firstly, a skew tent map is employed to generate chaotic orbits that yield two index order sequences for the row-by-row and column-by-column bit-level permutation. Then two pseudo-random sequences are produced by a generalized Arnold map and a generalized Bernoulli shift map for the two-way diffusion process. Simulations of the proposed scheme have been carried out with detailed analysis. For the encrypted image, the histogram distribution is flat, the entropy is very close to the theoretical value of 8, and the correlation coefficient is close to zero; that is, the encrypted image is statistically similar to a random image. For differential analysis, the average values of NPCR and UACI reach more than 99.6% and 33.4% in only one round of encryption. For the key sensitivity test, the change between two cipher-images produced with two slightly different keys is over 99.6%. The key space is large enough to resist all kinds of brute-force attacks. The average encryption time is 0.021 seconds for a 512×512 256-level grayscale image, faster than Zhu's cryptosystem at 0.061 seconds and Ye's at 0.177 seconds. The high performance in security, speed, and sensitivity demonstrates that the proposed scheme is a good candidate for practical applications.
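A 1-D toy version of the described pipeline, assuming a skew tent map for the permutation index order and a simple XOR chain for the forward diffusion (the thesis's generalized Arnold/Bernoulli maps and 2-D row/column bit-level permutation are not reproduced; all names here are illustrative):

```python
def skew_tent(x, p):
    # Skew tent map on (0, 1) with control parameter p in (0, 1).
    return x / p if x < p else (1.0 - x) / (1.0 - p)

def toy_encrypt(bits, x0=0.3, p=0.6):
    """Toy chaos-based encryption: a chaotic orbit yields an index order
    sequence for bit permutation, then drives an XOR diffusion chain."""
    n = len(bits)
    orbit, x = [], x0
    for _ in range(n):
        x = skew_tent(x, p)
        orbit.append(x)
    order = sorted(range(n), key=lambda i: orbit[i])  # index order sequence
    permuted = [bits[i] for i in order]               # bit-level permutation
    key = [1 if v >= 0.5 else 0 for v in orbit]       # keystream bits
    out, prev = [], 0
    for b, k in zip(permuted, key):
        prev = b ^ k ^ prev   # each cipher bit depends on all earlier ones
        out.append(prev)
    return out
```

Because each cipher bit is chained to its predecessor, flipping a single plaintext bit changes every cipher bit from that position onward, which is the diffusion property NPCR/UACI measure.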
APA, Harvard, Vancouver, ISO, and other styles
42

Sun, Sisi. "The China-OHADA BIT, a step in the right direction: a new model of China-Africa BIT at a regional or sub-regional level." Thèse, 2018. http://hdl.handle.net/1866/22567.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Hsu, Wei-Cheng, and 徐維誠. "Design of a CORDIC Function Generator Using Table-Driven Function Evaluation with Bit-Level Truncation." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/58221352228093812272.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Computer Science and Engineering
100
Function evaluation is one of the key arithmetic operations in many applications, including 3D graphics and stereo vision. Among the various hardware-based function evaluation methods, piecewise polynomial approximation is the most popular approach: it interpolates the function curve over each sub-interval using a polynomial, with the polynomial coefficients of each sub-interval stored in an entry of a lookup-table ROM. Conventional piecewise methods usually determine the bit-widths of each ROM entry, the multipliers, and the adders by analyzing the various error sources separately, including polynomial approximation errors, coefficient quantization errors, truncation errors of arithmetic operations, and the final rounding error. In this thesis, we present a new piecewise function evaluation design that considers all the error sources together. By combining all the error sources during approximation, quantization, truncation, and rounding, we can efficiently reduce the area cost of the ROM and the corresponding arithmetic units in the design of CORDIC processors.
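The table-driven scheme described here can be sketched as a piecewise-linear approximation of sin(x): the leading bits of the input select a coefficient ROM entry, and the residual feeds the polynomial arithmetic. Bit-widths and truncation, the actual focus of the thesis, are not modeled in this floating-point sketch:

```python
import math

SEGMENTS = 64                       # number of sub-intervals (ROM entries)
STEP = (math.pi / 2) / SEGMENTS     # width of each sub-interval

# Each ROM entry holds (c0, c1) for p(d) = c0 + c1*d on its sub-interval.
ROM = []
for s in range(SEGMENTS):
    x0, x1 = s * STEP, (s + 1) * STEP
    c1 = (math.sin(x1) - math.sin(x0)) / STEP   # interpolating slope
    ROM.append((math.sin(x0), c1))

def sin_approx(x):
    """Evaluate sin(x) for x in [0, pi/2) by table lookup + one multiply-add."""
    s = min(int(x / STEP), SEGMENTS - 1)  # table index from the leading bits
    d = x - s * STEP                      # residual within the sub-interval
    c0, c1 = ROM[s]
    return c0 + c1 * d
```

With 64 segments the interpolation error is bounded by roughly STEP²/8 ≈ 7.5e-5; a hardware design would additionally budget for coefficient quantization, operand truncation, and final rounding, which is the joint analysis the thesis performs.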
APA, Harvard, Vancouver, ISO, and other styles
44

Keng-HongChu and 朱耿宏. "A 10-bit Segmented Digital-to-Time Converter with 10ps-level Resolution and Automatic Offset Calibration." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/ythaq7.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering
102
A 10-bit segmented digital-to-time converter (DTC) with 10-ps-level resolution and automatic offset calibration is proposed. The segmented architecture (2-bit binary code + 8-bit thermometer code) used in digital-to-analog converters (DACs) is adopted in the proposed DTC to reduce the impact of process variation on linearity. This approach differs from conventional DTCs based on delay-locked loops (DLLs) or phase-locked loops (PLLs), both of which consume more power. The proposed DTC uses relative time generation to obtain high resolution, and an offset calibration circuit is implemented to calibrate the offset error inherent in relative time generation. The proposed DTC was fabricated in the TSMC 0.18 μm 1P6M mixed-signal process. The resolution of the proposed DTC is at the 10-ps level, and the operational range is at the 10-ns level. The total power consumption is 19 mW, and the core area is 0.7 mm2.
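The segmentation itself (2 binary-coded LSBs plus a thermometer-decoded 8-bit MSB part) can be sketched as a simple decode; the function name and the reconstruction check are illustrative:

```python
def segment_10bit(code):
    """Split a 10-bit code into a 2-bit binary LSB part and a
    thermometer-decoded MSB part, as in a segmented DTC/DAC. The
    thermometer word has `msb` ones: one per enabled unit cell."""
    assert 0 <= code < 1024
    msb, lsb = code >> 2, code & 0b11  # coarse / fine split
    thermometer = (1 << msb) - 1       # msb consecutive ones
    return thermometer, lsb
```

The standard rationale for segmentation is that each coarse step turns on one more identical unit cell (worth 4 LSBs here), so a mismatched cell causes only a local error rather than the large mid-code jump a pure binary array can exhibit, improving linearity under process variation.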
APA, Harvard, Vancouver, ISO, and other styles
45

TszKwongLeung and 梁子江. "Integrated Method of Matrix Mapping and Multi-level Bit Vector Minimization for Conflict Detection in Distributed Software-Defined Network." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/bcsnhz.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Wu, Ching-Feng, and 吳敬丰. "Implementation of a 10-bit SAR ADC With Merge, Split, and Tri-Level Switching using Mortise-Tenon Structure Capacitor." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/95667p.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Computer Science and Engineering
106
This thesis presents a 10-bit 500 KS/s successive approximation register analog-to-digital converter (SAR ADC) for biomedical applications with a 0.5 V supply voltage, implemented in the TSMC 90 nm process technology. The circuit can be applied to sensors or monitoring devices related to the Internet of Things (IoT), achieving low area, low power, and low cost. To provide better performance and reduce the DAC switching energy, the merge-and-split (MS) switching architecture is implemented, and dynamic latching comparators are used to eliminate static power consumption. A MOM capacitor with a mortise-tenon structure is used to reduce the parasitic capacitance in each single capacitor and in the distributed connected unit capacitors; this reduces the switching energy consumption in the capacitor array and mitigates the problem of its overly large area. Simulation results show that at a sampling rate of 500 KS/s, the SNDR is 56.11 dB and the ENOB is 9.028 bits. The DNL and INL are 0.412/-0.435 LSB and 0.590/-0.767 LSB, respectively. The power consumption is 1.5003 μW.
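The successive-approximation principle behind a SAR ADC reduces to a binary search of the internal DAC level against the sampled input. A minimal idealized sketch (the thesis's merge-and-split switching, tri-level scheme, and capacitor structure are not modeled):

```python
def sar_adc(vin, vref=0.5, bits=10):
    """Idealized SAR ADC: test bits from MSB to LSB, keeping each bit
    if the resulting DAC level does not exceed the sampled input."""
    code = 0
    for i in reversed(range(bits)):
        trial = code | (1 << i)                  # tentatively set bit i
        if vin >= trial * vref / (1 << bits):    # comparator decision
            code = trial                         # keep the bit
    return code
```

Each of the 10 comparisons halves the remaining search interval, so conversion takes exactly `bits` comparator decisions; the switching schemes the thesis studies differ only in how cheaply the capacitive DAC realizes each trial level.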
APA, Harvard, Vancouver, ISO, and other styles
47

Peng, Wu-Chin, and 彭武欽. "Multi Level and 2-Bit/Cell Operation for SONOS Memory with Wrapped-Select-Gate Structure Using Source-Side Injection Programming." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/41329043321419468944.

Full text
Abstract:
Master's thesis
Feng Chia University
Graduate Institute of Electronic Engineering
94
We have proposed a novel 2-bit/cell Semiconductor-Oxide-Nitride-Oxide-Semiconductor (SONOS) memory device with a wrapped-select-gate structure. Its simple structure, featuring a wrapped select gate, provides impressive qualities, including high-speed programming, a larger Vth window after P/E cycling, excellent gate disturbance immunity, and good data retention. In addition, this memory device does not suffer from the 2-bit interaction issue or lateral charge migration, owing to its special structure. Programming and erasing are performed by source-side injection (SSI) and band-to-band hot hole (BTB HH) injection, respectively. Programming times of less than 10 ms and programming currents of about 50 nA were achieved with a select-gate bias of 0.5 V and a word-line bias of 11 V, along with erasing times of 5 ms. Because of the device's high performance, such as high-speed programming, a larger Vth window, and excellent reliability, it is suitable for multi-level operation. Therefore, the density of our memory can be doubled by utilizing multi-level operation: the 2-bit/cell SONOS memory becomes a 4-bit/cell SONOS memory with a wrapped-select-gate structure. We have identified the mechanism of the program operation in this device and shown the relationship between program efficiency and the ONO thickness of the sidewall through simulation results. This SONOS memory process is simple in comparison with other SONOS-type NVM devices, and is especially attractive for embedded NVM applications in standard logic CMOS products. Thus, this SONOS memory is a candidate for future Flash memory technology.
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Kuan-Hsun, and 王冠勳. "Design and Implementation of Second-Order ΔΣ Modulator with 3-Bit, 37-Level Pre-Detective Dynamic Quantization for Audio Application." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/99694544988437826258.

Full text
Abstract:
Master's thesis
National Taiwan Normal University
Department of Applied Electronics Technology
101
In this thesis, a high-resolution delta-sigma modulator with a pre-detective dynamic quantizer is proposed. A 37-level quantization is achieved using only a 3-bit quantizer in the proposed dynamic quantizer. In the proposed structure, a signal detector is added at the input of the modulator to pre-detect the magnitude of the sampled input and switch the dynamic quantizer to the corresponding quantization range. With the proposed technique, the number of quantization levels can be greatly increased while the number of comparators needed for high-level quantization is substantially reduced. The resolution of the delta-sigma modulator can thus be significantly improved without consuming much power or area. The proposed delta-sigma modulator is implemented in a TSMC 0.18 μm 1P6M CMOS process. The signal-to-noise-plus-distortion ratio is 101.2 dB and the dynamic range is 102 dB in a signal band of 25 kHz. The power consumption is 1.68 mW at a 1.8 V supply voltage. A dynamic element matching technique with noise shaping is also proposed for the delta-sigma modulator: it shapes the feedback noise to high frequencies, just as the delta-sigma modulator shapes the quantization noise. This architecture substantially reduces the in-band noise and yields much better performance.
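The pre-detective idea, where a detector first selects the sub-range and a coarse quantizer then resolves within it, can be sketched as follows; the range boundaries and level counts here are illustrative only, not the thesis's 37-level design:

```python
def dynamic_quantize(x, ranges, levels=8):
    """Pre-detective dynamic quantization sketch: a detector picks the
    sub-range containing the sampled input, then a coarse quantizer
    (8 levels, i.e. 3 bits) resolves within that sub-range only."""
    for lo, hi in ranges:
        if lo <= x < hi:
            step = (hi - lo) / levels
            k = min(int((x - lo) / step), levels - 1)  # 3-bit code
            return lo + (k + 0.5) * step               # mid-step reconstruction
    raise ValueError("input outside all quantization ranges")
```

With four non-overlapping sub-ranges over [-1, 1), a 3-bit quantizer yields 32 effective levels while only 8 comparator thresholds are active at a time, which is the comparator saving the abstract describes.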
APA, Harvard, Vancouver, ISO, and other styles
49

Ameer, Abdul Kader Mohamed Basith Abdul. "Functional Verification of Arithmetic Circuits using Linear Algebra Methods." 2011. https://scholarworks.umass.edu/theses/657.

Full text
Abstract:
This thesis describes an efficient method for speeding up the functional verification of arithmetic circuits, namely linear networks such as Wallace trees and counters, using linear algebra techniques. The circuit is represented as a network of half adders, full adders, and inverters, and modeled as a system of linear equations. The proof of functional correctness of the design is obtained by computing its algebraic signature using a standard linear programming (LP) solver and comparing it with the reference signature provided by the designer. Initial experimental results and comparison with Satisfiability Modulo Theories (SMT) solvers show that the method is efficient, scalable, and applicable to complex arithmetic designs, including large multipliers. It is intended to provide a new front-end theory/engine to enhance SMT solvers.
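The cell-level linear model that makes this method possible can be shown on a toy ripple-carry adder: every full adder satisfies an exact linear relation over the integers, so a word-level algebraic signature holds for the whole network (the LP-based signature computation itself is not reproduced here; the names are illustrative):

```python
def full_adder(a, b, cin):
    """Boolean full adder; its behavior is exactly captured by the
    linear equation a + b + cin = s + 2*cout, the per-cell model used
    when writing the circuit as a system of linear equations."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    assert a + b + cin == s + 2 * cout  # the cell's linear model holds
    return s, cout

def ripple_adder(x, y, n=4):
    """n-bit ripple-carry adder built from full adders. Because every
    cell obeys its linear relation, the network inherits the word-level
    signature sum(2^i x_i) + sum(2^i y_i) = sum(2^i s_i) + 2^n * cout."""
    carry, bits = 0, []
    for i in range(n):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        bits.append(s)
    return sum(b << i for i, b in enumerate(bits)) + (carry << n)
```

Chaining the per-cell equations and eliminating internal signals yields the word-level identity x + y = result, which is the kind of algebraic signature the thesis computes with an LP solver instead of enumerating inputs.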
APA, Harvard, Vancouver, ISO, and other styles