Theses on the topic « Efficient implementation for HBVM »


Browse the top 50 theses for your research on the topic « Efficient implementation for HBVM ».


1

King, Myron Decker. « An efficient sequential BTRS implementation ». Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/46603.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Includes bibliographical references (leaves 73-74).
This thesis describes the implementation of BTRS, a language based on guarded atomic actions (GAA). The input language to the compiler which forms the basis of this work is a hierarchical tree of modules containing state, interface methods, and rules which fire atomically to cause state transitions. Since a schedule need not be specified, the program description is inherently nondeterministic, though the BTRS language does allow the programmer to remove nondeterminism by specifying varying degrees of scheduling constraints. The compiler outputs a (sequential) single-threaded C implementation of the input description, choosing a static schedule which adheres to the input constraints. The resulting work is intended to be used as the starting point for research into efficient software synthesis from guarded atomic actions, and ultimately a hardware inspired programming methodology for writing parallel software. This compiler is currently being used to generate software for a heterogeneous system in which the software and hardware components are both specified in BTRS.
by Myron Decker King.
S.M.
2

Patel, Nirav B. « Voronoi diagrams : robust and efficient implementation ». Diss., Online access via UMI, 2005.

Find full text
3

Stenman, Erik. « Efficient Implementation of Concurrent Programming Languages ». Doctoral thesis, Uppsala University, Department of Information Technology, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-2688.

Full text
Abstract:

Dissertation in Computer Science to be publicly examined in Häggsalen, Ångströmlaboratoriet, Uppsala University, on Friday, November 1, 2002 at 1:00 pm for the degree of doctor of philosophy. The examination will be conducted in English.

This thesis proposes and experimentally evaluates techniques for efficient implementation of languages designed for high availability concurrent systems. This experimental evaluation has been done while developing the High Performance Erlang (HiPE) system, a native code compiler for SPARC and x86. The two main goals of the HiPE system are to provide efficient execution of Erlang programs, and to provide a research vehicle for evaluating implementation techniques for concurrent functional programming languages.

The focus of the thesis is the evaluation of two techniques that enable inter-process optimization through dynamic compilation. The first technique is a fast register allocator called linear scan, and the second is a memory architecture where processes share memory.

The main contributions of the thesis are:

An evaluation of linear scan register allocation in a different language setting. In addition, the performance of linear scan on the register-poor x86 architecture is evaluated for the first time.

A description of three different heap architectures (private heaps, shared heap, and a hybrid of the two), with a systematic investigation of implementation aspects and an extensive discussion on the associated performance trade-offs of the heap architectures. The description is accompanied by an experimental evaluation of the private vs. the shared heap setting.

A novel approach to optimizing a concurrent program, by merging code from a sender with code from a receiver, is presented together with other methods for reducing the overhead of context switching.

A description of the implementation aspects of a complete and robust native code Erlang system, which makes it possible to test compiler optimizations on real world programs.
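The linear scan register allocator evaluated in this thesis is the classic Poletto–Sarkar algorithm. As a hedged illustration (a generic sketch under textbook assumptions, not HiPE's actual implementation; interval and register names are made up), the core idea can be written as:

```python
def linear_scan(intervals, num_regs):
    """Poletto-Sarkar linear scan sketch.

    intervals: list of (start, end, name) live intervals.
    Returns a dict mapping name -> register index, or 'spill'.
    """
    intervals = sorted(intervals, key=lambda iv: iv[0])  # by increasing start
    active = []                     # (end, name) of intervals holding a register
    free = list(range(num_regs))    # currently free registers
    alloc = {}
    for start, end, name in intervals:
        # Expire intervals that ended before this one starts, freeing registers
        for e, n in list(active):
            if e <= start:
                active.remove((e, n))
                free.append(alloc[n])
        if free:
            alloc[name] = free.pop()
            active.append((end, name))
            active.sort()
        else:
            # Spill heuristic: evict the active interval with the furthest end
            spill_end, spill_name = active[-1]
            if spill_end > end:
                alloc[name] = alloc[spill_name]
                alloc[spill_name] = 'spill'
                active[-1] = (end, name)
                active.sort()
            else:
                alloc[name] = 'spill'
    return alloc
```

The single sorted pass over intervals is what makes linear scan attractive for dynamic compilation, where allocation time matters as much as allocation quality.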

4

ALVES, ROGERIO GUEDES. « EFFICIENT MULTI-RATE SYSTEM IMPLEMENTATION FORMS ». PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1993. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8692@1.

Full text
Abstract:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
First, the structure of a multi-rate system is presented, a theoretical review is given, and its behavior is described. Afterwards, several implementation forms are presented: in the time domain, partly in the time domain and partly in the frequency domain, and entirely in the frequency domain. These implementations take several factors into account, such as the use of the overlap-save or overlap-add technique for the convolution performed by the system, and the use of IIR or FIR structures as reconstruction filters. For the systems implemented in these various forms, delay and complexity are analyzed and compared with the theoretical results. Some of these implementations exhibit interesting particularities, such as interpolation performed in the frequency domain and the use of an inverse FFT dedicated to interpolation and decimation. Finally, practical results of the implemented systems are presented, evaluated, compared with the theoretical ones, and discussed.
5

Mahdi, Abdul-Hussain Ebrahim. « Efficient generalized transform algorithms for digital implementation ». Thesis, Bangor University, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.277612.

Full text
6

Wojtczak, Dominik. « Recursive probabilistic models : efficient analysis and implementation ». Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/3217.

Full text
Abstract:
This thesis examines Recursive Markov Chains (RMCs), their natural extensions and connection to other models. RMCs can model in a natural way probabilistic procedural programs and other systems that involve recursion and probability. An RMC is a set of ordinary finite state Markov Chains that are allowed to call each other recursively and it describes a potentially infinite, but countable, state ordinary Markov Chain. RMCs generalize in a precise sense several well studied probabilistic models in other domains such as natural language processing (Stochastic Context-Free Grammars), population dynamics (Multi-Type Branching Processes) and in queueing theory (Quasi-Birth-Death processes (QBDs)). In addition, RMCs can be extended to a controlled version called Recursive Markov Decision Processes (RMDPs) and also a game version referred to as Recursive (Simple) Stochastic Games (RSSGs). For analyzing RMCs, RMDPs, RSSGs we devised highly optimized numerical algorithms and implemented them in a tool called PReMo (Probabilistic Recursive Models analyzer). PReMo allows computation of the termination probability and expected termination time of RMCs and QBDs, and a restricted subset of RMDPs and RSSGs. The input models are described by the user in specifically designed simple input languages. Furthermore, in order to analyze the worst and best expected running time of probabilistic recursive programs we study models of RMDPs and RSSGs with positive rewards assigned to each of their transitions and provide new complexity upper and lower bounds of their analysis. We also establish some new connections between our models and models studied in queueing theory. Specifically, we show that (discrete time) QBDs can be described as a special subclass of RMCs and Tree-like QBDs, which are a generalization of QBDs, are equivalent to RMCs in a precise sense. 
We also prove that for a given QBD we can compute (in the unit cost RAM model) an approximation of its termination probabilities within i bits of precision in time polynomial in the size of the QBD and linear in i. Specifically, we show that we can do this using a decomposed Newton’s method.
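The termination probabilities this thesis computes arise as least solutions of systems of monotone polynomial equations. As a toy one-variable illustration (using plain Kleene fixed-point iteration, not the decomposed Newton's method of the thesis), consider a branching process in which an individual dies with probability p and spawns two offspring with probability 1 − p; its extinction probability is the least solution of x = p + (1 − p)x²:

```python
def extinction_prob(p_die, iters=200):
    """Least fixed point of x = p_die + (1 - p_die) * x**2,
    approximated by Kleene iteration starting from 0."""
    x = 0.0
    for _ in range(iters):
        x = p_die + (1.0 - p_die) * x * x
    return x
```

For p_die = 1/4 the quadratic has roots 1/3 and 1, and iteration from 0 converges to the least root 1/3. Newton-based methods converge far faster on such systems, which is precisely the motivation for the thesis's decomposed Newton's method.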
7

Taylor, David Eirik. « Efficient Implementation of Cross-Correlation in Hardware ». Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2014. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-25839.

Full text
Abstract:
Low-area matched filter and correlator designs are explored in this thesis, for ADC resolutions of 1 and 2 bits. Correlators are used extensively in spread-spectrum communication technologies, where they serve as a means of detecting a known pseudo-random sequence (PN code). The correlator designs presented here are intended for direct-sequence spread spectrum (DSSS) radio, where the data to be sent is expanded using either the PN code or the inverse of the PN code. The correlator or matched filter then responds with a positive or negative peak when a data bit is detected. To test various correlator designs, a testbench is developed in MATLAB, where a DSSS data sequence can be created and corrupted with an adjustable level of white Gaussian noise. The noisy data stream is filtered with an automatic gain stage and sampled using an ADC of variable resolution and sampling rate. The sampled signal is then fed to a mathematical model of the given correlator design to see how it behaves. For an objective measure of performance in the presence of noise, a novel noise-immunity testbench was developed, which subjects the correlator models to a signal with increasing levels of noise. The SNR at which the correlator is no longer able to extract the correct data bits from the signal is considered the noise immunity level. Several HDL matched filter designs are presented for both 1 and 2 bits of ADC resolution. The 1-bit matched filters are tested using the Barker-11 PN code, whereas the 2-bit correlators are tested using a 36-chip-long chirp sequence. For both the 1- and 2-bit correlators, a specific design type using a multiplexed parallel counter was the most area-efficient. A novel grouping correlator design is also presented for 2-bit operation; however, the area required by this design is larger than that of the other designs. The results from the grouping design indicate a significant reduction in dynamic power.
In terms of power efficiency, the dual correlator designs showed promising results, with half the power consumption of the other designs. The designs of the parallel bit counters used in the matched filters are also presented, along with the area per bit required for each design. Verification of the designs is performed using mathematical correlator models, which are subjected to the same input as the Verilog modules. The results from these two tests are compared, and any discrepancies are reported to the user of the testbench. The mathematical and Verilog correlator models are fed with a simulated real-world input signal, which is essentially random noise for purposes of testing functionality.
8

Abdoel-Gawad, Farag Saleh. « Efficient hardware implementation of the CORDIC algorithm ». Thesis, Liverpool John Moores University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299066.

Full text
9

Fan, Yanan. « Efficient implementation of Markov chain Monte Carlo ». Thesis, University of Bristol, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343307.

Full text
10

Jerez, Juan Luis. « Custom optimization algorithms for efficient hardware implementation ». Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/12791.

Full text
Abstract:
The focus is on real-time optimal decision making with application in advanced control systems. These computationally intensive schemes, which involve the repeated solution of (convex) optimization problems within a sampling interval, require more efficient computational methods than currently available for extending their application to highly dynamical systems and setups with resource-constrained embedded computing platforms. A range of techniques are proposed to exploit synergies between digital hardware, numerical analysis and algorithm design. These techniques build on top of parameterisable hardware code generation tools that generate VHDL code describing custom computing architectures for interior-point methods and a range of first-order constrained optimization methods. Since memory limitations are often important in embedded implementations we develop a custom storage scheme for KKT matrices arising in interior-point methods for control, which reduces memory requirements significantly and prevents I/O bandwidth limitations from affecting the performance in our implementations. To take advantage of the trend towards parallel computing architectures and to exploit the special characteristics of our custom architectures we propose several high-level parallel optimal control schemes that can reduce computation time. A novel optimization formulation was devised for reducing the computational effort in solving certain problems independent of the computing platform used. In order to be able to solve optimization problems in fixed-point arithmetic, which is significantly more resource-efficient than floating-point, tailored linear algebra algorithms were developed for solving the linear systems that form the computational bottleneck in many optimization methods. These methods come with guarantees for reliable operation. 
We also provide finite-precision error analysis for fixed-point implementations of first-order methods that can be used to minimize the use of resources while meeting accuracy specifications. The suggested techniques are demonstrated on several practical examples, including a hardware-in-the-loop setup for optimization-based control of a large airliner.
11

Sustarsic, Alissa Michele. « An Efficient Implementation of the Transportation Problem ». UNF Digital Commons, 1999. http://digitalcommons.unf.edu/etd/81.

Full text
Abstract:
The transportation problem is a special type of linear program in which the objective is to minimize the total cost of shipping a single commodity from a number of sources (m) to a number of destinations or sinks (n). Because of the special structure of the transportation problem, a special algorithm can be designed to find an optimal solution efficiently. Due to the large amount of information in the problem, judicious storage and management of the data are essential requirements of any viable implementation of the transportation algorithm. Using sparse matrix techniques to store the solution array, and a rooted tree as the labeling method for handling the associated information provides a viable method to solve the transportation problem. A difficult test problem was designed to test the computer program and demonstrate its efficiency. We were able to successfully implement the transportation algorithm for problems involving one million possible shipping routes. The FORTRAN code developed is included, as well as the results of several runs of the test problem.
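As context for the transportation algorithm described above: implementations typically seed the method with a basic feasible solution, commonly built with the northwest-corner rule. The sketch below is a generic illustration of that seeding step (it does not reproduce the thesis's sparse-matrix storage or rooted-tree labeling):

```python
def northwest_corner(supply, demand):
    """Build an initial feasible shipment plan for a balanced transportation
    problem (total supply == total demand).

    Returns a dict (source_i, sink_j) -> quantity shipped."""
    supply, demand = list(supply), list(demand)  # copy; we mutate these
    i = j = 0
    alloc = {}
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])   # ship as much as this cell allows
        if q > 0:
            alloc[(i, j)] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            i += 1                      # source exhausted, move down
        else:
            j += 1                      # sink satisfied, move right
    return alloc
```

The resulting plan ignores cost entirely; the transportation algorithm then iteratively improves it toward the minimum-cost solution.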
12

Nau, Lee J. « A Scalable, Memory Efficient Multicore TEIRESIAS Implementation ». Ohio University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1306856233.

Full text
13

Ulis, Bradley J. « Stereo image correspondence methods for efficient hardware implementation ». Thesis, Monterey, California : Naval Postgraduate School, 2010. http://edocs.nps.edu/npspubs/scholarly/theses/2010/Jun/10Jun%5FUlis.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, June 2010.
Thesis Advisor(s): Cristi, Roberto; Second Reader: Fargues, Monique P. "June 2010." Description based on title screen as viewed on July 14, 2010. Author(s) subject terms: Stereo Correspondence, Ordinal Measures, 3D Reconstruction, Dynamic Programming. Includes bibliographical references (p. 75-77). Also available in print.
14

Johansson, John. « Efficient implementation of the Particle Level Set method ». Thesis, Linköping University, Media and Information Technology, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-59579.

Full text
Abstract:

The Particle Level Set method is a successful extension to Level Set methods that improves the volume preservation in fluid simulations. This thesis will analyze how sparse volume data structures can be used to store both the signed distance function and the particles in order to improve access speed and memory efficiency. This Particle Level Set implementation will be evaluated against Digital Domain's current Particle Level Set implementation. Different degrees of quantization will be used to implement particle representations with varying accuracy. These particles will be tested, and both visual results and error measurements will be presented. The sparse volume data structures DB-Grid and Field3D will be evaluated in terms of speed and memory efficiency.

15

Sims, Oliver. « Efficient implementation of video processing algorithms on FPGA ». Thesis, University of Glasgow, 2007. http://theses.gla.ac.uk/4119/.

Full text
Abstract:
The work contained in this portfolio thesis was carried out as part of an Engineering Doctorate (Eng.D) programme from the Institute for System Level Integration. The work was sponsored by Thales Optronics, and focuses on issues surrounding the implementation of video processing algorithms on field programmable gate arrays (FPGA). A description is given of FPGA technology and the currently dominant methods of designing and verifying firmware. The problems of translating a description of behaviour into one of structure are discussed, and some of the latest methodologies for tackling this problem are introduced. A number of algorithms are then looked at, including methods of contrast enhancement, deconvolution, and image fusion. Algorithms are characterised according to the nature of their execution flow, and this is used as justification for some of the design choices that are made. An efficient method of performing large two-dimensional convolutions is also described. The portfolio also contains a discussion of an FPGA implementation of a PID control algorithm, an overview of FPGA dynamic reconfigurability, and the development of a demonstration platform for rapid deployment of video processing algorithms in FPGA hardware.
16

Wei, Dennis. « Design of discrete-time filters for efficient implementation ». Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66470.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 325-333).
The cost of implementation of discrete-time filters is often strongly dependent on the number of non-zero filter coefficients or the precision with which the coefficients are represented. This thesis addresses the design of sparse and bit-efficient filters under different constraints on filter performance in the context of frequency response approximation, signal estimation, and signal detection. The results have applications in several areas, including the equalization of communication channels, frequency-selective and frequency-shaping filtering, and minimum-variance distortionless-response beamforming. The design problems considered admit efficient and exact solutions in special cases. For the more difficult general case, two approaches are pursued. The first develops low-complexity algorithms that are shown to yield optimal or near-optimal designs in many instances, but without guarantees. The second focuses on optimal algorithms based on the branch-and-bound procedure. The complexity of branch-and-bound is reduced through the use of bounds that are good approximations to the true optimal cost. Several bounding methods are developed, many involving relaxations of the original problem. The approximation quality of the bounds is characterized and efficient computational methods are discussed. Numerical experiments show that the bounds can result in substantial reductions in computational complexity.
by Dennis Wei.
Ph.D.
17

Honorato, Mauro Jacob. « Wam based space efficient Prolog implementation in Lisp ». Universidade Federal de Uberlândia, 2015. https://repositorio.ufu.br/handle/123456789/17800.

Full text
Abstract:
This thesis proposes a space-efficient Prolog implementation based on the work of David H. D. Warren and Hassan Aït-Kaci. Common Lisp is the framework used to construct the Prolog system; it was chosen both because it provides a space-efficient environment and because it is a rich programming language, in the sense that it supplies the user with abstractions and new ways of thinking. The resulting system is a new syntax for the initial language that runs on top of the SBCL Common Lisp implementation and can abstract away or exploit the underlying system.
Thesis (Doctorate)
18

Shuvo, Md Kamruzzaman. « Hardware Efficient Deep Neural Network Implementation on FPGA ». OpenSIUC, 2020. https://opensiuc.lib.siu.edu/theses/2792.

Full text
Abstract:
In recent years, there has been a significant push to implement Deep Neural Networks (DNNs) on edge devices, which requires power- and hardware-efficient circuits to carry out the intensive matrix-vector multiplication (MVM) operations. This work presents hardware-efficient MVM implementation techniques using bit-serial arithmetic and a novel MSB-first computation circuit. The proposed designs take advantage of the pre-trained network weight parameters, which are already known at design time; thus, partial computation results can be pre-computed and stored in look-up tables, and the MVM results can then be computed in a bit-serial manner without using multipliers. The proposed circuit implementation for the convolution filters and rectified linear activation function used in deep neural networks conducts computation in an MSB-first bit-serial manner. It can predict early whether the outcome of a filter computation will be negative and subsequently terminate the remaining computations to save power. The benefits of the proposed MVM implementation techniques are demonstrated by comparing the proposed design with a conventional implementation. The proposed circuit is implemented on an FPGA and shows significant power and performance improvements compared to conventional designs implemented on the same FPGA.
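The LUT-based, multiplier-free bit-serial scheme described above is essentially distributed arithmetic. The following sketch illustrates the idea for a single dot product with unsigned inputs, accumulating MSB-first (the thesis's early-termination logic for negative outcomes is omitted, and all names here are illustrative, not taken from the work):

```python
from itertools import product

def precompute_lut(weights):
    """LUT[bit_pattern] = sum of weights whose corresponding input bit is 1.
    Pre-computable because the trained weights are known at design time."""
    n = len(weights)
    return {bits: sum(w for w, b in zip(weights, bits) if b)
            for bits in product((0, 1), repeat=n)}

def bit_serial_dot(lut, xs, nbits):
    """Compute sum(w_j * x_j) without multipliers, one input-bit slice
    per step, MSB first: acc = 2*acc + LUT[current bit slice]."""
    acc = 0
    for b in range(nbits - 1, -1, -1):
        bits = tuple((x >> b) & 1 for x in xs)
        acc = 2 * acc + lut[bits]
    return acc
```

Because the accumulator is refined MSB-first, a hardware version can bound the final value partway through and stop early once the result is guaranteed negative, which is the power-saving trick the abstract describes for ReLU-style filters.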
19

Addluri, Ramya Krishna. « An Efficient Implementation of the Blowfish Encryption Algorithm ». University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1406820252.

Full text
20

Smith, Craig M. « Efficient software implementation of the JBIG compression standard ». Online version of thesis, 1993. http://hdl.handle.net/1850/11713.

Full text
21

Turkyilmaz, Ogun. « Emerging 3D technologies for efficient implementation of FPGAs ». Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENT091/document.

Full text
Abstract:
The ever-increasing complexity of digital systems has made reconfigurable architectures such as Field Programmable Gate Arrays (FPGAs) highly sought after because of their in-field (re)programmability and low non-recurring engineering (NRE) costs. Reconfigurability is achieved with a large number of configuration memory points, which yields extreme application flexibility but, at the same time, significant overheads in area, performance, and power compared to an Application-Specific Integrated Circuit (ASIC) with the same functionality. In this thesis, we propose to design FPGAs with several 3D technologies for more efficient FPGA circuits. First, we integrate resistive-memory-based blocks to reduce routing wirelength and, thanks to non-volatility, widen FPGA applicability to low-power applications. Among the many available technologies, we focus on Oxide Resistive Memory (OxRRAM) and Conductive Bridge Resistive Memory (CBRAM) devices, assessing the unique properties of these technologies in circuit design. As another solution, we design a new FPGA with 3D monolithic integration (3DMI), exploiting high-density interconnects. Starting from two layers with a logic-on-memory approach, we examine various partitioning schemes with an increasing number of integrated active layers to reduce routing complexity and increase logic density. Based on the obtained results, we demonstrate that multi-tier 3DMI is a strong alternative for future technology scaling.
22

Lewis, Gregory Paul. « Repeated Reading : Testing Alternative Models for Efficient Implementation ». DigitalCommons@USU, 2012. https://digitalcommons.usu.edu/etd/1171.

Full text
Abstract:
Repeated reading has been used for over 30 years. In the National Reading Panel Report, repeated reading was listed as an effective strategy for developing fluency, yet its efficacy has recently been questioned. Understanding the "how-to" of efficiently using evidence-based practices would allow teachers to deliver successful, time-sensitive instruction and intervention to students. This study was based on two research questions. First, is a gain score (the increase between a student's first read and their final repeated reading) a better model, and therefore a better advancement criterion, than the currently popular criterion of reaching a set words-read-correctly-per-minute (WRCM) score on a hot read, such as Samuels' criterion of 95 WRCM? Second, which demographic variables, such as age, ethnicity, gender, current reading ability, and socioeconomic status (SES), play a significant role in predicting the effectiveness of weekly repeated reading scores as predictors of midyear and end-of-year benchmark reading measures? The study used a unique theoretical multilevel path model to explore repeated reading. A complex model was developed to study (a) the growth of a student's ability to read words with speed and accuracy and (b) how student demographic features affect growth rates. It was found that a hot-read advancement criterion provided a better model fit than the hypothesized advancement criterion of a student's gain between cold and hot reads. Student growth during repeated reading was found to be constant once a minimum WRCM criterion was reached. While repeated reading was shown to be a strategy that worked equally well for all students, it was highly effective for English-language learners and showed promise in helping to close the achievement gap. Limitations are discussed and recommendations provided.
23

Axell, Christian, and Mikael Brogsten. « Efficient WiMAX Receiver Implementation on a Programmable Baseband Processor ». Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7684.

Full text
Abstract:

WiMAX provides broadband wireless access and uses OFDM as the underlying modulation technique. In an OFDM based wireless communication system, the channel will distort the transmitted signal and the performance is seriously degraded by synchronization mismatches between the transmitter and receiver. Therefore such systems require extensive digital signal processing of the received signal for retrieval of the transmitted information.

In this master thesis, parts of an IEEE 802.16d (WiMAX) receiver have been implemented on a programmable baseband processor. The implemented parts constitute baseband algorithms that compensate for the effects of the channel and synchronization errors. The processor has a new, innovative architecture with an instruction set optimized for baseband applications.

This report includes theory behind the baseband algorithms as well as a presentation of how they are implemented on the processor. An impartial evaluation of the processor performance with respect to the algorithms used in the reference model is also presented in the report.

24

Olgun, Muhammet Ertug. « Design And Fpga Implementation Of An Efficient Deinterleaving Algorithm ». Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12609816/index.pdf.

Full text
Abstract:
In this work, a new deinterleaving algorithm that can be used as part of an ESM system, and its implementation on an FPGA, is studied. The function of the implemented algorithm is to interpret the complex electromagnetic military field in order to detect and identify different RADARs and their types by using incoming RADAR pulses and their PDWs. It is assumed that RADAR signals are received clearly and that the PDW of each pulse is generated as an input to the implemented algorithm. Clustering analysis and a new interpreting process are used to deinterleave the RADAR pulses. An FPGA is used to implement the algorithm in order to achieve a faster and more efficient system. The new algorithm is compared with previous deinterleaving studies, and the simulation results are shown and discussed in detail.
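The abstract does not spell out the clustering step, but the general idea of clustering pulse descriptor words (PDWs) can be sketched as grouping pulses whose parameters, such as carrier frequency (RF) and pulse width (PW), lie within a tolerance of a running cluster centroid. The following is a hypothetical toy version, not the thesis's algorithm; the field names and tolerance values are assumptions:

```python
# Toy PDW deinterleaver (illustrative only): cluster pulses by (RF, PW)
# proximity so that each resulting cluster is one candidate emitter.

def deinterleave(pdws, rf_tol=10.0, pw_tol=0.5):
    """Group PDWs whose (RF, PW) lie within the given tolerances.

    pdws: list of dicts with keys 'toa' (time of arrival),
          'rf' (carrier frequency, MHz) and 'pw' (pulse width, us).
    Returns a list of clusters (lists of PDWs).
    """
    clusters = []  # each: {'rf': centroid, 'pw': centroid, 'pulses': [...]}
    for p in sorted(pdws, key=lambda x: x['toa']):
        for c in clusters:
            if abs(p['rf'] - c['rf']) <= rf_tol and abs(p['pw'] - c['pw']) <= pw_tol:
                c['pulses'].append(p)
                n = len(c['pulses'])
                # running-mean centroid update
                c['rf'] += (p['rf'] - c['rf']) / n
                c['pw'] += (p['pw'] - c['pw']) / n
                break
        else:
            clusters.append({'rf': p['rf'], 'pw': p['pw'], 'pulses': [p]})
    return [c['pulses'] for c in clusters]
```

A real deinterleaver would additionally exploit time-of-arrival differences to recover pulse repetition intervals; this sketch only shows the parameter-space clustering stage.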
25

Gunay, Hazan. « Efficient Fpga Implementation Of Image Enhancement Using Video Streams ». Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611448/index.pdf.

Full text
Abstract:
This thesis is composed of three main parts: displaying an analog composite video input by converting it to digital VGA format, license plate localization on a video image, and image enhancement on FPGA. The analog composite video input, either PAL or NTSC, is decoded on a video decoder board; then, on the FPGA, the video data is converted from 4:2:2 YCbCr format to RGB. To display RGB data on the screen, the line-doubling de-interlacing algorithm is used, since it is efficient in terms of computational complexity and timing. To save processing time, image enhancement is applied only to the beneficial part of the image, which in this thesis is taken to be the number plate. Before image enhancement, the location of the plate on the image must be found. To find it, a successful method, edge finding, is used, based on the idea that the plate lies on the rows where the brightness variation is largest. Because of its fast execution, finite-impulse-response (FIR) band-pass filtering is used to highlight the high-contrast areas. Image enhancement with a rank-order filter is chosen to remove noise from the image: a median filter, a rank-order filter, is designed and simulated. To improve image quality while reducing processing time, the filter is applied only to the part of the image where the plate is. Design and simulation are done using the hardware description language VHDL. Implementations of the chosen approaches are done in MATLAB and on a Xilinx Virtex-2 Pro FPGA, and the implementation is evaluated with respect to speed and area.
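As a rough illustration of the rank-order filtering step described above, here is a minimal 3×3 median filter in plain Python, a generic textbook version rather than the thesis's VHDL design; it operates on a grayscale image stored as a list of lists and leaves the borders untouched:

```python
# Generic 3x3 median (rank-order) filter sketch: each interior pixel is
# replaced by the median of its 3x3 neighbourhood, removing impulse noise.

def median_filter_3x3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # borders copied unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            window.sort()
            out[y][x] = window[4]          # median of 9 samples
    return out
```

In hardware, the sort is typically replaced by a small sorting network so that one output pixel is produced per clock cycle.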
26

Li, Zhipeng Ph D. Massachusetts Institute of Technology. « Efficient baseband design and implementation for high-throughput transmitters ». Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101465.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 139-145).
Wireless communications are accelerating into the realm of higher data rates, from hundreds of megabits to tens of gigabits per second. The increase in data rate requires higher throughput and higher utilization of spectral bandwidth. At the same time, we are seeing a demand for smaller chipsets with lower power budgets. Digital basebands with increased energy-efficiency are needed while fitting within tight area constraints. High spectral efficiency demands modulation schemes with a high peak-to-average power ratio, increasing the precision requirements on the digital baseband circuitry. To enable a new class of energy-efficient millimeter wave communication systems based on outphasing power amplifiers (PAs), we have explored ways to implement high-throughput outphasing baseband functions with the smallest energy and area footprints. Aware of the limitations of field-programmable gate arrays (FPGAs) in throughput and energy-efficiency, we have chosen to implement our digital baseband in application-specific integrated circuits to allow a truly integrated energy-efficient transmitter. By utilizing changes in micro-architecture (parallelism and pipelining) and aggressive back-end power optimization techniques (noncritical path Vt replacement and sizing reductions), we achieve a record energy-efficiency and throughput for an asymmetric-multilevel-outphasing (AMO) signal component separator (SCS) of 32pJ/sample at 0.6V supply voltage and 400Msamples/s, with an area of 0.41mm². For high-throughput area-constrained applications, our static random-access-memory-based AMO SCS design achieves a 2× area reduction over the register-based design at the same throughput, allowing more parallelism to meet stringent throughput requirements.
To compensate for system nonlinearity and memory effects, we implement a zero-avoidance shaping filter in place of the traditional shaping filter to improve convergence in model iterations of an outphasing transmitter, and design an energy- and area-efficient digital predistorter (DPD). We use this DPD architecture to compensate for nonideal phase modulation, preamplifier saturation, and many transmitter nonidealities. Applying this methodology in spice-level simulation, we improve the adjacent-channel-power-ratio (ACPR) of the outphasing Q-band (45GHz) transmitter with 1.1Gsamples/s throughput from -30.6dB to -44.0dB and reduce the error vector magnitude (EVM) from 4.5% down to 1.0% with 64-Quadrature-Amplitude-Modulation (64QAM) and real-time zero avoidance. The energy efficiency of this predistorter at a throughput of 1.1Gsamples/s (3.3Gbps data rate with 64QAM modulation and an oversampling ratio of 2) is 1.5nJ/sample. To illustrate the wide applicability of this linearization methodology, we applied it to compensate for distortion in a radio-frequency PA. We apply the off-line iterative compensation method to a PA with a 1.97GHz carrier frequency and 737Mbps data throughput with 64QAM. We map the designed DPD structure onto FPGAs with a utilization of 144 DSP slices and an energy efficiency of 1.7nJ/sample. To meet an ACPR constraint of -48dB, the uncompensated PA has to back off the input power by 12dB, with 3.3% power efficiency. The compensated PA has to back off by only 6dB, with 9.2% overall transmitter power efficiency including the DPD power, almost 3× the efficiency of the uncompensated PA.
by Zhipeng Li.
Ph. D.
27

Alam, Syed Asad. « Techniques for Efficient Implementation of FIR and Particle Filtering ». Doctoral thesis, Linköpings universitet, Datorteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-124195.

Full text
Abstract:
FIR filters occupy a central place in many signal processing applications, which either alter the shape, frequency content, or sampling frequency of the signal. FIR filters are used because of their stability and the possibility of linear phase, but they require a high filter order to achieve the same magnitude specification as IIR filters. Depending on the size of the required transition bandwidth, the filter order can range from tens to hundreds or even thousands. Since the implementation of these filters in the digital domain requires multipliers and adders, high filter orders translate into a large number of these arithmetic units. Research on reducing the complexity of FIR filters has been going on for decades, and the techniques used can be roughly divided into two categories: reduction in the number of multipliers, and simplification of the multiplier implementation. One technique to reduce the number of multipliers is to use cascaded sub-filters with lower complexity to achieve the desired specification, known as frequency-response masking (FRM). One of the sub-filters is an upsampled model filter whose band edges are an integer multiple, termed the period L, of the target filter's band edges. Other sub-filters may include complement and masking filters, which filter different parts of the spectrum to achieve the desired response. From an implementation point of view, time-multiplexing is beneficial because the maximum clock frequency supported by current state-of-the-art semiconductor technology generally does not correspond to the application-bound sample rate. A combination of these two techniques plays a significant role in the efficient implementation of FIR filters. Part of the work presented in this dissertation is a set of architectures for time-multiplexed FRM filters that benefit from the inherent sparsity of the periodic model filters. These time-multiplexed FRM filters not only reduce the number of multipliers but also lower the memory usage.
Although the FRM technique requires a higher number of delay elements, it results in fewer memories and more energy-efficient memory schemes when time-multiplexed. Different memory arrangements and memory access schemes have also been discussed and compared in terms of their efficiency when using both single- and dual-port memories. An efficient pipelining scheme has been proposed which reduces the number of pipelining registers while achieving similar clock frequencies. The single optimal point where the number of multiplications is minimum for non-time-multiplexed FRM filters is shown to become a function of both the period L and the time-multiplexing factor M. This means that the minimum number of multipliers does not always correspond to the minimum number of multiplications, which also increases the flexibility of implementation. These filters are shown to achieve power reductions between 23% and 68% for the considered examples. To simplify the multiplier, alternative number systems like the logarithmic number system (LNS) have been used to implement FIR filters, reducing multiplications to additions. FIR filters are realized by directly designing them using integer linear programming (ILP) in the LNS domain, in the minimax sense, under finite word length constraints. The branch and bound algorithm, a typical algorithm for solving ILP problems, is implemented based on LNS integers, and several branching strategies are proposed and evaluated. The filter coefficients thus obtained are compared with traditional finite word length coefficients obtained in the linear domain. It is shown that LNS FIR filters provide a better approximation error than a standard FIR filter for a given coefficient word length. FIR filters also offer an opportunity for complexity reduction by implementing the multipliers using Booth or standard high-radix multiplication. Both of these multiplication schemes generate pre-computed multiples of the multiplicand, which are then selected based on the encoded bits of the multiplier.
In transposed direct form (TDF) FIR filters, one input sample is multiplied with a number of coefficients, and complexity can be reduced by sharing the pre-computation of the multiples of the input data across all multiplications. Part of this work is a systematic and unified approach to the design of such computation-sharing multipliers and a comparison of the two forms of multiplication. It also gives closed-form expressions for the cost of the different parts of the multiplication and an overview of various ways to implement the select unit with respect to the design of multiplexers. Particle filters are used to solve problems that require estimation of the state of a system. Improved resampling schemes are proposed for reducing the latency of the resampling stage, using a pre-fetch technique to reduce the latency by between 50% and 95%, depending on the number of pre-fetches. Generalized division-free architectures and compact memory structures are also proposed that map to different resampling algorithms, help reduce the complexity of the multinomial resampling algorithm, and reduce the number of memories required by up to 50%.
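The FRM property the abstract relies on, that upsampling the model filter by L (inserting L−1 zeros between its coefficients) compresses the frequency response so that H_up(e^{jω}) = H(e^{jωL}), can be checked numerically with a small sketch (toy coefficients, not taken from the dissertation):

```python
import cmath

def freq_resp(h, w):
    """Frequency response H(e^{jw}) of an FIR filter with coefficients h."""
    return sum(hk * cmath.exp(-1j * w * k) for k, hk in enumerate(h))

def upsample(h, L):
    """Insert L-1 zeros between coefficients: the FRM periodic model filter."""
    out = []
    for hk in h[:-1]:
        out.extend([hk] + [0.0] * (L - 1))
    out.append(h[-1])
    return out

h = [0.1, 0.25, 0.3, 0.25, 0.1]   # toy low-pass model filter
L = 4
hu = upsample(h, L)
# The upsampled filter evaluated at w equals the model filter at L*w,
# i.e. its band edges shrink by a factor L.
w = 0.2
assert abs(freq_resp(hu, w) - freq_resp(h, w * L)) < 1e-12
```

The zeros are also the sparsity that the time-multiplexed architectures in the dissertation exploit: only every L-th tap carries a nonzero coefficient.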
28

Hosseini, Ehsan, and Gino Rea. « Hardware-Efficient Implementation of the SOVA for SOQPSK-TG ». International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605932.

Full text
Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
In this paper, we present a hardware-efficient architecture of a demodulator for shaped offset quadrature phase shift keying, telemetry group version (SOQPSK-TG). The demodulation is done using the soft-output Viterbi algorithm (SOVA), which is implemented by the two-step traceback method. In this method, two traceback operations are employed to find the maximum-likelihood (ML) path and the competing path. Using the proposed architecture, the tracebacks are done at the same time as the demodulator is generating output bits and their reliabilities. This method has been shown to require less storage than the well-known register-exchange method. Finally, we present the performance results from the FPGA implementation.
29

Maze, Sheldon. « Efficient implementation of the Heston-Hull & White model ». Master's thesis, University of Cape Town, 2014. http://hdl.handle.net/11427/8521.

Full text
Abstract:
Includes bibliographical references.
A model with a stochastic interest rate process correlated to a stochastic volatility process is needed to accurately price long-dated contingent claims. Such a model should also price claims efficiently in order to allow for fast calibration. This dissertation explores the approximations for the characteristic function of the Heston-Hull & White model introduced by Grzelak and Oosterlee (2011). Fourier-cosine expansion pricing, due to Fang and Oosterlee (2008), is then used to price contingent claims under this model, which is implemented in MATLAB. We find that the model is efficient and accurate and has a relatively simple calibration procedure. In back-tests, it is determined that the Heston-Hull & White model produces better hedging profit-and-loss results than a Heston (1993) or a Black and Scholes (1973) model.
30

Lien, E.-Jen. « EFFICIENT IMPLEMENTATION OF ELLIPTIC CURVE CRYPTOGRAPHY IN RECONFIGURABLE HARDWARE ». Case Western Reserve University School of Graduate Studies / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=case1333761904.

Full text
31

Christman, Jordan Louis. « Efficient Digital Spotlighting Phase History Re-Centering Hardware Implementation ». University of Dayton / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1480934083897465.

Full text
32

Heyne, Benjamin. « Efficient CORDIC based implementation of selected signal processing algorithms ». Aachen Shaker, 2008. http://d-nb.info/991790073/04.

Full text
33

Alexander, Steven Wilson. « Efficient arithmetic for high speed DSP implementation on FPGAs ». Thesis, Connect to e-thesis, 2007. http://theses.gla.ac.uk/856/.

Full text
Abstract:
Thesis (Eng.D.) - University of Glasgow, 2007.
Eng.D. thesis submitted to the Faculty of Engineering, Department of Civil Engineering, University of Glasgow, 2007. Includes bibliographical references. Print version also available.
34

Bak, Christopher. « GP 2 : efficient implementation of a graph programming language ». Thesis, University of York, 2015. http://etheses.whiterose.ac.uk/12586/.

Full text
Abstract:
The graph programming language GP 2 (Graph Programs 2) and its implementation are the subject of this thesis. The language allows programmers to write visual graph programs at a high level of abstraction, bringing the task of solving graph-based problems to an environment in which the user feels comfortable and secure. Implementing graph programs presents two main challenges. The first challenge is translating programs from a high-level source code representation to executable code, which involves bridging the gap from a non-deterministic program to deterministic machine code. The second challenge is overcoming the theoretically impractical complexity of applying graph transformation rules, the basic computation step of a graph program. The work presented in this thesis addresses both of these challenges. We tackle the first challenge by implementing a compiler that translates GP 2 graph programs directly to C code. Implementation strategies concerning the storage and access of internal data structures are empirically compared to determine the most efficient approach for executing practical graph programs. The second challenge is met by extending the double-pushout approach to graph transformation with root nodes to support fast execution of graph transformation rules by restricting the search to the local neighbourhood of the root nodes in the host graph. We add this theoretical construct to the GP 2 language in order to support rooted graph transformation rules, and we identify a class of rooted rules that are applicable in constant time on certain classes of graphs. Finally, we combine theory and practice by writing rooted graph programs to solve two common graph algorithms, and demonstrate that their execution times are capable of matching the execution times of tailored C solutions.
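The rooted-matching idea can be made concrete with a minimal sketch (hypothetical data structures, not the GP 2 compiler's): matching starts at a designated root node and inspects only its incident edges, so on host graphs of bounded degree a rule application costs constant time instead of a search over the whole graph.

```python
# Sketch of rooted rule matching: the left-hand side "root -> v" is matched
# only against edges leaving the root, never by scanning all nodes.

def match_edge_from_root(host, root, node_pred, edge_pred):
    """Find a triple (root, v, label) with edge_pred(label) and node_pred(host, v).

    host: adjacency dict mapping node -> list of (target, label) pairs.
    Returns the first match, or None if the rule is not applicable.
    """
    for v, label in host.get(root, []):   # local neighbourhood only
        if edge_pred(label) and node_pred(host, v):
            return root, v, label
    return None
```

With bounded degree, the loop body runs a constant number of times per application, which is the intuition behind the constant-time rule class identified in the thesis.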
35

Katreepalli, Raghava. « Efficient VLSI Implementation of Arithmetic Units and Logic Circuits ». OpenSIUC, 2017. https://opensiuc.lib.siu.edu/dissertations/1471.

Full text
Abstract:
Arithmetic units and logic circuits are critical components of any VLSI system. Realizing efficient arithmetic units and logic circuits is therefore required for better performance of a datapath unit, and hence of a microprocessor or digital signal processor (DSP). Adders are basic building blocks of any processor or datapath application. For the design of high-performance processing units, high-speed adders with low power consumption are a requirement. The carry select adder (CSA) is known to be one of the fastest adders used in many data processing applications. The first contribution of the dissertation is the design of a new CSA architecture using a Manchester carry chain (MCC) in multi-output domino CMOS logic. It employs novel MCC blocks in a hierarchical approach to the design of the CSA. The proposed MCC block is also extended to the design of a power-delay- and area-efficient Vedic multiplier based on "Urdhva-Tiryakbhyam". The simulation results show that the proposed architecture achieves twofold advantages in terms of power-delay product (PDP) and hardware overhead. Apart from adders and multipliers, counters also play a major role in a datapath unit. Counters are basic building blocks in many VLSI applications such as timers, memories, ADCs/DACs, and frequency dividers. The design of counters carries a power overhead because of the high power consumption of clock signal distribution and the undesired activity of flip-flops due to the presence of clocks. The second contribution of the dissertation is a power-efficient design of synchronous counters that reduces the power consumed by clock distribution to the flip-flops and offers high reliability. The simulation results show that the proposed counter design has a lower power requirement and power-area product than existing counter architectures. Pipelines can be used to achieve high circuit operating speeds.
However, as the operating frequency increases, the number of pipeline stages also increases linearly, and so does the number of memory elements. The third contribution of the dissertation is a dynamic memory-less pipeline design based on a sinusoidal three-phase clocking scheme that reduces the power required by the clock and allows high circuit operating frequencies. Finally, the dissertation presents a novel tool for Boolean-function realization with a minimum number of transistors in series. This tool is based on applying new functional decomposition algorithms to decompose the initial Boolean function into a network of smaller sub-functions and subsequently generating the final circuit. The effectiveness of the proposed technique is estimated using circuit-level simulations as well as an automated tool. The number of levels required using the proposed technique is reduced by an average of 70% compared to existing techniques.
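For readers unfamiliar with the carry select adder mentioned above, its principle can be sketched behaviourally: each block computes its sum twice, once for carry-in 0 and once for carry-in 1, and a multiplexer picks the correct result when the real carry arrives. This is a generic textbook CSA model in Python, not the dissertation's MCC-based domino design:

```python
# Behavioural model of a carry-select adder (LSB-first bit lists).

def ripple_add(a_bits, b_bits, cin):
    """Ripple-carry add two equal-length bit lists; return (sum_bits, carry_out)."""
    s, c = [], cin
    for x, y in zip(a_bits, b_bits):
        s.append(x ^ y ^ c)
        c = (x & y) | (c & (x ^ y))
    return s, c

def carry_select_add(a, b, width=16, block=4):
    abits = [(a >> i) & 1 for i in range(width)]
    bbits = [(b >> i) & 1 for i in range(width)]
    out, carry = [], 0
    for i in range(0, width, block):
        ab, bb = abits[i:i + block], bbits[i:i + block]
        s0, c0 = ripple_add(ab, bb, 0)              # speculative: carry-in = 0
        s1, c1 = ripple_add(ab, bb, 1)              # speculative: carry-in = 1
        s, carry = (s1, c1) if carry else (s0, c0)  # the select multiplexer
        out.extend(s)
    result = sum(bit << i for i, bit in enumerate(out))
    return result + (carry << width)
```

In hardware the two speculative additions of every block run in parallel, so the critical path is one block delay plus the multiplexer chain; the software model above necessarily computes them sequentially.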
36

Nurrito, Eugenio. « Scattering networks : efficient 2D implementation and application to melanoma classification ». Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/12261/.

Full text
Abstract:
Machine learning is an approach to solving complex tasks. Its adoption is growing steadily, and the many research groups active in the field regularly publish new and interesting results. In this work, the scattering network representation is used to transform raw images into a set of features convenient for an image classification task, a fundamental machine learning application. This representation is invariant to translations and stable to small deformations. Moreover, it does not need any sort of training, since its parameters are fixed and only some hyper-parameters must be defined. A novel, efficient code implementation is proposed in this thesis. It leverages the parallel architecture of GPUs to achieve performance up to 20× faster than earlier codes, enabling near real-time applications. The source code of the implementation is also released as open source. The scattering network is then applied to a complex dataset of textures to test its behaviour in a general classification task. Given the conceptual complexity of the database, this unspecialized model scores a mere 32.9% accuracy. Finally, the scattering network is applied to a classification task in the medical field. A dataset of images of skin lesions is used to train a model able to classify malignant melanoma against benign lesions. Malignant melanoma is one of the most dangerous skin tumors, but if discovered at an early stage the probability of recovery is high. The trained model has been tested, and an interesting accuracy of 70.5% (sensitivity 72.2%, specificity 70.0%) has been reached. While these values are not high enough to permit the use of the model in a real application, this result demonstrates the great capabilities of the scattering network representation.
37

Larsson, Fredrik. « Efficient implementation of model-checkers for networks of timed automata ». Licentiate thesis, Uppsala universitet, Avdelningen för datorteknik, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-226511.

Full text
Abstract:
Since real-time systems often operate in safety-critical environments, it is extremely important that they function correctly. UPPAAL is a tool that can be used for validation and verification of real-time systems. The user models the system as a network of timed automata and uses a simple logic to express the safety requirements that the modelled system must satisfy to guarantee its correct behaviour. UPPAAL then performs reachability analysis using constraint-solving techniques to check whether the model satisfies the given requirements. In addition, the tool is able to provide the user with a sample execution that explains why a requirement is (or is not) satisfied by the model. The analysis is fully automated. This thesis describes various techniques adopted in implementing UPPAAL, some of which have improved its performance significantly. We have studied the techniques with performance measurements in several case studies. One of the main contributions is the comparison of different strategies for implementing the basic data structures and searching algorithms. The measurements can be used as hints on which parts of the model-checker are most important to optimise. Though the techniques are studied in the context of timed automata, we believe that they are applicable to the implementation of general software tools for automated analysis.
38

Bengtsson, Johan. « Efficient symbolic state exploration of timed systems : Theory and implementation ». Licentiate thesis, Uppsala universitet, Avdelningen för datorteknik, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-86016.

Full text
Abstract:
Timing aspects are important for the correctness of safety-critical systems, and it is crucial that these aspects are carefully analysed in designing such systems. UPPAAL is a tool designed to automate the analysis process. In UPPAAL, a system under construction is described as a network of timed automata, and the desired properties of the system can be specified using a query language. UPPAAL can then be used to explore the state space of the system description to search for states violating (or satisfying) the properties. If such states are found, the tool provides diagnostic information, in the form of executions leading to those states, to help the designers, for example, locate bugs in the design. The major problem for UPPAAL and other tools for timed systems in dealing with industrial-size applications is state space explosion. This thesis studies the sources of the problem and develops techniques for real-time model checkers, such as UPPAAL, to attack it. As contributions, we have developed the notion of committed locations to model atomicity, a local-time semantics for timed systems to allow partial-order reductions, and a number of implementation techniques to reduce time and space consumption in state space exploration. The techniques are studied and compared in case studies. Our experiments demonstrate significant improvements in the performance of UPPAAL.
39

Kwok, Hok-sum, and 郭學深. « The implementation of energy efficient strategies in Hong Kong buildings ». Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31254925.

Full text
40

Intrachooto, Singh. « Technological innovation in architecture : effective practices for energy efficient implementation ». Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/8513.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Architecture, 2002.
Some ill. printed as leaves, numbered as pages, and folded.
Includes bibliographical references (p. 241-248).
The objective of this research is to simultaneously address the environmental concerns in building design and the urgency in the architectural, engineering, and construction industry (AEC) to advance technologically by providing specific responses to the following questions. What are the barriers that a design team faces when introducing environmental strategies and innovations into building projects? What are the mechanisms that can assist design teams to surpass industry standards or even break away from the limits of their own professional training? Ultimately, what is required to successfully implement environmentally sound and technologically innovative solutions in buildings? In order to gain a better insight into these issues, this research examines eight case studies and reconstructs their respective patterns of practices to discover how and why certain AEC teams successfully overcome design, development, and implementation barriers relating to energy efficient innovation (EEI) while most do not. The results of the study are categorized into four distinct, but related, components: (1) implementation techniques, (2) basic team attributes, (3) critical success factors, and (4) the implementation process. Contrary to popular belief, the findings suggest that technological innovation, specifically EEI, is best fostered by team members with prior work experience with each other, as opposed to an assembly of individuals selected solely on the basis of expertise. The repeated collaborations serve multiple functions: technical-risk reduction, financial security, and psychological assurance.
In addition, six key factors of EEI implementation are isolated and organized into two groups: team dynamics and project logistics. Team dynamics encompasses concurrent collaboration, team relational competence, and commitment to environmental goals. Project logistics encompasses external funding; research collaboration; and technical evaluation, demonstration, and validation. A strong relationship was found between the integrated design process and the commitment to EEI. Specifically, contributors of EEI worked in parallel with an expedient feedback loop or explicit feedback period. Interestingly, financial contributions external to the clients' allocated budgets were consistently found and often related to the particular research of at least one member within the team. The direct relationship between research and the resultant innovation suggests that technological innovation is not random, but rather predictable and specific to team members' areas of expertise.
by Singh Intrachooto.
Ph.D.
41

Li, Tiancheng. « Efficient particle implementation of Bayesian and probability hypothesis density filtering ». Thesis, London South Bank University, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.631738.

Full text
Abstract:
Using a set of weighted samples (the particle method) to represent the distribution of interest for filtering under very general hypotheses (often referred to as sequential Monte Carlo, SMC, approaches or particle filters) has gained considerable attention in the last two decades. However, the particle method suffers from problems such as sample depletion, large computational time, and the challenges raised by multi-target tracking (MTT). Aiming to address these problems and challenges, this thesis investigates efficient particle filtering from two perspectives that deal with different numbers of objects of interest: single-object and multi-object. On one side, novel resampling schemes and fast implementations of particle filters are developed under the Bayesian framework. On the other side, improved particle implementations of the probability hypothesis density (PHD) filter, namely particle PHD filters, are presented to deal with MTT. Resampling is a critical step in the implementation of particle filters that is of practical and theoretical significance. Firstly, various resampling methods and new developments are compared and classified into different categories, providing a comprehensive overview of resampling methods. General discussions of the statistical effects of resampling are given, with emphasis on robustness and identical-distribution testing. New deterministic, adaptive, and fast resampling schemes are put forward separately. Further, to increase the computing speed of the particle filter, a fast likelihood computation method based on numerical fitting is proposed, in which the likelihood of particles is numerically fitted by a likelihood probability density function (Li-PDF) instead of being computed directly from the measurements. This is the first attempt to apply numerical fitting to enable real-time particle filtering.
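As background to the resampling discussion, the two classical schemes most often taken as baselines can be sketched as follows (these are the generic textbook multinomial and systematic resamplers, not the improved schemes the thesis proposes):

```python
import bisect
import random

def multinomial_resample(weights, rng):
    """Draw len(weights) particle indices i.i.d. from the weight distribution."""
    cdf, acc = [], 0.0
    for w in weights:
        acc += w
        cdf.append(acc)
    return [bisect.bisect_left(cdf, rng.random() * acc) for _ in weights]

def systematic_resample(weights, rng):
    """One random offset, then a comb of N equally spaced points.

    Lower variance than multinomial, O(N) time, and only one random number
    per resampling step.
    """
    n = len(weights)
    step = sum(weights) / n
    u = rng.random() * step          # single random offset in [0, step)
    out, idx, acc = [], 0, weights[0]
    for _ in range(n):
        while u > acc:               # advance to the particle containing u
            idx += 1
            acc += weights[idx]
        out.append(idx)
        u += step
    return out

rng = random.Random(42)
w = [0.1, 0.1, 0.7, 0.1]
print(systematic_resample(w, rng))   # the heavy particle (index 2) is duplicated
```

Multinomial resampling costs O(N log N) as written because of the binary search per draw; the division-free, low-latency architectures in the thesis target exactly this kind of inner loop.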
42

Frasca Caccia, Gianluca. « A new efficient implementation for HBVMs and their application to the semilinear wave equation ». Doctoral thesis, 2015. http://hdl.handle.net/2158/992629.

Full text
Abstract:
In this thesis we provide a detailed description of the low-rank Runge-Kutta family of Hamiltonian Boundary Value Methods (HBVMs) for the numerical solution of Hamiltonian problems. In particular, we study in detail their main property: the conservation of polynomial Hamiltonians, which results in practical conservation for generic, suitably regular Hamiltonians. This property plays a fundamental role in problems where the error in the Hamiltonian, usually incurred even when using a symplectic method, would not be negligible, to the point of affecting the dynamics of the numerical solution. The research developed in this thesis addresses two main topics. The first is a new procedure, based on a particular splitting of the matrix defining the method, which turns out to be more effective than the well-known blended implementation, as well as than a classical fixed-point iteration when the problem at hand is stiff. This procedure is also applied to second-order problems with separable Hamiltonian function, resulting in a lower computational cost. The second topic is the application of HBVMs to the full discretization of a method-of-lines approach for numerically solving Hamiltonian PDEs. In particular, we consider the semilinear wave equation coupled with periodic, Dirichlet, or Neumann boundary conditions, and the application of a (practically) energy-conserving HBVM to the semi-discrete problem obtained by means of a second-order finite-difference approximation in space. When the problem is coupled with periodic boundary conditions we also consider higher-order finite-difference spatial discretizations and the case where a Fourier-Galerkin method is used for the spatial semi-discretization.
The proposed methods provide a numerical solution whose energy (which may or may not be conserved, depending on the assigned boundary conditions) practically satisfies its prescribed variation in time. A few numerical tests on the sine-Gordon equation give evidence that, for some problems, there is a clear advantage in using an energy-conserving method for the time integration rather than a symplectic one. Moreover, even though HBVMs are implicit methods, their computational cost for the considered problem turns out to be competitive even with that of explicit solvers of the same order, which, furthermore, may suffer from stepsize restrictions due to stability reasons, whereas HBVMs are A-stable.
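The spatial semi-discretization described in the abstract (second-order central differences with periodic boundary conditions) can be sketched for the sine-Gordon equation used in the numerical tests. The Python snippet below is illustrative only: for compactness the semi-discrete system is advanced with a leapfrog (Stormer-Verlet) integrator, a symplectic method whose discrete energy stays in a bounded band around its initial value, rather than with the energy-conserving HBVMs developed in the thesis; the grid and initial data are arbitrary choices.

```python
import numpy as np

# Sine-Gordon equation u_tt = u_xx - sin(u) on [0, L] with periodic
# boundary conditions, semi-discretized by second-order central
# differences (method of lines).
n, L = 64, 20.0
dx = L / n
x = np.arange(n) * dx

def rhs(u):
    # Periodic second difference: (u_{i+1} - 2 u_i + u_{i-1}) / dx^2.
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2 - np.sin(u)

def energy(u, v):
    # Discrete Hamiltonian: kinetic + gradient + potential contributions.
    ux = (np.roll(u, -1) - u) / dx
    return dx * np.sum(0.5 * v**2 + 0.5 * ux**2 + (1.0 - np.cos(u)))

u = 2.0 * np.exp(-((x - L / 2) ** 2))  # smooth, effectively periodic pulse
v = np.zeros(n)                        # v = u_t

dt, steps = 0.01, 1000
E0 = energy(u, v)
# Leapfrog (Stormer-Verlet) time stepping: symplectic, so the discrete
# energy oscillates in a bounded band around E0 instead of drifting.
for _ in range(steps):
    v += 0.5 * dt * rhs(u)
    u += dt * v
    v += 0.5 * dt * rhs(u)

drift = abs(energy(u, v) - E0) / E0
```

An energy-conserving integrator applied to the same semi-discrete system would reduce `drift` to the tolerance of its nonlinear solver rather than merely keeping it bounded.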
43

Li, Yu-Lun, et 李有倫. « Efficient implementation of MP3 decoder ». Thesis, 2002. http://ndltd.ncl.edu.tw/handle/65766481241115154755.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Institute of Computer Science and Information Engineering
90
Software applications of the MPEG-1 Audio Layer 3 (MP3) algorithm have become increasingly widespread, so implementing MP3 in hardware has emerged as a field in its own right. The decoder in particular must meet real-time constraints, which makes a hardware realization attractive. The objective of this thesis is to implement an MP3 decoder in hardware, using a Field Programmable Gate Array (FPGA) as the design environment. Software decoders compute easily and face no restriction on memory size; a hardware design that ignores memory usage, however, increases both cost and power consumption. We therefore propose a more effective memory management method that exploits characteristics of MP3 decoding: it reduces the required memory size while still meeting the real-time constraint.
44

Wang, Guo-Ting, et 王國婷. « Efficient Implementation of FIDO UAF Client ». Thesis, 2016. http://ndltd.ncl.edu.tw/handle/22680485274516945129.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electronics Engineering
104
With the popularity of mobile phones and tablets, more and more people access the Internet from mobile devices. When users log in to a website, typing a password on a phone is far more troublesome than on a traditional PC, yet password-based authentication remains the most common. As a result, users often store their passwords in browsers or apps after the first login, and these security issues are amplified on mobile devices. Several authentication solutions offer higher security than passwords alone, for example adding a one-time password or a hardware token for two-factor authentication, but most are not adopted because of their cumbersome steps. The FIDO Alliance was formed to address the problems of passwords and authentication, and develops specifications for password-less solutions: with a biometric device, the login ecosystem becomes both secure and convenient, and users log in simply by passing identity verification. Apart from the official documents, there are few open resources related to the standards published by FIDO. In this thesis, we implement the client part so that more people can refer to it and pay attention to this issue.
45

Lu, Yi-shan, et 呂易珊. « Efficient Implementation of the Weil Pairing ». Thesis, 2009. http://ndltd.ncl.edu.tw/handle/q6664b.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Computer Science and Engineering
97
The most efficient known algorithms for solving the elliptic curve discrete logarithm problem run in exponential time, which makes the problem suitable for many cryptographic applications. The Weil pairing is a bilinear, non-degenerate mapping from pairs of points on an elliptic curve to the multiplicative group of a finite field. Pairings were found to reduce the elliptic curve discrete logarithm problem to the discrete logarithm problem in a finite field, and have been an important topic ever since. In 1986, Miller proposed an efficient algorithm for computing Weil pairings, and many researchers have focused on improving it. In 2006, Blake et al. proposed reducing the total number of lines based on the conjugate of a line; Liu et al. expanded their concept and proposed two improved methods. In this thesis, we use both NAF and a segmentation algorithm to implement the Weil pairing and analyse its complexity.
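The NAF recoding mentioned above can be sketched in a few lines. This is the standard non-adjacent-form algorithm (an illustration of the technique, not the thesis's implementation): it produces signed digits in {-1, 0, 1} with no two adjacent nonzeros, so on average only one third of the digits are nonzero, versus one half for plain binary, reducing the number of additions in double-and-add loops such as Miller's algorithm.

```python
def naf(k):
    """Non-adjacent form of a positive integer k, least significant
    digit first: digits in {-1, 0, 1} with no two adjacent nonzeros."""
    digits = []
    while k > 0:
        if k & 1:
            d = 2 - (k % 4)  # 1 if k = 1 (mod 4), -1 if k = 3 (mod 4)
            k -= d           # subtracting d makes k divisible by 4 or 2
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits

# 7 = 8 - 1, so the NAF digits (LSB first) are [-1, 0, 0, 1]:
digits7 = naf(7)
```

A scalar-multiplication loop then scans the digits from most significant to least, doubling at every step and adding or subtracting the base point only on nonzero digits.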
46

Pahlevaninezhad, Hamid. « Design and implementation of efficient terahertz waveguides ». Thesis, 2012. http://hdl.handle.net/1828/3977.

Full text
Abstract:
In this thesis, novel broadband waveguides capable of operating at terahertz (THz) frequencies are introduced. We explore the two-wire waveguide in detail, showing that it can have absorption as low as 0.01 cm⁻¹ and fairly good coupling efficiency, and that it is free from group-velocity dispersion (GVD). We also propose two low-loss, planar slot-line structures for guiding THz waves. Rigorous theoretical analyses, numerical simulations, and experimental results are presented to evaluate and verify the performance of the waveguides at THz frequencies. We also present a tapered structure for effectively coupling THz waves from a photoconductive source into a two-wire waveguide. Finally, practical structures to realize the first low-loss THz cable using the two-wire waveguide are introduced.
Graduate
47

Gonçalves, Hélder José Alves. « Towards an efficient lattice basis reduction implementation ». Master's thesis, 2016. http://hdl.handle.net/1822/47824.

Full text
Abstract:
Master's dissertation in Informatics Engineering (specialization in Parallel and Distributed Computing)
The security of most digital systems is under serious threat due to the major technology breakthroughs we are experiencing nowadays. Lattice-based cryptosystems are among the most promising types of post-quantum cryptography, since they are believed to be secure against quantum computer attacks. Their security is based on the hardness of the Shortest Vector Problem and the Closest Vector Problem. Lattice basis reduction algorithms are used in several fields, such as lattice-based cryptography and signal processing; they aim to make these problems easier to solve by obtaining shorter and more orthogonal bases. Some case studies work with numbers of hundreds of digits to ensure harder problems, which requires Multiple Precision (MP) arithmetic. This dissertation presents a novel integer representation for MP arithmetic and the algorithms for the associated operations, MpIM. It also compares these implementations with other libraries, such as the GNU Multiple Precision Arithmetic Library, where our experimental results display similar, and for some operations better, performance. The dissertation also describes a novel lattice basis reduction module, LattBRed, which includes a novel efficient implementation of Qiao's Jacobi method, a Lenstra-Lenstra-Lovász (LLL) algorithm and associated parallel implementations, a parallel variant of the Block Korkine-Zolotarev (BKZ) algorithm and its implementation, and MP versions of Qiao's Jacobi method and the LLL and BKZ algorithms. Experimental performance measurements with the set of implemented modifications of Qiao's Jacobi method show some performance improvements and some degradations, but speedups greater than 100 on Ajtai-type bases.
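The LLL reduction named in the abstract can be illustrated with a compact, exact-arithmetic sketch. This is a textbook version in Python using `fractions.Fraction` (not the dissertation's MpIM/LattBRed code, which targets performance); exact rationals sidestep the floating-point issues that motivate multiple-precision libraries.

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook Lenstra-Lenstra-Lovasz reduction of an integer basis
    (list of linearly independent integer row vectors)."""
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        # Orthogonalized vectors bs and projection coefficients mu.
        bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            w = list(b[i])
            for j in range(i):
                mu[i][j] = dot(b[i], bs[j]) / dot(bs[j], bs[j])
                w = [wi - mu[i][j] * bj for wi, bj in zip(w, bs[j])]
            bs.append(w)
        return bs, mu

    k = 1
    while k < n:
        bs, mu = gram_schmidt()
        # Size reduction: make |mu[k][j]| <= 1/2 for all j < k.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bs, mu = gram_schmidt()
        # Lovasz condition decides whether to advance or swap.
        if dot(bs[k], bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bs[k - 1], bs[k - 1]):
            k += 1
        else:
            b[k - 1], b[k] = b[k], b[k - 1]
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]

# Toy example: the reduced basis spans the same lattice (same |det|)
# but with shorter, more nearly orthogonal vectors.
reduced = lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
```

Recomputing the full Gram-Schmidt data at every iteration keeps the sketch short; production implementations update it incrementally and use floating-point or MP approximations instead of exact rationals.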
48

Wang, Te-Chuan, et 王得權. « Efficient FFT Implementation Using CORDIC-Based Arithmetic ». Thesis, 2002. http://ndltd.ncl.edu.tw/handle/43260882553028239700.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Institute of Computer Science and Information Engineering
90
This thesis presents an efficient implementation of a pipeline FFT processor based on the radix-4 decimation-in-time algorithm with CORDIC-based arithmetic units. By recombining the sequential input samples into parallel data streams, the proposed architecture not only achieves nearly full hardware utilization but also requires much less memory than previous FFT processors. In addition, FFT processors normally require several ROM modules to store twiddle factors; exploiting the redundancy of the factors and storing CORDIC control information instead of multiplicands reduces the overall ROM size by a factor of 4.
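The CORDIC idea behind such arithmetic units, replacing the twiddle-factor multiplier of an FFT butterfly with a sequence of shift-and-add micro-rotations, can be sketched in floating point. This is an illustration of the principle only, not the processor's fixed-point hardware:

```python
import math

def cordic_rotate(x, y, angle, iterations=32):
    """Rotate (x, y) by `angle` radians (|angle| within the CORDIC
    convergence range, about 1.74) using only the micro-rotation
    recurrence; in hardware the 2^-i factors become shifts and the
    atan table and gain K are precomputed constants."""
    K = 1.0
    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0           # rotate toward z = 0
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        z -= d * math.atan(2.0**-i)
        K *= math.sqrt(1.0 + 2.0**(-2 * i))   # accumulated gain
    return x / K, y / K

# Rotating (1, 0) by pi/4 recovers (cos(pi/4), sin(pi/4)).
cx, cy = cordic_rotate(1.0, 0.0, math.pi / 4)
```

In an FFT datapath the per-stage rotation angles are fixed, so the sign decisions `d` can be precomputed; storing those control bits instead of the twiddle factors themselves is what shrinks the ROM.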
49

Chen, Jian-Yu, et 陳建宇. « Efficient LDPC Decoder Implementation for IEEE 802.16e ». Thesis, 2012. http://ndltd.ncl.edu.tw/handle/23zfhb.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Communications Engineering
100
Since the performance of Low-Density Parity-Check (LDPC) codes is very close to the Shannon limit, related work has been widely discussed in the field of channel coding. The highly parallelizable decoding architecture of LDPC codes makes them suitable for parallel decoding at high throughput, so they appear in many digital communication standards, such as WiMAX (802.16e) and Wi-Fi (802.11n). However, when fast, parallelized decoding of quasi-cyclic LDPC (QC-LDPC) codes is applied, memory access conflicts frequently occur in sub-matrices with the same offsets, resulting in decoding latency and low throughput. In this thesis, we propose an efficient design for a parallel LDPC decoder for 802.16e. In the column-major decoding process, the storage of all check-to-variable messages in a row of the parity-check matrix is organized as an efficient chunk; this method verifies the offsets of all sub-matrices in a block row to prevent memory collisions. A pre-process is also proposed to determine the degree of parallelization via column sets, where each column set contains the columns that can be processed in parallel, so decoding latency is minimized. Experimental results from the hardware implementation show that the throughput of the proposed LDPC decoder reaches 241 Mbps at 184 MHz.
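The check-to-variable messages that such a decoder stores as row chunks can be illustrated with the standard min-sum update for a single parity check. This is a textbook sketch in Python, not the scheduling or memory-organization scheme of the thesis:

```python
import numpy as np

def check_node_update(var_msgs):
    """Min-sum check-to-variable messages for one parity check.

    var_msgs: LLRs arriving from the variable nodes connected to this
    check. For each edge, the outgoing message is the product of the
    signs and the minimum magnitude over all *other* edges; storing
    the sign word plus the two smallest magnitudes is what makes a
    per-row chunk compact."""
    m = np.asarray(var_msgs, dtype=float)
    signs = np.sign(m)
    signs[signs == 0] = 1.0
    total_sign = np.prod(signs)
    mags = np.abs(m)
    # min over "all other edges" is min1 everywhere except at min1's
    # own position, where it is min2.
    order = np.argsort(mags)
    min1_idx, min1, min2 = order[0], mags[order[0]], mags[order[1]]
    out = np.full(m.shape, min1)
    out[min1_idx] = min2
    return total_sign * signs * out

msgs = check_node_update([2.5, -1.0, 0.5, 3.0])
```

Because only a sign word, two magnitudes, and the index of the minimum are needed per row, all check-to-variable messages of a parity-check row fit in one small memory chunk regardless of the row degree.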
50

Weng, Wan-Ting, et 翁琬婷. « High Efficient VLSI Implementation of Canny Edge Detection ». Thesis, 2016. http://ndltd.ncl.edu.tw/handle/6898t2.

Full text