Dissertations / Theses on the topic 'Logics of design'

Consult the top 50 dissertations / theses for your research on the topic 'Logics of design.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Yim, Sungshik. "A Retrieval Method (DFM Framework) for Automated Retrieval of Design for Additive Manufacturing Problems." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14553.

Full text
Abstract:
Problem: The process planning task for a given design problem in additive manufacturing can be greatly enhanced by referencing previously developed process plans. However, identifying appropriate process plans for a given design problem requires an appropriate mapping between the design domain and the process planning domain. Hence, the objective of this research is to establish a mathematical mapping between the design domain and the process planning domain such that previously developed, appropriate process plans can be identified for the given design task. Furthermore, identification of an appropriate mathematical theory that enables computational mapping between the two domains is of interest. Through such computational mapping, previously developed process plans are expected to be shared in a distributed environment using an open repository.

Approach: The design requirements and process plans are discretized using empirical models that compute exact values of process variables for the given design requirements. Through this discretization, subsumption relations among the discretized design requirements and process plans are identified. Appropriate process plans for a given design requirement are identified through subsumption relations among the design requirements. Likewise, the design requirements that can be satisfied by given process plans are identified through subsumption relations among the process plans. To realize such mapping computationally, a description logic (ALE) is identified and justified to represent and compute the subsumption relation. Based on this investigation, a retrieval method (the DFM framework) is realized that enables storage and retrieval of process plans.

Validation: Theoretical and empirical validation is performed using the validation square method. For the theoretical validation, an appropriate description logic (ALE) is identified and justified, and the use of subsumption in mapping the two domains and realizing the DFM framework is justified. For the empirical validation, the storage and retrieval performance of the DFM framework is tested to demonstrate its theoretical validity.

Contribution: This research contributes to two areas: DFM and engineering information management. In DFM, the major contribution is the retrieval method that relates a design problem to appropriate process plans through mathematical mapping between the design and process planning domains. In engineering information management, the major contributions are the development of information models and the identification of their characteristics. Based on this investigation, an appropriate description logic (ALE) is selected and justified, and the corresponding computational complexity of subsumption (non-deterministic polynomial time) is identified.
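The retrieval idea sketches easily: represent each discretized design requirement as a set of attribute ranges, and retrieve a stored process plan whenever its requirement description subsumes (is at least as general as) the query. A minimal sketch in Python, using a toy interval semantics rather than the ALE description logic used in the thesis; all plan names and attributes are hypothetical:

```python
# Toy subsumption-based retrieval over attribute intervals (an
# illustration of the idea only, not the thesis's ALE formalism).

def subsumes(general, specific):
    """True if every attribute range in `general` contains the
    corresponding range in `specific`."""
    return all(
        attr in specific
        and general[attr][0] <= specific[attr][0]
        and specific[attr][1] <= general[attr][1]
        for attr in general
    )

# Hypothetical stored process plans, tagged with the design
# requirements (attribute ranges) each is known to satisfy.
plans = {
    "plan_A": {"wall_thickness_mm": (0.5, 3.0), "tolerance_mm": (0.05, 0.5)},
    "plan_B": {"wall_thickness_mm": (1.0, 2.0), "tolerance_mm": (0.1, 0.2)},
}

query = {"wall_thickness_mm": (1.2, 1.8), "tolerance_mm": (0.1, 0.15)}

# A plan is retrieved when its requirement description subsumes the query.
print([name for name, req in plans.items() if subsumes(req, query)])
# ['plan_A', 'plan_B']
```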
2

Yim, Sungshik. "A retrieval method (DFM framework) for automated retrieval of design for additive manufacturing problems." Available online, Georgia Institute of Technology, 2007. http://etd.gatech.edu/theses/available/etd-03012007-113030/.

Full text
Abstract:
Thesis (Ph. D.)--Mechanical Engineering, Georgia Institute of Technology, 2007.
Nelson Baker, Committee Member; Charles Eastman, Committee Member; Christiaan Paredis, Committee Member; Janet Allen, Committee Member; David Rosen, Committee Chair.
3

Romero Moral, Óscar. "Automating the multidimensional design of data warehouses." Doctoral thesis, Universitat Politècnica de Catalunya, 2010. http://hdl.handle.net/10803/6670.

Full text
Abstract:
Previous experiences in the data warehouse field have shown that the data warehouse multidimensional conceptual schema must be derived from a hybrid approach: i.e., by considering both the end-user requirements and the data sources as first-class citizens. As in any other system, requirements guarantee that the system devised meets the end-user's needs. In addition, since the data warehouse design task is a reengineering process, it must consider the underlying data sources of the organization: (i) to guarantee that the data warehouse can be populated with data available within the organization, and (ii) to allow the end-user to discover unknown additional analysis capabilities.

Currently, several methods for supporting the data warehouse modeling task have been proposed. However, they suffer from some significant drawbacks. In short, requirement-driven approaches assume that requirements are exhaustive (and therefore do not consider that the data sources may contain further interesting evidence for analysis), whereas data-driven approaches (i.e., those leading the design task from a thorough analysis of the data sources) rely on discovering as much multidimensional knowledge as possible from the data sources. As a consequence, data-driven approaches generate too many results, which mislead the user. Furthermore, automation of the design task is essential in this scenario, as it removes the dependency on an expert's ability to properly apply the chosen method, as well as the need to analyze the data sources by hand, which is a tedious and time-consuming task (and can be unfeasible when working with large databases). At present, automatable methods follow a data-driven approach, whereas requirement-driven approaches overlook process automation, since they tend to work with requirements at a high level of abstraction. The same situation is repeated in the data-driven and requirement-driven stages of current hybrid approaches, which suffer from the same drawbacks as the pure approaches.

In this thesis we introduce two different approaches for automating the multidimensional design of the data warehouse: MDBE (Multidimensional Design Based on Examples) and AMDO (Automating the Multidimensional Design from Ontologies). Both approaches were devised to overcome the limitations from which current approaches suffer. Importantly, they start from opposite initial assumptions, but both consider the end-user requirements and the data sources as first-class citizens.

1. MDBE follows a classical approach, in which the end-user requirements are well known beforehand. This approach benefits from the knowledge captured in the data sources, but guides the design task according to the requirements; consequently, it is able to handle semantically poorer data sources. In other words, given high-quality end-user requirements, we can guide the process from the knowledge they contain and overcome the drawback of data sources that do not capture the domain adequately.
2. AMDO, as its counterpart, assumes a scenario in which the available data sources are semantically rich. Thus, the proposed approach is guided by a thorough analysis of the data sources, which is then adapted to shape the output according to the end-user requirements. In this context, given high-quality data sources, we can overcome the lack of expressive end-user requirements.

Importantly, our methods establish a combined and comprehensive framework that can be used to decide, according to the inputs available in each scenario, which approach to follow. For example, we cannot follow the same approach in a scenario where the end-user requirements are clear and well known as in one where they are not evident or cannot easily be elicited (e.g., when the users are not aware of the analysis capabilities of their own sources). Interestingly, the need for requirements up front is softened by the availability of semantically rich data sources; lacking that, requirements gain relevance for extracting the multidimensional knowledge from the sources. Thus, the two approaches in combination cover the full range of scenarios discussed in the literature.
4

Forrest, Denise B. "Investigating the logics secondary mathematics teachers employ when creating verbal messages for students: an instance for bridging communication theory into mathematics education." The Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=osu1127218988.

Full text
5

Tarnoff, David. "Episode 4.03 – Combinational Logic." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/computer-organization-design-oer/31.

Full text
Abstract:
Individual logic gates are not very practical. Their power comes when you combine them to create combinational logic. This episode takes a look at combinational logic by working through an example in order to generate its truth table.
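The episode's exercise is mechanical enough to automate. A minimal sketch that tabulates an assumed combinational function, here Y = (A AND B) OR (NOT C):

```python
from itertools import product

# Hypothetical combinational circuit: Y = (A AND B) OR (NOT C).
def circuit(a, b, c):
    return int((a and b) or (not c))

# Enumerate every input combination to build the truth table.
print("A B C | Y")
for a, b, c in product((0, 1), repeat=3):
    print(a, b, c, "|", circuit(a, b, c))
```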
6

Tarnoff, David. "Episode 5.02 – NAND Logic." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/computer-organization-design-oer/39.

Full text
7

Wunderlich, Richard Bryan. "CMOS gate delay, power measurements and characterization with logical effort and logical power." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31652.

Full text
Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Paul Hasler; Committee Member: David V Anderson; Committee Member: Saibal Mukhopadhyay. Part of the SMARTech Electronic Thesis and Dissertation Collection.
8

Husemann, Ronaldo. "Arquitetura de co-projeto hardware/software para implementação de um codificador de vídeo escalável padrão H.264/SVC." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/49305.

Full text
Abstract:
In order to support heterogeneous networks and distinct devices simultaneously, modern multimedia systems can adopt the concept of scalable coding, in which the video stream is composed of multiple layers, each gradually enhancing the exhibition quality according to the capabilities of the receiver. Currently, the H.264/SVC specification represents the state of the art in this area, offering improved coding efficiency but demanding extremely high computational resources. In this context, this work presents a hardware/software co-design architecture that explores the characteristics of the internal algorithms of the H.264/SVC encoder, aiming at the right balance between the two technologies (hardware and software) for a practical implementation of a scalable encoder able to process up to 16 layers in 1920x1080-pixel format. Starting from an H.264/SVC reference code model, refined to reduce overall encoding time, strategies for module partitioning and for data integration between hardware and software were defined, taking into account characteristics such as data dependency and the inherent potential for parallelism, as well as practical restrictions such as communication interfaces and memory accesses. The transform, quantization, deblocking filter and inter-layer prediction modules were implemented in hardware, while system management, entropy coding, rate control and the user interface were kept in software. The complete solution, integrating the hardware modules, synthesized on a development board, with the refined reference software, validates the proposal through the significant performance gains registered, indicating it as an adequate solution for applications that require real-time scalable video coding.
9

Tarnoff, David. "Episode 4.01 – Intro to Logic Gates." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/computer-organization-design-oer/29.

Full text
Abstract:
Logic gates are the fundamental building blocks of digital circuits. In this episode, we take a look at the four most basic gates: AND, OR, exclusive-OR, and the inverter, and show how an XOR gate can be used to compare two digital values.
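The comparison trick is that XOR outputs 1 exactly in the bit positions where its two operands differ, so an all-zero result means the values are equal. A quick illustration with arbitrary example values:

```python
# XOR flags every bit position where the two words disagree;
# a result of 0 means the values are identical.
x, y = 0b1011, 0b1001
diff = x ^ y
print(f"{diff:04b}")  # 0010 -> the values differ only in bit 1
print(diff == 0)      # False -> x and y are not equal
```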
10

Tarnoff, David. "Episode 4.04 – NAND, NOR, and Exclusive-NOR Logic." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/computer-organization-design-oer/32.

Full text
Abstract:
The simplest combinational logic circuits are made by inverting the output of a fundamental logic gate. Despite this simplicity, these gates are vital. In fact, we can realize any truth table using a circuit made only from AND gates with inverted outputs.
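That universality claim is easy to check in a few lines: once NAND is defined, NOT, AND, and OR all follow, and from those any truth table can be realized. A small sketch:

```python
# Building other gates from NAND alone (NAND = AND with inverted output).
def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):            # De Morgan: a OR b = NOT(NOT a AND NOT b)
    return nand(not_(a), not_(b))

# Verify the derived gates against their expected truth tables.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
print("AND and OR realized from NAND alone")
```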
11

Tarnoff, David. "Episode 6.05 – Don’t Cares, the Logical Kind." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/computer-organization-design-oer/45.

Full text
12

Nguyen, Loc Bao. "Logic design using programmable logic devices." PDXScholar, 1988. https://pdxscholar.library.pdx.edu/open_access_etds/4103.

Full text
Abstract:
Programmable Logic Devices (PLDs) have had a major impact on the logic design of digital systems in this decade. For instance, a twenty-pin PLD device can replace from three hundred to six hundred Transistor-Transistor Logic gates, which designers have worked with since the 1960s. Therefore, by using PLD devices, designers can squeeze in more features, reduce chip counts, reduce power consumption, and enhance the reliability of digital systems. This thesis covers the most important aspects of logic design using PLD devices: Logic Minimization and State Assignment. In addition, the thesis covers a seldom-used but very useful design style, Self-Synchronized Circuits. The thesis introduces a new method to minimize two-level Boolean functions using graph coloring algorithms, and the result is very encouraging. The raw speed of the coloring algorithms is as fast as Espresso, the industry-standard minimizer from Berkeley, and the solution is equally good. The thesis also introduces a rule-based state assignment method which gives solutions equal to or better than those of STASH (an Intel automatic CAD tool) by as much as twenty percent. One of the problems with Self-Synchronized Circuits is that they take many extra components to implement. The thesis shows how they can be designed using PLD devices and also suggests the idea of a Clock Chip to reduce the chip count and make the design style more attractive.
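The abstract does not spell out the minimization algorithm, but the flavor of coloring-based grouping can be sketched: treat candidate product terms as vertices, join two terms that cannot be merged into a common implicant, and color the graph greedily so that each color class collects mutually compatible terms. A toy greedy coloring, with the conflict graph simply assumed as input (the thesis's actual compatibility test is not reproduced):

```python
# Greedy graph coloring over a conflict graph of product terms.
# An edge joins two terms that cannot share an implicant; each color
# class then contains only mutually compatible terms.

def greedy_color(vertices, edges):
    adjacent = {v: set() for v in vertices}
    for u, v in edges:
        adjacent[u].add(v)
        adjacent[v].add(u)
    color = {}
    for v in vertices:  # color vertices in the given order
        used = {color[u] for u in adjacent[v] if u in color}
        color[v] = next(c for c in range(len(vertices)) if c not in used)
    return color

terms = ["t0", "t1", "t2", "t3"]
conflicts = [("t0", "t1"), ("t1", "t2"), ("t0", "t3")]
print(greedy_color(terms, conflicts))
# {'t0': 0, 't1': 1, 't2': 0, 't3': 1} -> two groups of compatible terms
```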
13

Willingham, David John. "Asynchrobatic logic for low-power VLSI design." Thesis, University of Westminster, 2010. https://westminsterresearch.westminster.ac.uk/item/9087w/asynchrobatic-logic-for-low-power-vlsi-design.

Full text
Abstract:
In this work, Asynchrobatic Logic is presented. It is a novel low-power design style that combines the energy-saving benefits of asynchronous logic and adiabatic logic to produce systems whose power dissipation is reduced in several different ways. The term “Asynchrobatic” is a new word that can be used to describe these types of systems, derived from the concatenation and shortening of Asynchronous, Adiabatic Logic. This thesis introduces the concept and theory behind Asynchrobatic Logic. It first provides an introductory background to both underlying parent technologies (asynchronous logic and adiabatic logic). The background material continues with an explanation of a number of possible methods for designing the complex data-path cells used in the adiabatic data-path. Asynchrobatic Logic is then introduced through a comparison between asynchronous and Asynchrobatic buffer chains, showing that for wide systems it operates more efficiently. Two more complex sub-systems are presented: first, a layout implementation of the substitution boxes from the Twofish encryption algorithm; and second, a front-end-only simulation (without parasitic capacitances and resistances) that demonstrates a functional system capable of calculating the Greatest Common Divisor (GCD) of a pair of 16-bit unsigned integers, which, under typical conditions on a 0.35 μm process, executed a test vector requiring twenty-four iterations in 2.067 μs with a power consumption of 3.257 nW. These examples show that the concept of Asynchrobatic Logic has the potential to be used in real-world applications, and is not just theory without application. At the time of its first publication in 2004, Asynchrobatic Logic was both unique and ground-breaking, as this was the first time that consideration had been given to operating large-scale adiabatic logic in an asynchronous fashion, and the first time that Asynchronous Stepwise Charging (ASWC) had been used to drive an adiabatic data-path.
14

Nair, Vineet. "On Extending BDI Logics." Griffith University. School of Information Technology, 2003. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20030929.095254.

Full text
Abstract:
In this thesis we extend BDI logics, which are normal multimodal logics with an arbitrary set of normal modal operators, from three different perspectives. Firstly, based on some recent developments in modal logic, we examine BDI logics from a combining-logic perspective and apply combination techniques like fibring/dovetailing to explain them. The second perspective is to extend the underlying logics to include action constructs in an explicit way, based on some recent action-related theories. The third perspective is to adopt a non-monotonic logic like defeasible logic to reason about intentions in BDI. As such, the research captured in this thesis is theoretical in nature and situated at the crossroads of various disciplines relevant to Artificial Intelligence (AI). More specifically, this thesis makes the following contributions:

1. Combining BDI logics through fibring/dovetailing: BDI systems modeling rational agents have a combined system of logics of belief, time and intention, which in turn are basically combinations of well-understood modal logics. The idea behind combining logics is to develop general techniques that produce combinations of existing and well-understood logics. To this end we adopt Gabbay's fibring/dovetailing technique to provide a general framework for combinations of BDI logics. We show that the existing BDI framework is a dovetailed system. Further, we give conditions on the fibring function to accommodate interaction axioms of the type G^{k,l,m,n} (◇^k □^l φ → □^m ◇^n φ) based on Catach's multimodal semantics. This is a major result when compared with other combining techniques like fusion, which fail to accommodate axioms of the above type.

2. Extending the BDI framework to accommodate composite actions: Taking motivation from recent work on BDI theory, we incorporate the notion of composite actions, π1; π2 (interpreted as π1 followed by π2), into the existing BDI framework. To this end we introduce two new constructs, Result and Opportunity, which help in reasoning about the actual execution of such actions. We give a set of axioms that accommodate the new constructs and analyse the set of commitment axioms from the original work against the new framework.

3. Intention reasoning as defeasible reasoning: We argue for a non-monotonic logic of intention in BDI, as opposed to the usual normal modal logic one. Our argument is based on Bratman's policy-based intention. We show that policy-based intention has a defeasible/non-monotonic nature, and hence the traditional normal modal logic approach to reasoning about such intentions fails. We give a formalisation of policy-based intention in the background of defeasible logic. The problem of logical omniscience, which usually accompanies normal modal logics, is avoided to a great extent through such an approach.
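For reference, the interaction schema mentioned in contribution 1 is the Geach/Catach axiom family, whose instances correspond to a confluence condition on the accessibility relations. In LaTeX, my rendering of the standard correspondence (not a quotation from the thesis):

```latex
% Geach/Catach interaction axiom schema
\[
  G^{k,l,m,n}:\quad \Diamond^{k}\Box^{l}\varphi \;\rightarrow\; \Box^{m}\Diamond^{n}\varphi
\]
% Corresponding frame condition ((k,l,m,n)-confluence):
\[
  \forall w\,\forall v\,\forall u\,
  \bigl(wR^{k}v \wedge wR^{m}u\bigr)
  \;\rightarrow\;
  \exists x\,\bigl(vR^{l}x \wedge uR^{n}x\bigr)
\]
```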
15

Graf, Jonathan Peter. "Optimizing Programmable Logic Design Security Strategies." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/89920.

Full text
Abstract:
A wide variety of design security strategies have been developed for programmable logic devices, but less work has been done to determine which are optimal for any given design and any given security goal. To address this, we consider not only metrics related to the performance of the design security practice, but also the likely action of an adversary given their goals. We concern ourselves principally with adversaries attempting to make use of hardware Trojans, although we also show that our work can be generalized to adversaries and defenders using any of a variety of microelectronics exploitation and defense strategies. Trojans are inserted by an adversary in order to accomplish an end. This goal must be considered and quantified in order to predict the adversary's likely action. Our work here builds upon a security economic approach that models the adversary and defender motives and goals in the context of empirically derived countermeasure efficacy metrics. The approach supports formation of a two-player strategic game to determine optimal strategy selection for both adversary and defender. A game may be played in a variety of contexts, including consideration of the entire design lifecycle or only a step in product development. As a demonstration of the practicality of this approach, we present an experiment that derives efficacy metrics from a set of countermeasures (defender strategies) when tested against a taxonomy of Trojans (adversary strategies). We further present a software framework, GameRunner, that automates not only the solution to the game but also enables mathematical and graphical exploration of "what if" scenarios in the context of the game. GameRunner can also issue "prescriptions," sets of commands that allow the defender to automate the application of the optimal defender strategy to their circuit of concern. We also present how this work can be extended to adjacent security domains. Finally, we include a discussion of future work to include additional software, a more advanced experimental framework, and the application of irrationality models to account for players who make subrational decisions.
Doctor of Philosophy
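As a sketch of the game-theoretic core, suppose the empirically derived efficacy metrics are arranged as a payoff matrix with defender strategies as rows and adversary (Trojan) strategies as columns. A conservative pure-strategy choice is then the maximin row. This is a toy rendering with hypothetical strategy names and payoffs, not GameRunner itself:

```python
# Maximin selection over a payoff matrix of countermeasure efficacy.
# Rows: defender strategies; columns: adversary strategies.
# Entries: hypothetical expected payoff to the defender in [0, 1].
payoff = {
    "logic_locking":   [0.7, 0.2, 0.5],
    "runtime_monitor": [0.4, 0.6, 0.5],
    "side_channel_id": [0.3, 0.8, 0.1],
}

# Choose the defender strategy whose worst-case payoff is largest.
best = max(payoff, key=lambda s: min(payoff[s]))
print(best, min(payoff[best]))  # runtime_monitor 0.4
```

A full treatment would also consider mixed strategies and the adversary's own optimization, which is what a two-player game solver provides.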
16

Chen, Kailiang. "Circuit design for logic automata." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/52781.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 143-148).
The Logic Automata model is a universal distributed computing structure which pushes parallelism to the bit-level extreme. This new model drastically differs from conventional computer architectures in that it exposes, rather than hides, the physics underlying the computation by accommodating data processing and storage in a local and distributed manner. Based on Logic Automata, highly scalable computing structures for digital and analog processing have been developed; they are verified at the transistor level in this thesis. The Asynchronous Logic Automata (ALA) model is derived by adding temporal locality, i.e., asynchrony in data exchanges, to the spatial locality of the Logic Automata model. As a demonstration of this incrementally extensible, clockless structure, we designed an ALA cell library in 90 nm CMOS technology and established a "pick-and-place" design flow for fast ALA circuit layout. The work flow gracefully aligns the description of computer programs and circuit realizations, providing a simpler and more scalable solution for Application Specific Integrated Circuit (ASIC) designs, which are currently limited by global constraints such as the clock and long interconnects. The potential of the ALA circuit design flow is tested with example applications for mathematical operations. The same Logic Automata model can also be augmented by relaxing the digital states into analog ones for interesting analog computations. The Analog Logic Automata (AnLA) model is a merge of the Analog Logic principle and the Logic Automata architecture, in which efficient processing is embedded onto a scalable construction. In order to study the unique properties of this mixed-signal computing structure, we designed and fabricated an AnLA test chip in AMI 0.5 μm CMOS technology. Chip tests of an AnLA Noise-Locked Loop (NLL) circuit, as well as application tests of AnLA image processing and Error-Correcting Code (ECC) decoding, show the large potential of the AnLA structure.
by Kailiang Chen.
S.M.
17

Hadjinicolaou, M. G. "Synthesis of programmable logic arrays." Thesis, Brunel University, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.371168.

Full text
18

Marriott, Jack. "Adaptive robust fuzzy logic control design." Thesis, Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/15819.

Full text
19

Mukolera, J. "Logic programming in electrical machine design." Thesis, Imperial College London, 1987. http://hdl.handle.net/10044/1/47359.

Full text
20

Dadone, Paolo. "Design Optimization of Fuzzy Logic Systems." Diss., Virginia Tech, 2001. http://hdl.handle.net/10919/27893.

Full text
Abstract:
Fuzzy logic systems are widely used for control, system identification, and pattern recognition problems. In order to maximize their performance, it is often necessary to undertake a design optimization process in which the adjustable parameters defining a particular fuzzy system are tuned to maximize a given performance criterion. Some data to approximate are commonly available and yield what is called the supervised learning problem. In this problem we typically wish to minimize the sum of the squares of errors in approximating the data.

We first introduce fuzzy logic systems and the supervised learning problem, which, in effect, is a nonlinear optimization problem that at times can be non-differentiable. We review the existing approaches and discuss their weaknesses and the issues involved. We then focus on one of these problems, i.e., non-differentiability of the objective function, and show how current approaches that do not account for non-differentiability can diverge. Moreover, we show that non-differentiability may also have an adverse practical impact on algorithmic performance.

We reformulate both the supervised learning problem and piecewise linear membership functions in order to obtain a polynomial or factorable optimization problem. We propose the application of a global nonconvex optimization approach, namely, a reformulation and linearization technique. The expanded problem dimensionality does not make this approach feasible at this time, even though this reformulation, along with the proposed technique, still bears theoretical interest. Moreover, some future research directions are identified.

We propose a novel approach to step-size selection in batch training. This approach uses a limited-memory quadratic fit on past convergence data. Thus, it is similar to response surface methodologies, but it differs from them in the type of data used to fit the model; that is, already available data from the history of the algorithm are used instead of data obtained according to an experimental design. The step-size along the update direction (e.g., negative gradient or deflected negative gradient) is chosen according to a criterion of minimum distance from the vertex of the quadratic model. This approach rescales the complexity of step-size selection from the order of the (large) number of training data, as in the case of exact line searches, to the order of the number of parameters (generally lower than the number of training data).

The quadratic fit approach and a reduced variant are tested on some function approximation examples, yielding distributions of the final mean square errors that are improved (i.e., skewed toward lower errors) with respect to those of the commonly used pattern-by-pattern approach. Moreover, the quadratic fit is competitive with, and sometimes better than, batch training with optimal step-sizes, thus showing the improved performance of this approach. The quadratic fit approach is also tested in conjunction with gradient deflection strategies and memoryless variable metric methods, showing errors smaller by 1 to 7 orders of magnitude. Moreover, the convergence speed using either the negative gradient direction or a deflected direction is higher than that of the pattern-by-pattern approach, although the computational cost per iteration is moderately higher. Finally, some directions for future research are identified.
Ph. D.
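The step-size rule lends itself to a compact sketch: fit a parabola to recent (step-size, error) pairs drawn from the algorithm's own history, then move to its vertex. A minimal rendering of that idea, with the memory policy and safeguards of the thesis omitted and the history values invented for illustration:

```python
import numpy as np

def quadratic_fit_step(history, fallback=0.01):
    """Choose the next step-size as the vertex of a parabola fitted to
    past (step_size, error) pairs; fall back if the fit is not convex."""
    steps = np.array([s for s, _ in history])
    errors = np.array([e for _, e in history])
    a, b, _c = np.polyfit(steps, errors, 2)  # error ~ a*s^2 + b*s + c
    if a <= 0:              # concave fit: the vertex is a maximum
        return fallback
    return -b / (2 * a)     # vertex of the fitted parabola

# Hypothetical convergence history collected during batch training.
history = [(0.01, 0.80), (0.05, 0.35), (0.10, 0.50)]
print(quadratic_fit_step(history))  # ~0.066
```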
21

Chua, Shin Cheng. "Design and synthesis of reversible logic." Thesis, Curtin University, 2016. http://hdl.handle.net/20.500.11937/1504.

Full text
Abstract:
Energy lost during computation is an important issue in digital design. Today, all electronic devices suffer from energy loss due to the conventional logic system used. The amount of energy lost in the form of heat leads to immense challenges in today's circuit design. To overcome this, reversible logic has been invented. Since the properties of reversible logic differ greatly from those of conventional logic, synthesis methods used for conventional logic cannot be applied to reversible logic. In this dissertation, we propose new synthesis algorithms and several circuit designs using reversible logic.
22

Khan, Shoab Ahmad. "Logic and algorithm partitioning." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/13738.

Full text
23

Chong, Kian Haur. "Self-calibrating differential output prediction logic." Thesis, University of Washington, 2006. http://hdl.handle.net/1773/5985.

Full text
24

Ramakrishnan, Lakshmi Narasimhan. "SDMLp - Secure Differential Multiplexer Logic: Logic Design for DPA-Resistant Cryptographic Circuits." University of Cincinnati / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1311691925.

Full text
25

Wang, Xiaojun. "An interactive, high-level logic synthesis system." Thesis, Staffordshire University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387386.

Full text
26

Padua, C. I. P. S. "A logic synthesis approach to silicon compilation." Thesis, University of Southampton, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.381234.

Full text
27

Dayantis, George. "Types, modularisation and abstraction in logic programming." Thesis, University of Sussex, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.255977.

Full text
28

Parameswaran, Nair Ravi Sankar. "Delay-insensitive ternary logic (DITL)." Diss., Rolla, Mo. : University of Missouri-Rolla, 2007. http://scholarsmine.umr.edu/thesis/pdf/Parameswaran_Nair_09007dcc803bc548.pdf.

Full text
Abstract:
Thesis (M.S.)--University of Missouri--Rolla, 2007.
Vita. The entire thesis text is included in the file. Title from title screen of thesis/dissertation PDF file (viewed November 27, 2007). Includes bibliographical references (p. 55-56).
29

Vasilko, Milan. "Design synthesis for dynamically reconfigurable logic systems." Thesis, Bournemouth University, 2000. http://eprints.bournemouth.ac.uk/291/.

Full text
Abstract:
Dynamic reconfiguration of logic circuits has been a research problem for over four decades. While applications using logic reconfiguration in practical scenarios have been demonstrated, the design of these systems has proved to be a difficult process demanding the skills of an experienced reconfigurable logic design expert. This thesis proposes an automatic synthesis method which relieves designers of some of the difficulties associated with designing partially dynamically reconfigurable systems. A new design abstraction model for reconfigurable systems is proposed in order to support design exploration using the presented method. Given an input behavioural model, a technology server and a set of design constraints, the method will generate a reconfigurable design solution in the form of a 3D floorplan and a configuration schedule. The approach makes use of genetic algorithms. It facilitates global optimisation to accommodate multiple design objectives common in reconfigurable system design, while making realistic estimates of configuration overheads and of the potential for resource sharing between configurations. A set of custom evolutionary operators has been developed to cope with a multiple-objective search space. Furthermore, the application of a simulation technique verifying the results of such an automatic exploration is outlined in the thesis. The qualities of the proposed method are evaluated using a set of benchmark designs taking data from a real reconfigurable logic technology. Finally, some extensions to the proposed method and possible research directions are discussed.
30

Lin, Yu-Jen. "Design of fuzzy logic controllers for FACTS." Thesis, University of Strathclyde, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366675.

Full text
31

Brereton, Margot Felicity. "A logic based approach to factory design." Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/14555.

Full text
32

Sampson, Michael. "The strategic logic of international agreement design." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:5688f2b9-fc86-47c6-9a13-e38fdb181773.

Full text
Abstract:
Conventional wisdom suggests that weak international actors should avoid concluding ambiguous agreements with much stronger partners, because this increases their vulnerability to subsequent exploitation. Why then do we observe so many instances of just such agreements signed under conditions of extreme power asymmetry? I answer this question by emphasising an underappreciated factor shaping the agreement design strategies of actors: power trajectory. Focusing on international trade, I develop a three-part framework which demonstrates, first, that powerful but rising states gain from securing narrow agreements because, as the scope of these agreements is broadened, they are provided with more opportunities to use their growing power to secure increasingly favourable deals. Conversely, powerful but declining states are incentivised to conclude broad agreements as a way to lock in an advantage that will decline over time. Second, I demonstrate that because of the particular vulnerabilities faced by weak states as a result of these narrow agreements, strong but rising powers are often required to make up-front concessions in order to secure their preferred contract and overcome the fears of their weaker counterparts. Third, I show that powerful but rising states can reap the benefits of subsequent rounds of bargaining because the initial agreement has induced the weaker party to make transaction-specific investments which serve to drastically reduce its exit options. In developing this framework, I make three contributions: first, from a theoretical standpoint, I specify more precisely the conditions under which powerful states choose to tie their hands, and so qualify both the liberal claim that powerful states must always do so and the realist suggestion that they strive to maintain freedom of action. Second, I make an empirical contribution by placing the trade policies of four major economic powers in detailed comparative perspective. Finally, I make a substantive contribution by demonstrating yet another mechanism by which the strong secure their preferences at the expense of the weak in international affairs.
33

Kalganova, Tatiana. "Evolvable hardware design of combinational logic circuits." Thesis, Edinburgh Napier University, 2000. http://researchrepository.napier.ac.uk/Output/4341.

Full text
Abstract:
Evolvable Hardware (EHW), as an alternative method for logic design, has become more attractive recently because of its algebra-independent techniques for generating self-adaptive, self-reconfigurable hardware. This thesis investigates and relates both evaluation and evolutionary processes, emphasizing the need to address problems arising from data complexity. Evaluation processes capable of evolving cost-optimised, fully functional circuits are investigated. The need for an extrinsic EHW approach (software models) independent of the concerns of any implementation technology is emphasized. It is also shown how the function description may be adapted for use in the EHW approach. A number of issues in the evaluation process are addressed: these include the choice of optimisation criteria, multi-objective optimisation techniques in EHW, and probabilistic analysis of evolutionary processes. The concept of a self-adaptive extrinsic EHW method is developed. This approach emphasizes circuit layout evolution together with circuit functionality. A chromosome representation for such a system is introduced, and a number of genetic operators and evolutionary algorithms in support of this approach are presented. The genetic operators change the genetic material at different levels of the chromosome representation. Furthermore, the chromosome representation is adapted to the function-level EHW approach. As a result, modularised systems are evolved using multi-output building blocks. This chromosome representation overcomes the problem of long chromosome strings. Together, these techniques facilitate the construction of systems that evolve logic functions of a large number of variables; a method for achieving this using bidirectional incremental evolution is documented. It is demonstrated that the integration of a dynamic evaluation process and the self-adaptive function-level EHW approach allows bidirectional incremental evolution to successfully evolve more complex systems than traditionally evolved before, thereby providing a firm foundation for the evolution of complex systems. Finally, the universality of these techniques is proved by applying them to multi-valued combinational logic design. An empirical study of this application shows that there is no fundamental difference in approach between binary and multi-valued logic design problems.
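The core loop of extrinsic EHW is compact enough to sketch: a chromosome encodes a feed-forward netlist of two-input gates, fitness counts truth-table matches, and a (1+λ) evolutionary strategy mutates the best candidate. A minimal illustration with a toy linear encoding and a parity target, not the thesis's layout-aware chromosome:

```python
import random
from itertools import product

OPS = {"AND": lambda a, b: a & b, "OR": lambda a, b: a | b,
       "NAND": lambda a, b: 1 - (a & b), "XOR": lambda a, b: a ^ b}
N_IN, N_NODES = 3, 8          # primary inputs and gate nodes

def target(a, b, c):          # toy target function: 3-input parity
    return a ^ b ^ c

def random_gene(idx):
    # a gate may read any primary input or any earlier gate output
    return (random.choice(list(OPS)),
            random.randrange(N_IN + idx), random.randrange(N_IN + idx))

def evaluate(genome, inputs):
    signals = list(inputs)
    for op, i, j in genome:
        signals.append(OPS[op](signals[i], signals[j]))
    return signals[-1]        # the last gate drives the circuit output

def fitness(genome):
    return sum(evaluate(genome, bits) == target(*bits)
               for bits in product((0, 1), repeat=N_IN))

best = [random_gene(i) for i in range(N_NODES)]
for _ in range(10_000):       # (1 + 4) evolutionary strategy
    if fitness(best) == 2 ** N_IN:
        break
    children = []
    for _ in range(4):
        child = list(best)
        k = random.randrange(N_NODES)   # point-mutate one gene
        child[k] = random_gene(k)
        children.append(child)
    best = max(children + [best], key=fitness)
print(fitness(best), "of", 2 ** N_IN, "truth-table rows matched")
```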
34

Dara, Chandra Babu. "Design of High Performance Threshold Logic Gates." OpenSIUC, 2015. https://opensiuc.lib.siu.edu/dissertations/1188.

Full text
Abstract:
Threshold logic gates have been gaining importance in recent years due to significant developments in switching devices. This has renewed the interest in high-performance and low-power circuits built with threshold logic gates, which can be implemented using both traditional CMOS technologies and emerging nanoelectronic technologies. In this dissertation, we perform performance analyses of Monostable-Bistable Threshold Logic Element based, current-mode, and memristor-based threshold logic implementations.

Existing analytical approaches that model the delay of a Monostable-Bistable Threshold Logic Element threshold logic gate cannot explore the enormous search space in the quest for weight assignments on the inputs and the threshold in order to optimize the delay of the gate. It is shown that this can be achieved by using a quantity that depends on the constants and Resonant Tunnel Diode weights. This quantity is used to form an integer linear program that optimizes the performance and ensures, by an appropriate weight assignment, that each weight can tolerate a predetermined variation. The presented experimental results demonstrate the impact of the proposed method, and the optimality of our solutions and the reported improvements ensure tolerance to potential manufacturing defects.

Current mode is a popular CMOS-based implementation of threshold logic functions in which the gate delay depends on the sensor size. A new implementation of current-mode threshold functions for improved performance and switching energy is presented, together with an analytical method for quickly identifying the optimum sensor size. Experimental results on different gates with the optimum sensor size indicate that the proposed method consistently outperforms existing implementations and yields high-performance, low-power gates with a very large number of inputs.

Finally, a new dual-clocked design that uses memristors in a current-mode logic implementation of threshold logic gates is presented. Memristor-based designs have high potential to improve performance and energy over purely CMOS-based combinational methods. The proposed designs are clocked, and they outperform a recently proposed combinational method in both performance and energy consumption. It is experimentally verified that both designs scale well in energy consumption as well as delay.
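A threshold logic gate computes a weighted sum of its inputs and fires when the sum reaches the threshold, which is why a single gate can realize functions, such as majority, that need several conventional gates. A minimal functional model with an illustrative weight assignment:

```python
# Threshold gate: output 1 iff the weighted input sum reaches the threshold.
def threshold_gate(inputs, weights, threshold):
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# 3-input majority realized with unit weights and threshold 2.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert threshold_gate((a, b, c), (1, 1, 1), 2) == int(a + b + c >= 2)
print("majority realized by a single threshold gate")
```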
APA, Harvard, Vancouver, ISO, and other styles
36

Roumeliotis, Emmanuel. "Multi-processor logic simulation at the chip level." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/71180.

Full text
Abstract:
This dissertation presents the design and development of a multi-processor logic simulator. After an introduction to parallel processing, the concept of distributed simulation is described, along with the possibility of deadlock in a distributed system; it is proven that the proposed system does not deadlock. Next, the modeling techniques are discussed, together with the timing mechanisms used for logic simulation. A new approach, process-oriented simulation, is studied in depth. It is shown that modeling for this kind of simulation is more efficient than existing simulation methods with respect to modeling ease, computer memory, and simulation time. The hardware design of the multi-processor system and the algorithms for synchronization and signal interchange between the processors are presented next, as is an algorithm for efficiently partitioning the digital network to be simulated among the processors of the system. Apart from simulating a single digital network, the simulator can also be used for fault simulation and design verification. For fault simulation, the fault injection and fault detection techniques are presented. The experimental results obtained by running the multi-processor simulator are compared with the theoretical estimates as well as with results obtained by other multi-processor systems; the comparison shows that the proposed simulator achieves the estimated performance. Finally, the design of a common bus interface is given. This interface will connect the processors of the system directly, without the intervention of the hard disk that was used for the development and testing of the system.
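For background on the kernel being parallelized, the sketch below shows the sequential core of event-driven logic simulation: a time-ordered event queue, with gates re-evaluated only when an input net changes. The netlist, delays, and stimulus are invented for the example; the dissertation distributes a loop of this kind across processors.

import heapq

# Minimal event-driven simulation kernel: a NAND netlist, unit gate
# delays, and a priority queue of (time, net, value) events.
gates = {            # output net -> (input net, input net)
    "n1": ("a", "b"),
    "out": ("n1", "c"),
}
values = {"a": 0, "b": 1, "c": 1, "n1": 1, "out": 0}
events = [(0, "a", 1)]          # drive input 'a' high at t=0

while events:
    t, net, v = heapq.heappop(events)
    if values[net] == v:
        continue                 # no change, no re-evaluation
    values[net] = v
    # Re-evaluate every gate that reads this net.
    for out, (i1, i2) in gates.items():
        if net in (i1, i2):
            new = 1 - (values[i1] & values[i2])   # NAND
            heapq.heappush(events, (t + 1, out, new))
print(values)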
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
37

Zhou, Jing 1959. "LOVERD--a logic design verification and diagnosis system via test generation." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/291686.

Full text
Abstract:
The development of cost-effective circuits is primarily a matter of economy. To achieve it, design errors and circuit flaws must be eliminated during the design process, and considerable effort must be put into all phases of the design cycle. Effective CAD tools are essential for the production of high-performance digital systems. This thesis describes a CAD tool called LOVERD, which consists of automatic test pattern generation (ATPG), fault simulation, design verification, and diagnosis. It uses test patterns, developed to detect single stuck-at faults in the gate-level implementation, to compare the results of the functional-level description and its gate-level implementation. Whenever an error is detected, the logic diagnosis tool can be used to provide useful information to designers. It is shown that certain types of design errors in combinational logic circuits can be detected and located by LOVERD efficiently.
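The single stuck-at fault model that such test patterns target can be illustrated with a toy circuit (hypothetical, not the tool's internals): a test pattern detects a fault exactly when the faulty circuit's output differs from the fault-free output.

from itertools import product

# Fault-free circuit: n = a AND b; out = n OR c.
def simulate(a, b, c, fault=None):
    # fault = (net_name, stuck_value) pins one net to a constant.
    def value(name, v):
        return fault[1] if fault and fault[0] == name else v
    a, b, c = value("a", a), value("b", b), value("c", c)
    n = value("n", a & b)
    return value("out", n | c)

# A pattern detects a fault iff faulty and fault-free outputs differ.
fault = ("n", 0)                       # net n stuck-at-0
tests = [p for p in product((0, 1), repeat=3)
         if simulate(*p) != simulate(*p, fault)]
print("patterns detecting n/0:", tests)   # [(1, 1, 0)]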
APA, Harvard, Vancouver, ISO, and other styles
38

Gani, Sohail M. "A gate matrix approach to VLSI logic layout." Thesis, University of Essex, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.238380.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Clarke, Christopher T. "The implementation and applications of multiple-valued logic." Thesis, University of Warwick, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.386944.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Canal, Bruno. "MCML gate design methodology and the tradeoffs between MCML and CMOS applications." Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/142585.

Full text
Abstract:
This work proposes a simulation-based methodology for designing MOS Current-Mode Logic (MCML) gates and addresses the tradeoffs of MCML versus static CMOS circuits. MCML is a design style developed for high-speed logic circuits; it works on the principle of steering a constant bias current through a fully differential network of input transistors. The proposed methodology uses the quadratic transistor model to find an initial sizing and then, through SPICE simulations, analyzes the gate's behaviour and resizes it to meet the required specification. The method assumes uniform sizing of the pull-down network (PDN) transistors, and the target is the best propagation delay for a predefined gate noise margin. MCML gates of up to three inputs were designed for three process technologies (XFAB XC06, IBM130, and PTM45). The solutions of the proposed methodology were compared against a commercial optimization tool, Wicked™, which allows different sizing for the PDN differential pairs; the tool improved the worst-case input delay by 20% for the XFAB XC06 technology and by 3% for IBM130. Ring-oscillator simulations demonstrate that MCML gates are better than static CMOS gates for high-speed circuits with short logic paths, particularly with respect to power dissipation at high frequencies. Moreover, MCML frequency dividers reached maximum operating frequencies nearly double those achieved by CMOS frequency dividers while dissipating less power. A reliability analysis shows that the analog nature of MCML gates makes them susceptible to PVT variations: global variations can be compensated by the bias-control circuits and by widening the PDN transistors, which restores the gain these transistors lose in worst-case corners at the cost of degraded propagation delay, whereas mismatch effects, for which there is no compensation method, heavily affect reliability; differences in the mirrored bias current and mismatch in the differential pairs and the pull-up network degrade design yield. Layout-extracted simulations show that MCML gates achieve better propagation delay than standard CMOS gates that depend on complex pull-up networks, as well as multi-stage static CMOS gates; however, the standard pull-up and bias-current-mirror structures in each gate make MCML consume more area than the CMOS topology.
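The "quadratic transistor model" used for the initial sizing is the long-channel square-law MOSFET equation, I_D = 0.5·k'·(W/L)·V_ov², which gives a closed-form first-cut width before SPICE refinement. A rough sketch under stated assumptions (square-law validity, full current steering, and illustrative process numbers rather than the thesis's values):

# First-cut MCML sizing from the square-law model:
#   I_D = 0.5 * kn * (W/L) * Vov**2   (saturation)
# Given a chosen bias current and overdrive, solve for W; the load
# resistance then sets the differential output swing. All numbers
# below are illustrative, not from the thesis.
kn   = 170e-6    # NMOS process transconductance, A/V^2 (assumed)
L    = 0.6e-6    # channel length, m (e.g. a 0.6 um process)
Iss  = 100e-6    # tail bias current, A
Vov  = 0.3       # overdrive voltage Vgs - Vt, V
Vsw  = 0.4       # desired single-ended output swing, V

W = 2 * Iss * L / (kn * Vov**2)      # width so the pair steers all of Iss
R_load = Vsw / Iss                   # load sets the swing: Vsw = Iss * R
print(f"W  = {W*1e6:.2f} um")
print(f"R  = {R_load/1e3:.1f} kOhm")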
APA, Harvard, Vancouver, ISO, and other styles
41

Srinivasan, Venkataramanujam. "Gigahertz-Range Multiplier Architectures Using MOS Current Mode Logic (MCML)." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/9643.

Full text
Abstract:
The tremendous advancement in VLSI technologies in the past decade has fueled the need for intricate tradeoffs among speed, power dissipation, and area. With gigahertz-range microprocessors becoming commonplace, it is a typical design requirement to push speed to its extreme while minimizing power dissipation and die area. Multipliers are critical components of many computationally intensive circuits, such as real-time signal processing and arithmetic systems. The increasing demand for speed in floating-point co-processors, graphics processing units, CDMA systems, and DSP chips has shaped the need for high-speed multipliers. The focus of our research is twofold. The first goal is to analyze a relatively unexplored logic style called MOS Current Mode Logic (MCML), a promising technique for the design of high-performance arithmetic circuits with minimal power dissipation. The second is to design high-speed arithmetic circuits, in particular gigahertz-range multipliers that exploit the many attractive features of the MCML logic style. A small library of MCML gates that form the core components of the multiplier was designed and optimized for high-speed operation. The three 8-bit MCML multiplier architectures designed and simulated in TSMC 0.18 µm CMOS technology are: a 3-2-tree architecture with ripple carry adder (Architecture I), a 4-2-tree design with ripple carry adder (Architecture II), and a 4-2-tree architecture with carry look-ahead adders (Architecture III). Architecture I operates with a maximum throughput of 4.76 GHz (4.76 billion multiplications per second) and a latency of 3.78 ns; Architecture II has a maximum throughput of 3.3 GHz and a latency of 3 ns; and Architecture III has a maximum throughput of 2 GHz and a latency of 3 ns. Architecture I achieves the highest throughput among the three multipliers, but it incurs the largest area and latency, in terms of both clock cycle count and absolute delay. Although it is difficult to compare the speed of our multipliers with existing ones, due to the use of different technologies and different optimization goals, we believe our multipliers are among the fastest found in contemporary literature.
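The "3-2 tree" in Architecture I is built from carry-save adders: each 3:2 compressor (a full adder used without a carry chain) turns three partial-product bits into a sum bit and a carry bit, so each reduction level is free of carry propagation. A behavioural sketch of one reduction step (illustrative, not the thesis netlist):

def compressor_3_2(a, b, c):
    # A 3:2 compressor is a full adder used without a carry chain:
    # three input bits in, a sum bit and a carry bit out.
    s = a ^ b ^ c
    carry = (a & b) | (b & c) | (a & c)
    return s, carry

def carry_save_step(x, y, z):
    # Reduce three operands to two, column by column; the carry word
    # is shifted left one position before the next level.
    n = max(x.bit_length(), y.bit_length(), z.bit_length())
    sum_word, carry_word = 0, 0
    for i in range(n):
        s, c = compressor_3_2((x >> i) & 1, (y >> i) & 1, (z >> i) & 1)
        sum_word |= s << i
        carry_word |= c << (i + 1)
    return sum_word, carry_word

s, c = carry_save_step(13, 9, 7)
assert s + c == 13 + 9 + 7        # a final fast adder resolves s + c
print(s, c, s + c)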
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
42

Zhuang, Nan. "Logic synthesis and technology mapping using genetic algorithms." Thesis, Imperial College London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.286760.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Luria, David M. "Logic Encryption for Resource Constrained Designs." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1613742372174729.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Midde, Bharath Reddy. "Design, analysis, and synthesis of 16 bit arithmetic logic unit using reversible logic gate." Thesis, California State University, Long Beach, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10099864.

Full text
Abstract:

In the modern world, the Arithmetic Logic Unit (ALU) is one of the most crucial components of an embedded system and is used in many devices such as calculators, cell phones, and computers. An ALU is a multi-functional circuit that conditionally performs one of several possible functions on two operands A and B, depending on control inputs. It is the main computational engine of any computing device. This project proposes the design of programmable reversible logic gate structures targeted for ALU implementation, and their use in the realization of an efficient reversible ALU. The ALU supports sixteen operations: the arithmetic operations include addition, subtraction, and multiplication, and the logical operations include AND, OR, NOT, and XOR. All modules are designed using basic reversible gates.

Using reversible logic gates instead of traditional AND/OR logic gates, a reversible ALU is constructed whose function is the same as that of a traditional ALU. Compared with the number of input bits and discarded bits of the traditional ALU, the reversible ALU significantly reduces the use and loss of information bits. The proposed reversible 16-bit ALU reuses information bits and lowers the delay of the logic circuits by approximately 42%. The programmable reversible logic gates are realized in Verilog HDL.
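To make the reversibility argument concrete, here is a generic sketch (standard reversible gates, not necessarily the project's exact gate set): each reversible gate is a bijection on its input bits, so no information is discarded; a Toffoli gate yields AND and a Feynman (CNOT) gate yields XOR and copying.

from itertools import product

def feynman(a, b):
    # CNOT: (a, b) -> (a, a XOR b); with b = 0 this copies a (no fanout).
    return a, a ^ b

def toffoli(a, b, c):
    # CCNOT: (a, b, c) -> (a, b, c XOR (a AND b)); c = 0 yields AND.
    return a, b, c ^ (a & b)

# Reversibility check: the mapping over all inputs is a bijection.
images = {toffoli(*bits) for bits in product((0, 1), repeat=3)}
assert len(images) == 8

# AND without information loss: ancilla input c = 0 gives outputs
# (a, b, a AND b); the two pass-through outputs are the "garbage"
# bits that reversible-design papers count and minimize.
print([toffoli(a, b, 0) for a, b in product((0, 1), repeat=2)])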

APA, Harvard, Vancouver, ISO, and other styles
45

Walder, Herbert H. "Operating system design for partially reconfigurable logic devices /." Zürich : Institut für Technische Informatik und Kommunikationsnetze TIK, Eidgenössische Technische Hochschule ETH Zürich, 2005. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=15955.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Vural, Ozgur Ahmet. "Fuzzy Logic Guidance System Design For Guided Missiles." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1026715/index.pdf.

Full text
Abstract:
This thesis involves the modeling, guidance, control, and flight simulation of a canard-controlled guided missile. The autopilot is designed by a pole placement technique and is used with the guidance systems considered in the thesis. Five different guidance methods are applied, one of which is the well-known proportional navigation guidance; the other four are fuzzy logic guidance systems designed around different types of guidance inputs. Simulations are run against five different target types, and the performances of the five guidance methods are compared and discussed.
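As a generic illustration of the kind of fuzzy guidance rule base the abstract refers to (membership functions, rule set, and output scale are all invented for this sketch): a guidance input such as the line-of-sight rate is fuzzified, each rule fires to a degree, and a weighted centroid yields the commanded acceleration.

def tri(x, a, b, c):
    # Triangular membership function peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_guidance(los_rate):
    # Rules: negative LOS rate -> steer left, zero -> hold, positive -> right.
    # Output universe: commanded lateral acceleration in g (illustrative).
    degrees = {
        -3.0: tri(los_rate, -0.2, -0.1, 0.0),   # steer left hard
         0.0: tri(los_rate, -0.1,  0.0, 0.1),   # hold course
        +3.0: tri(los_rate,  0.0,  0.1, 0.2),   # steer right hard
    }
    num = sum(a_cmd * mu for a_cmd, mu in degrees.items())
    den = sum(degrees.values())
    return num / den if den else 0.0            # centroid defuzzification

for rate in (-0.15, -0.05, 0.0, 0.08):
    print(rate, "->", round(fuzzy_guidance(rate), 2), "g")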
APA, Harvard, Vancouver, ISO, and other styles
47

Henninen, Svein Rypdal. "Application of asynchronous design to microcontroller startup logic." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-16349.

Full text
Abstract:
Digital circuits designed today are almost exclusively clocked. As designs grow in size, it becomes harder to distribute the various clock signals effectively over the circuit, and the clock is also a major contributor to a circuit's power consumption. Some work is being done to provide alternatives to standard synchronous design; one of these alternatives is the Balsa system. Several versions of an asynchronous module for controlling the startup process of a microcontroller were made in Balsa and compared to a standard synchronous implementation. Area estimates for the best asynchronous implementation give a figure more than four times larger than for the synchronous implementation. The asynchronous implementation has other advantages, though: it has no dynamic power consumption when it is in a stable state, and it can operate closer to the sub-threshold region. The asynchronous implementations have been tested and found working in Active-HDL. Balsa generated Verilog netlists in a 350 nm library from the Balsa language description, and Design Compiler from Synopsys was used to obtain the area estimates. The asynchronous implementations show potential, especially with regard to reduced power consumption.
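For context on what Balsa synthesizes: asynchronous stages coordinate by request/acknowledge handshakes instead of a clock edge. The sketch below simulates the classic four-phase (return-to-zero) handshake between two stages; it illustrates the protocol only and is not Balsa output.

# Four-phase (return-to-zero) handshake between a sender and receiver,
# modelled with generators standing in for asynchronous hardware stages.
def sender(channel, data_items):
    for d in data_items:
        channel["data"] = d
        channel["req"] = 1          # phase 1: raise request with data
        yield
        assert channel["ack"] == 1  # phase 2: acknowledge observed
        channel["req"] = 0          # phase 3: withdraw request
        yield
        assert channel["ack"] == 0  # phase 4: acknowledge released

def receiver(channel, sink):
    while True:
        if channel["req"] == 1 and channel["ack"] == 0:
            sink.append(channel["data"])   # latch data on request
            channel["ack"] = 1
        elif channel["req"] == 0 and channel["ack"] == 1:
            channel["ack"] = 0             # complete the return-to-zero
        yield

channel = {"req": 0, "ack": 0, "data": None}
received = []
s, r = sender(channel, [7, 8, 9]), receiver(channel, received)
for _ in range(20):                        # interleave the two processes
    next(r)
    try:
        next(s)
    except StopIteration:
        break
print(received)                            # [7, 8, 9]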
APA, Harvard, Vancouver, ISO, and other styles
48

Tomczuk, Randal Wade. "Autocorrelation and decomposition methods in combinational logic design." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq21952.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Valdés, Francisco. "Design of a fuzzy logic software estimation process." Mémoire, École de technologie supérieure, 2011. http://espace.etsmtl.ca/983/1/VALD%C3%89S_Francisco.pdf.

Full text
Abstract:
This research describes the design of a fuzzy logic process for software project estimation. Studies show that most software projects exceed their budget or overrun their planned schedule, even though organizations have for years been making efforts to raise the success rate of software projects by making the process easier to manage and, consequently, more predictable. Project estimation is an important issue because it is the basis for quantifying, allocating, and managing the resources a project needs. When software project estimates are not carried out properly, organizations face a high level of risk in their projects, which can lead to losses for the organization instead of the expected profits that justified starting the projects. The most important estimates must be made at the beginning of the development cycle (i.e., at the project conceptualization phase); at that point, information is available only at a very high level of abstraction and is often based on a number of unverifiable assumptions. The approach generally used to estimate projects in the software industry is based on the experience of the organization's employees, also known as the 'expert judgment' approach. Of course, a number of problems are associated with using expert judgment for estimation: for example, the assumptions are implicit, and the experience is strongly tied to the experts rather than to the organization. The research goal of this thesis was to design a software project estimation process able to cope with the lack of detailed, quantitative information in the early phases of the software development life cycle. The strategy chosen for this research leverages the advantages of the experience-based approach, which can be used in the early phases of software project estimation, while addressing some of the major problems created by expert-judgment estimation. Fuzzy logic was proposed as the research approach because it is a formal way to handle uncertainty and the linguistic variables available in the early phases of a software development project: a fuzzy-logic-based system makes it possible to capture the organization's experience through its experts and their definitions of inference rules. The specific research objectives to be achieved by this improved estimation process are: A. The proposed estimation process must use appropriate techniques to manage uncertainty and ambiguity, as practitioners do when they apply their 'expert judgment' to software project estimation; the proposed estimation process must use the variables that practitioners use. B. The proposed estimation process must be useful at an early stage of the software development process. C. The proposed estimation process must preserve the experience (or knowledge base) for the organization and include an easy mechanism for capturing the experts' knowledge. D. The proposed model must be usable by people with skills different from those of the 'experts' who define the original context of the proposed estimation model. E.
For estimation in the context of the early phases, a fuzzy-logic-based estimation process was proposed: Estimation of Projects in a Context of Uncertainty (EPCU). An important feature of this thesis is the use, for experimentation and verification, of information from industry projects in Mexico. The experimentation phase comprises three scenarios. Scenario A: the proposed estimation process must use appropriate techniques for managing uncertainty and ambiguity so as to make it easier for stakeholders to produce their estimates, and it must take into account the variables that stakeholders use. Scenario B: similar to Scenario A, except that the projects are just starting, so the final duration and cost values are not available for comparison. Scenario C: to remedy the lack of information in Scenario B, Scenario C consists of a simulation experiment. These experiments led to the conclusion that, for the projects examined in the three scenarios, using the defined estimation process, EPCU, yields better results than the expert-opinion approach, and that EPCU can be used for early estimation of software projects with good results. To handle the amount of computation required by the EPCU estimation model and to record and manage the information it generates, a software tool was designed and developed as a research prototype to perform the necessary calculations.
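A rough sketch of how a fuzzy estimation process in the spirit of EPCU can encode expert judgment (the linguistic variables, membership functions, and rule consequents below are invented for illustration): practitioners' linguistic ratings are fuzzified, and an expert-defined rule base is aggregated into an effort figure.

# Linguistic inputs rated by practitioners on a 0-5 scale, fuzzified
# into terms; a small expert-defined rule base maps them to effort.
def grade(x, low, high):
    # Degree to which x has risen from 'low' to 'high' (ramp function).
    return max(0.0, min(1.0, (x - low) / (high - low)))

def fuzzify(x):
    return {"low": 1.0 - grade(x, 1.0, 4.0), "high": grade(x, 1.0, 4.0)}

# Rules: (complexity term, team-experience term) -> effort in person-months.
RULES = {
    ("low", "high"): 4, ("low", "low"): 8,
    ("high", "high"): 12, ("high", "low"): 24,
}

def estimate(complexity, experience):
    cx, ex = fuzzify(complexity), fuzzify(experience)
    # Each rule fires to the degree min(antecedents); take the
    # firing-strength-weighted average of the rule consequents.
    num = den = 0.0
    for (c_term, e_term), effort in RULES.items():
        strength = min(cx[c_term], ex[e_term])
        num += strength * effort
        den += strength
    return num / den if den else 0.0

print(round(estimate(complexity=4.0, experience=2.0), 1), "person-months")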
APA, Harvard, Vancouver, ISO, and other styles
50

Kotiyal, Saurabh. "Design Methodologies for Reversible Logic Based Barrel Shifters." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4106.

Full text
Abstract:
Reversible logic has promising applications in emerging computing paradigms such as quantum computing, quantum-dot cellular automata, and optical computing. In reversible logic gates there is a unique one-to-one mapping between the inputs and outputs. To realize a useful gate function, reversible gates require some constant auxiliary inputs, called ancilla inputs, and to maintain reversibility some additional unused outputs are required, referred to as garbage outputs. The number of ancilla inputs, the number of garbage outputs, and the quantum cost play an important role in the evaluation of reversible circuits, so minimizing these parameters is important for designing efficient reversible circuits. The barrel shifter is an integral component of many computing systems owing to its useful property of shifting or rotating multiple bits in a single cycle. The main contribution of this thesis is a set of design methodologies for the reversible realization of barrel shifters, where the designs are based on the Fredkin gate and the Feynman gate. The Fredkin gate can implement the 2:1 MUX with minimum quantum cost, minimum number of ancilla inputs, and minimum number of garbage outputs, while the Feynman gate can be used to avoid fanout, since fanout is not allowed in reversible logic. The design methodologies considered in this work target: (1) a reversible logical right shifter; (2) a reversible universal right shifter that supports logical right shift, arithmetic right shift, and right rotate; (3) a reversible bidirectional logical shifter; (4) a reversible bidirectional arithmetic and logical shifter; and (5) a reversible universal bidirectional shifter that supports bidirectional logical and arithmetic shift and rotate operations. The proposed design methodologies are evaluated in terms of the number of garbage outputs, the number of ancilla inputs, and the quantum cost. The detailed architecture and design of an (8,3) reversible logical right shifter and an (8,3) reversible universal right shifter are presented to illustrate the proposed methodologies.
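A concrete rendering of the stated design principle (illustrative Python, not the thesis's netlists): the Fredkin gate is a controlled swap, which behaves as a reversible pair of 2:1 MUXes, and cascading conditionally-shifting stages of distance 1, 2, and 4 yields an (8,3) reversible logical right shifter. The zero fill bits play the role of the ancilla inputs, and the unused Fredkin outputs are the garbage outputs counted in the abstract.

def fredkin(ctrl, a, b):
    # Controlled swap: (ctrl, a, b) -> (ctrl, b, a) when ctrl = 1.
    return (ctrl, b, a) if ctrl else (ctrl, a, b)

def shift_stage(bits, select, distance):
    # One barrel-shifter stage: when 'select' is 1, every bit moves
    # right by 'distance'; ancilla zeros fill in from the left. Each
    # output is the second output of one Fredkin gate acting as a
    # 2:1 MUX between the 'stay' and 'moved' candidates.
    padded = [0] * distance + bits            # ancilla zero inputs
    out = []
    for i in range(len(bits)):
        stay, moved = bits[i], padded[i]
        _, y, _garbage = fredkin(select, stay, moved)
        out.append(y)
    return out

def barrel_right_shift(bits, amount):
    # (8,3) logical right shifter: stages of distance 1, 2, and 4,
    # enabled by the three bits of 'amount'.
    for k in range(3):
        bits = shift_stage(bits, (amount >> k) & 1, 1 << k)
    return bits

word = [1, 0, 1, 1, 0, 1, 1, 1]               # MSB first
print(barrel_right_shift(word, 3))             # [0, 0, 0, 1, 0, 1, 1, 0]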
APA, Harvard, Vancouver, ISO, and other styles
