Follow this link to see other types of publications on the topic: Interpretable By Design Architectures.

Theses / dissertations on the topic "Interpretable By Design Architectures"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

See the 50 best works (theses / dissertations) for your research on the topic "Interpretable By Design Architectures".

Next to each source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is available in the metadata.

Browse theses / dissertations from a wide variety of scientific fields and compile a correct bibliography.

1

Jeanneret, Sanmiguel Guillaume. "Towards explainable and interpretable deep neural networks". Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMC229.

Abstract:
Deep neural architectures have demonstrated outstanding results in a variety of computer vision tasks. However, their extraordinary performance comes at the cost of interpretability. As a result, the field of Explainable AI has emerged to understand what these models are learning as well as to uncover their sources of error. In this thesis, we explore the world of explainable algorithms to uncover the biases and variables used by these black-box models in the context of image classification. To this end, we divide this thesis into four parts. The first three chapters propose several methods to generate counterfactual explanations. In the first chapter, we incorporate diffusion models to generate these explanations. Next, we link the research areas of adversarial attacks and counterfactuals. The following chapter proposes a new pipeline to generate counterfactuals in a fully black-box mode, i.e., using only the input and the prediction without accessing the model. The final part of this thesis is related to the creation of interpretable-by-design methods. More specifically, we investigate how to extend vision transformers into interpretable architectures. Our proposed methods have shown promising results and have made a step forward in the knowledge frontier of the current XAI literature.
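For readers unfamiliar with the term, the sketch below illustrates what a counterfactual explanation is in its simplest form: a gradient-based search that nudges an input until a classifier changes its prediction, while staying close to the original image. It is a generic toy example, not the diffusion-based, adversarial, or black-box methods developed in this thesis; `model`, `x` and `target_class` are assumed to be supplied by the caller.

```python
# Minimal, generic counterfactual-explanation sketch (white-box, gradient-based).
import torch
import torch.nn.functional as F

def counterfactual(model, x, target_class, steps=200, lr=0.05, dist_weight=0.1):
    """Perturb input x until `model` predicts `target_class`, keeping x_cf close to x."""
    x_cf = x.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x_cf)
        # Validity term (reach the target class) plus proximity term (stay near the query).
        loss = F.cross_entropy(logits, target_class) + dist_weight * (x_cf - x).pow(2).mean()
        loss.backward()
        optimizer.step()
    return x_cf.detach()
```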
2

Kumar, Rakesh. "Holistic design for multi-core architectures". Diss., University of California, San Diego, 2006. Access restricted to UC campuses. http://wwwlib.umi.com/cr/ucsd/fullcit?p3222991.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2006.
Title from first page of PDF file (viewed September 20, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 182-193).
3

Poyias, Kyriakos. "Design-by-contract for software architectures". Thesis, University of Leicester, 2014. http://hdl.handle.net/2381/28924.

Abstract:
We propose a design-by-contract (DbC) approach to specify and maintain architectural-level properties of software. Such properties are typically relevant in the design phase of the development cycle but may also impact the execution of systems. We give a formal framework for specifying software architectures (and their refinements) together with contracts that architectural configurations abide by. In our framework, we can specify that if an architecture guarantees a given pre-condition and a refinement rule satisfies a given contract, then the refined architecture will enjoy a given post-condition. Methodologically, we take Architectural Design Rewriting (ADR) as our architectural description language. ADR is a rule-based formal framework for modelling (the evolution of) software architectures. We equip the reconfiguration rules of an ADR architecture with pre- and post-conditions expressed in a simple logic; a pre-condition constrains the applicability of a rule while a post-condition specifies the properties expected of the resulting graphs. We give an algorithm to compute the weakest precondition out of a rule and its post-condition. Furthermore, we propose a monitoring mechanism for recording the evolution of systems after certain computations, maintaining the history in a tree-like structure. The hierarchical nature of ADR allows us to take full advantage of the tree-like structure of the monitoring mechanism. We exploit this mechanism to formally define new rewriting mechanisms for ADR reconfiguration rules. Also, by monitoring the evolution we propose a way of identifying which part of a system has been affected when unexpected run-time behaviours emerge. Moreover, we propose a methodology that allows us to select which rules can be applied at the architectural level to reconfigure a system so as to regain its architectural style when it becomes compromised by unexpected run-time reconfigurations.
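The abstract mentions computing the weakest precondition of a rule from its post-condition. As a reminder of the classical idea only (Dijkstra-style predicate transformers for assignments, not the ADR-specific construction of the thesis), here is a toy sketch using substitution:

```python
# Toy weakest-precondition computation: wp(x := e, Q) = Q[e/x].
# The thesis lifts this idea to ADR reconfiguration rules and graph post-conditions,
# which this assignment-only example does not model.
import sympy as sp

def wp_assign(var, expr, post):
    """Weakest precondition of the assignment `var := expr` w.r.t. post-condition `post`."""
    return post.subs(var, expr)

x, y = sp.symbols("x y")
post = sp.Gt(x + y, 10)           # post-condition: x + y > 10
pre = wp_assign(x, x + 1, post)   # wp(x := x + 1, Q)  ->  x + y + 1 > 10
print(pre)
```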
4

Shao, Yakun. "Design and Modeling of Specialized Architectures". Thesis, Harvard University, 2016. http://nrs.harvard.edu/urn-3:HUL.InstRepos:33493560.

Abstract:
Hardware acceleration in the form of customized datapath and control circuitry tuned to specific applications has gained popularity for its promise to utilize transistors more efficiently. However, architectural research in the area of specialized architectures is still in its preliminary stages. A major obstacle for such research has been the lack of an architecture-level infrastructure that analyzes and quantifies the benefits and trade-offs across different design options. Existing accelerator design primarily relies on creating Register-Transfer Level (RTL) implementations, a tedious and time-consuming process, making early-stage design space exploration for specialized architecture designs infeasible. This dissertation presents the case for early-stage, architecture-level design methodologies in specialized architecture design and modeling. Starting with workload characterization, the proposed ISA-independent workload characterization approach demonstrates its capability to capture an application's intrinsic characteristics without being biased by micro-architecture and ISA artifacts. Moreover, to speed up the accelerator design process, this dissertation presents a new modeling methodology for quickly and accurately estimating accelerator power, performance, and area without RTL generation. Aladdin, as a working example of this methodology, is 100× faster than existing RTL-based simulation, and yet maintains accuracy within 7% of RTL implementations. Finally, accelerators are only part of the entire System on a Chip (SoC). To accurately capture the interactions across CPUs, accelerators, and shared resources, we developed an integrated SoC simulator based on Aladdin to enable system architects to study system-level ramifications of accelerator integration. The techniques presented in this thesis demonstrate some initial steps towards early-stage, architecture-level infrastructures for specialized architectures. We hope that this work, and the other research in the area of accelerator modeling and design, will open up the field of specialized architectures to a wider range of researchers, unlocking new opportunities for efficient accelerator design.
Engineering and Applied Sciences - Computer Science
5

Davies, Daniel. "Representation of multiple engineering viewpoints in Computer Aided Design through computer-interpretable descriptive markup". Thesis, University of Bath, 2008. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.488893.

Abstract:
The aim of this work was to find a way of representing multiple interpretations of a product design with the same CAD model, in a way that allowed the manual work of producing viewpoint-specific models of the product to be reduced through automation. The approach presented is the recording of multiple viewpoint interpretations of a product design with a CAD product model using descriptive, by-reference (stand-off), computer-interpretable markup of the model.
6

Ippolito, Corey A. "Software architectures for flight simulation". Thesis, Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/15749.

7

Nuzman, Joseph. "Memory subsystem design for explicit multithreading architectures". College Park, Md. : University of Maryland, 2003. http://hdl.handle.net/1903/146.

Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2003.
Thesis research directed by: Dept. of Electrical and Computer Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
8

Smith, Richard Bartlett. "Design and integrity of deterministic system architectures". Thesis, University College London (University of London), 2007. http://discovery.ucl.ac.uk/1445115/.

Abstract:
Architectures represented by system construction 'building block' components and interrelationships provide the structural form. This thesis addresses processes, procedures and methods that support system design synthesis and specifically the determination of the integrity of candidate architectural structures. Particular emphasis is given to the structural representation of system architectures, their consistency and functional quantification. It is a design imperative that a hierarchically decomposed structure maintains compatibility and consistency between the functional and realisation solutions. Complex systems are normally simplified by the use of hierarchical decomposition so that lower level components are precisely defined and simpler than higher-level components. To enable such systems to be reconstructed from their components, the hierarchical construction must provide vertical intra-relationship consistency, horizontal interrelationship consistency, and inter-component functional consistency. Firstly, a modified process design model is proposed that incorporates the generic structural representation of system architectures. Secondly, a system architecture design knowledge domain is proposed that enables viewpoint evaluations to be aggregated into a coherent set of domains that are both necessary and sufficient to determine the integrity of system architectures. Thirdly, four methods of structural analysis are proposed to assure the integrity of the architecture. The first enables the structural compatibility between the 'building blocks' that provide the emergent functional properties and implementation solution properties to be determined. The second enables the compatibility of the functional causality structure and the implementation causality structure to be determined. The third method provides a graphical representation of architectural structures. The fourth method uses the graphical form of structural representation to provide a technique that enables quantitative estimation of performance estimates of emergent properties for large scale or complex architectural structures. These methods have been combined into a procedure of formal design. This is a design process that, if rigorously executed, meets the requirements for reconstructability.
9

Roomi, Akeel S. "Multiprocessor computer architectures : algorithmic design and applications". Thesis, Loughborough University, 1989. https://dspace.lboro.ac.uk/2134/10872.

Abstract:
The contents of this thesis are concerned with the implementation of parallel algorithms for solving partial differential equations (PDEs) by the Alternating Group Explicit (AGE) method and an investigation into the numerical inversion of the Laplace transform on the Balance 8000 MIMD system. Parallel computer architectures are introduced, with different types of existing parallel computers, including the Data-Flow computer and VLSI technology, described from both the hardware and implementation points of view. The main characteristics of the Sequent parallel computer system at Loughborough University are presented, and performance indicators, i.e., the speed-up and efficiency factors, are defined for the measurement of parallelism in the system. Basic ideas of programming such computers are also outlined...
10

Dasgupta, Sohini. "Formal design and synthesis of GALS architectures". Thesis, University of Newcastle Upon Tyne, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.446196.

11

Lee, Andrew Sui Tin. "Design of future optical ring network architectures". Thesis, University of Strathclyde, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.415308.

12

Ibrahim, Jihad E. "Algorithms and Architectures for UWB Receiver Design". Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/26105.

Abstract:
Impulse-based Ultra Wideband (UWB) radio technology has recently gained significant research attention for various indoor ranging, sensing and communications applications due to the large amount of allocated bandwidth and desirable properties of UWB signals (e.g., improved timing resolution or multipath fading mitigation). However, most of the applications have focused on indoor environments where the UWB channel is characterized by tens to hundreds of resolvable multipath components. Such environments introduce tremendous complexity challenges to traditional radio designs in terms of signal detection and synchronization. Additionally, the extremely wide bandwidth and shared nature of the medium means that UWB receivers must contend with a variety of interference sources. Traditional interference mitigation techniques are not amenable to UWB due to the complexity of straight-forward translations to UWB bandwidths. Thus, signal detection, synchronization and interference mitigation are open research issues that must be met in order to exploit the potential benefits of UWB systems. This thesis seeks to address each of these three challenges by first examining and accurately characterizing common approaches borrowed from spread spectrum and then proposing new methods which provide an improved trade-off between complexity and performance.
Ph. D.
13

BOZZOLI, LUDOVICA. "New Design Techniques for Dynamic Reconfigurable Architectures". Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2934684.

14

Ismailoglu, Ayse Neslin. "Asynchronous Design of Systolic Array Architectures in CMOS". PhD thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609443/index.pdf.

Abstract:
In this study, a delay-insensitive asynchronous circuit design style has been applied to systolic array architectures to exploit the benefits of both techniques for improved throughput. A delay-insensitivity verification analysis method employing symbolic delays is proposed for bit-level pipelined asynchronous circuits. The proposed verification method allows data-dependent early output evaluation to co-exist with robust delay-insensitive circuit behavior in pipelined architectures such as systolic arrays. Regardless of the length of the pipeline, delay-insensitivity verification of a systolic array with early output evaluation paths in one dimension is reduced to the analysis of three adjacent systoles for eight possible early/late output evaluation scenarios. Analyzing both combinational and sequential parts concurrently, delay-insensitivity violations are located and corrected at the structural level, without diminishing the early output evaluation benefits. Since symbolic delays are used without imposing any timing constraints on the environment, the method is technology independent and robust against all physical and environmental variations. To demonstrate the verification method, adders are selected for being at the core of data processing systems. Two asynchronous adder topologies in the delay-insensitive dual-rail threshold logic style, having data-dependent early carry evaluation paths, are converted into bit-level pipelined systolic arrays. On these adders, data-dependent delay-insensitivity violations are detected and resolved using the proposed verification technique. The modified adders achieved the targeted O(log2 n) average completion time and, as a result of bit-level pipelining, nearly constant throughput against increased bit-length. The delay-insensitivity verification method could further be extended to handle more early output evaluation paths in multiple dimensions.
15

Schlutz, Jürgen [Verfasser]. "Conceptual Design of Lunar Exploration Architectures / Jürgen Schlutz". München : Verlag Dr. Hut, 2012. http://d-nb.info/1022535749/34.

16

McGovern, Brian Patrick. "The systematic design of VLSI signal processing architectures". Thesis, Queen's University Belfast, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333841.

17

Perreault, David John. "Design and evaluation of cellular power converter architectures". Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/10452.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997.
Includes bibliographical references (p. 155-160).
by David John Perreault.
Ph.D.
18

Ahmed, Mohamed Hassan Abouelella. "Power Architectures and Design for Next Generation Microprocessors". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/103175.

Abstract:
With the rapid increase of cloud computing and the high demand for digital content, it is estimated that the power consumption of the IT industry will reach 10 % of the total electric power in the USA by 2020. Multi-core processors (CPUs) and graphics processing units (GPUs) are the key elements in fulfilling all of the digital content requirements, but come with a price of more power-hungry processors, driving the power per server rack to 20 kW levels. The need for more efficient power management solutions, from the architecture level down to the converter level, is inevitable. Recently, data centers have replaced the 12V DC server rack distribution with a 48V DC distribution, producing a significant overall system efficiency improvement. However, the 48V rack architecture raises significant challenges for the voltage regulator modules (VRMs) required for powering the processor. The 48V VRM in the vicinity of the CPU needs to be designed with very high efficiency, high power density and high light-load efficiency, as well as meet all transient requirements of the CPU and GPU. Transferring the well-developed multi-phase buck converter used in the 12V VRM to the 48V distribution platform is not that simple. The buck converter operating from 48V, stepping down to sub-2V, will be subjected to significant switching-related loss, resulting in lower overall system efficiency. These challenges drive the need to look for more efficient architectures for 48V VRM solutions. Two-stage conversion can help solve the design challenges for 48V VRMs. A first-stage unregulated converter is used to step down the 48V to a specific intermediate bus voltage. This voltage feeds a multi-phase buck converter that powers the CPU. An unregulated LLC converter is used for the first-stage converter, with zero voltage switching (ZVS) operation for the primary side switches, and zero current switching (ZCS) along with ZVS operation for the secondary side synchronous rectifiers (SRs). The LLC converter can operate at high frequency, in order to reduce the magnetic component size, while achieving high efficiency. The high-efficiency first stage, along with the scalability and high-bandwidth control of the second stage, allows this architecture to achieve high efficiency and power density. This architecture is simpler for industry to adopt, by plugging the unregulated converter in front of the existing multi-phase buck converters on today's platforms. The first challenge for this architecture is the transformer design of the first-stage LLC converter. It must avoid all of the loss associated with high-frequency operation, and still achieve high power density without sacrificing efficiency. In this thesis, the integrated matrix transformer structure is optimized by SR integration with windings, interleaved primary side termination, and a better PCB winding arrangement, to achieve high efficiency and power density and to minimize the losses associated with high-frequency operation. The second challenge is light-load efficiency improvement. In this thesis, a light-load efficiency improvement is proposed through a dynamic change of the intermediate bus voltage, resulting in more than 8 % light-load efficiency improvement. The third challenge is the selection of the optimal bus voltage for the two-stage architecture. The impact of different bus voltages was analyzed in order to maximize the overall conversion efficiency.
Multiple 48V unregulated converters were designed with maximum efficiency >98 %, and power densities >1000 W/in3, with different output voltages, to select the optimal bus voltage for the two-stage VRM. Although the two-stage VRM is more scalable and simpler to design and adopt by current industry, the efficiency will reduce as full power flows in two cascaded DC/DC converters. Single-stage conversion can achieve higher-efficiency and power-density. In this thesis, a quasi-parallel Sigma converter is proposed for the 48V VRM application. In this structure, the power is shared between two converters, resulting in higher conversion efficiency. With the aid of an optimized integrated magnetic design, a Sigma converter suitable for narrow voltage range applications was designed with 420 W/in3 and a maximum efficiency of 94 %. Later, another Sigma converter suitable for wide voltage range applications was designed with 700W/in3 and a maximum efficiency of 95 %. Both designs can achieve higher efficiency than the two-stage VRM and all other state-of-art solutions. The challenges associated with the Sigma converter, such as startup and closed loop control were addressed, in order to make it a viable solution for the VRM application. The 48V rack architecture requires regulated 12V output converters for various loads. In this thesis, a regulated LLC is used to design a high-efficiency and power-density 48V bus converter. A novel integration method of the inductor and transformer helps the LLC achieve the required regulation capability with minimum losses, resulting in a converter that can provide 1KW of continuous power with efficiency of 97.8 % and 700 W/in3 power density. This dissertation discusses new power architectures with an optimized design for the 48V rack architectures. With the academic contributions in this dissertation, different conversion architectures can be utilized for 48V VRM solutions that solve all of the challenges associated with it, such as scalability, high-efficiency, high density, and high BW control.
Doctor of Philosophy
With the rapid increase of cloud computing and the high demand for digital content, it is estimated that the power consumption of the IT industry will reach 10 % of the total electric power in the USA by 2020. Multi-core processors (CPUs) and graphics processing units (GPUs) are the key elements in fulfilling all of the digital content requirements but come with a price of more power-hungry processors, driving the power per server rack to 20 KW levels. The need for more efficient power management solutions on the architecture level, down to the converter level, is inevitable. The data center manufacturers have recently adopted a more efficient architecture that supplies a 48V DC server rack distribution instead of a 12V DC distribution to the server motherboard. This helped reduce costs and losses, but as a consequence, raised a challenge in the design of the DC/DC voltage regulator modules (VRM) supplied by the 48V, in order to power the CPU and GPU. In this work, different architectures will be explored for the 48V VRM, and the trade-off between them will be evaluated. The main target is to design the VRM with very high-efficiency and high-power density to reduce the cost and size of the CPU/GPU motherboards. First, a two-stage power conversion structure will be used. The benefit of this structure is that it relies on existing technology using the 12V VRM for powering the CPU. The only modification required is the addition of another converter to step the 48V to the 12V level. This architecture can be easily adopted by industry, with only small modifications required on the system design level. Secondly, a single-stage power conversion structure is proposed that achieves higher efficiency and power density compared to the two-stage approach; however, the structure is very challenging to design and to meet all requirements by the CPU/GPU applications. All of these challenges will be addressed and solved in this work. The proposed architectures will be designed using an optimized magnetic structure. These structures achieve very high efficiency and power density in their designed architectures, compared to state-of-art solutions. In addition, they can be easily manufactured using automated manufacturing processes.
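A back-of-the-envelope note on why the single-stage (Sigma) path discussed above can win: cascading two converter stages multiplies their efficiencies. The sketch below illustrates this with assumed stage numbers (the 98 % first-stage figure echoes the abstract; the buck-stage figure is an illustrative assumption, not a measured value from the dissertation).

```python
# Illustrative two-stage efficiency arithmetic for a 48 V VRM, assumed values.
eta_stage1 = 0.98   # unregulated 48 V -> intermediate bus (LLC), per the abstract
eta_stage2 = 0.92   # multi-phase buck, bus -> ~1.8 V core rail (assumed)
eta_two_stage = eta_stage1 * eta_stage2
print(f"Two-stage end-to-end efficiency: {eta_two_stage:.1%}")  # ~90.2%
```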
19

Chan, Yi-Tsu. "Design and Construction of Metallo-Supramolecular Terpyridine Architectures". University of Akron / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=akron1285698309.

20

Liang, Heyi. "Rational Design of Soft Materials through Chemical Architectures". University of Akron / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=akron1573085345744325.

21

Teodorov, Ciprian. "Model-driven physical design for future nanoscale architectures". Brest, 2011. http://www.theses.fr/2011BRES2050.

Abstract:
In the context where the traditional CMOS technology approaches its limits, a number of nanowire-based fabric proposals have emerged, which all exhibit some common key characteristics. Among these, their bottom-up fabrication process leads to a regularity of assembly, which means the end of custom-made computational fabrics in favor of regular structures. Hence, research activities in this area focus on structures conceptually similar to today's reconfigurable PLA and/or FPGA architectures. A number of different fabrics and architectures are currently under investigation, e.g., CMOL, FPNI, NASIC. These proof-of-concept architectures take into account some fabrication constraints and support fault-tolerance techniques. What is still missing is the ability to capitalize on these experiments while offering a one-stop shopping point for further research, especially at the physical-design level of the circuit design tool-flow. Sharing metrics, tools, and exploration capabilities is the next challenge for the nano-computing community. We address this problem by proposing a model-driven physical-design toolkit based on the factorization of common domain-specific concepts and the reification of the tool-flow. We used this tool-flow to drive the design-space exploration in the context of a novel nanoscale architecture, and we showed that such an approach assures design convergence based on frequent quantitative evaluations; moreover, it enables incremental evolution of the architecture and the automation flow.
22

Stonor, Andrew James. "The design of coinage metal and pnictogen architectures". Thesis, University of Bristol, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.683732.

Abstract:
A series of experiments were designed with the dual goals of, firstly, accessing new pnictogen ring and cage architectures suitable for use as antiwear additives in industrial lubricants and, secondly, synthesising novel coinage metal complexes utilising both organic and inorganic ligands. Several asymmetrically and symmetrically substituted cyclodiphosphazanes were synthesised and in the majority of cases the cisoid or transoid isomerism was determined in solution by 31P NMR spectroscopy. Moreover, in three cases the molecular structures of cyclodiphosphazanes were examined using X-ray crystallography. Exo,exo-diamino- and amido-phosphorus sesquisulfide compounds were also characterised by 31P NMR spectroscopy, which revealed phosphorus sesquisulfide itself as a decomposition product too. 31P NMR spectroscopy was also used to identify the presence of two new metal-phosphorus sesquisulfide coordination species in which the inorganic cage acts as a ditopic ligand via coordination from phosphorus and a rare example of sulfur coordination. Selected cyclodiphosphazanes and phosphorus sulfides were also blended with industrial lubricant (Group I base oil) and subjected to antiwear and friction-modifying tests. Furthermore, the structures of fifteen organometallic group 11 metal complexes using alkenes, dienes, trienes, arenes, alkynes and isonitriles as ligands were elucidated using X-ray crystallography, the parameters of which were compared to analogous and similar examples reported in the literature. These compounds were further characterised by multinuclear NMR spectroscopy, IR spectroscopy and elemental analysis, the former of which provided evidence for the behaviour of the complexes in solution.
23

Sun, Luo. "Design and process variation analysis of SRAM architectures". Thesis, University of Bristol, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.684909.

Abstract:
Future memory subsystems will have to achieve high performance, low power, high reliability, small size and high robustness under process variation. Moreover, it should be possible to build them with reasonable yield. Thus, design methodologies are needed to enhance the behavior and yield of these systems. The effects of increasing process variation on design metrics such as performance and power consumption also need to be evaluated. This dissertation addresses these problems. First, it proposes a novel SRAM bitcell design based on a promising technology, the carbon nanotube field-effect transistor (CNTFET). This CNTFET-based SRAM design offers better stability, lower power dissipation and better process-variation tolerance, compared with CMOS-based and other CNTFET-based SRAM bitcells. However, carbon nanotubes (CNTs) can be either semi-conductive or metallic during fabrication, so CNTFETs suffer from short-circuit unreliability due to the metallic path between the source and drain. Therefore, metallic-CNT-tolerant techniques are applied to the proposed SRAM design to improve the probability of functional SRAM cells. The structure of the CNTFET SRAM design with metallic CNT tolerance is evaluated and compared to the original CNTFET-based SRAM bitcell. The low-power binary tree based SRAM architecture (LPSRAM) is then presented. This is a methodology for future multi-gigabit SRAM designs so that they can obtain high performance, low power and high robustness at the expense of a reasonable area overhead. Analytical models are developed to evaluate the performance, power and cost of this structure. Empirical simulations are used to verify the proposed LPSRAM analytical models. The results show that the maximum relative model error is within 8%. Moreover, future SRAM designs need to be easily testable. LPSRAM shows great potential for testability. A testing algorithm and built-in testing structure (BITS) are developed for the testable LPSRAM architecture. A reduction in testing time and power can be obtained by the proposed architecture. The performance of IC designs is becoming more sensitive to process variation as technology continues to scale down to nanometer levels. The statistical blockade (SB) approach has recently been proposed to study the impact of process variation and to identify rare events, especially for highly replicated circuits such as SRAMs. Nevertheless, the classification threshold and the training sample size are key problems that can cause imprecise yield estimation and longer runtime. Two improved SB algorithms are proposed to address these issues. The experimental results show that high speed can be achieved with high accuracy. A novel variability evaluation approach is then developed based on the enhanced statistical blockade method. Only the tail part of the distribution is used to evaluate the design robustness under process variation, thus saving time. Four SRAM cells in different logic styles are used to verify the effectiveness of the approach in the experiments. The results show that our method is faster than traditional estimation approaches. In summary, this dissertation reports on advanced SRAM structures at both the circuit and architecture levels. A fast and accurate method to analyze yield and variability has been presented for highly replicated SRAMs under process variation.
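For orientation, the sketch below shows the basic statistical-blockade idea the abstract builds on: train a cheap classifier on a small Monte Carlo run to "block" samples unlikely to fall in the distribution tail, then run the expensive simulation only on the unblocked candidates. The simulator here is a stand-in function and the thresholds are illustrative assumptions, not the thesis's improved algorithms.

```python
# Minimal statistical-blockade sketch with a toy "simulator" in place of SPICE.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def expensive_sim(x):
    # Stand-in for a circuit-level SRAM metric evaluated under process variation x.
    return x @ np.array([0.6, -0.3, 0.8])

# 1. Small Monte Carlo run to train the blockade filter.
X_train = rng.standard_normal((2000, 3))
y_train = np.array([expensive_sim(x) for x in X_train])
tail_threshold = np.quantile(y_train, 0.99)      # rare events of interest
relaxed_threshold = np.quantile(y_train, 0.95)   # conservative (relaxed) filter threshold
clf = SVC(kernel="rbf").fit(X_train, y_train > relaxed_threshold)

# 2. Large candidate set: simulate only the points the filter does not block.
X_big = rng.standard_normal((100_000, 3))
candidates = X_big[clf.predict(X_big)]
sims = np.array([expensive_sim(x) for x in candidates])
n_tail = int((sims > tail_threshold).sum())
print(f"simulated {len(candidates)} of {len(X_big)} points, found {n_tail} tail events")
```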
24

Layton, Kent D. "Low-voltage analog CMOS architectures and design methods". Diss., Brigham Young University, 2007. http://contentdm.lib.byu.edu/ETD/image/etd2141.pdf.

25

Layton, Kent Downing. "Low-Voltage Analog CMOS Architectures and Design Methods". BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/1218.

Abstract:
This dissertation develops design methods and architectures which allow analog circuits to operate at VT + 2Vds,sat, the minimum supply for CMOS circuits with all transistors in the active region where Vds,sat is the drain to source saturation voltage of a MOS transistor. Techniques which meet this criteria for rail-to-rail input stages, gain enhancement stages, and output stages are discussed and developed. These techniques are used to design four fully-differential rail-to-rail amplifiers. The highest gain is shown to be attained using a drain voltage equalization (DVE) or active-bootstrapping technique which produces more than 100dB of gain in a two stage amplifier with a bulk-driven input pair while showing no bandwidth degradation when compared to amplifier architectures with similar biasing. The low voltage design techniques are extended to switching and sampling circuits. A 10-bit digital to analog converter (DAC) and a 10-bit analog to digital converter (ADC) are designed and fabricated in a 0.35um dual-well CMOS process to prove the developed design methods, architectures, and techniques. The 10-bit DAC operates at 1MSPS with near rail-to-rail differential output operation with a 700mV supply voltage. This supply voltage, which is 150mV lower than the VT+2Vds,sat limit, is attained by using a bulk driven threshold voltage lowering technique. The ADC design is a fully-differential pipelined 10-bit converter that operates at 500kSPS with a full scale input range equal to the supply voltage and can operate at supply voltages as low as 650mV, 200mV below the VT + 2Vds,sat limit. The design methods and architectures can be used in advanced processes to maintain gain and minimize supply voltage. These designs show a minimum supply improvement over previously published designs and prove the efficacy of the design architectures and techniques presented in this dissertation.
26

Baroncini, Massimo <1979&gt. "Design, Synthesis and Characterization of new Supramolecular Architectures". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2010. http://amsdottorato.unibo.it/2943/1/Baroncini_Massimo_Tesi.pdf.

27

Baroncini, Massimo <1979&gt. "Design, Synthesis and Characterization of new Supramolecular Architectures". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2010. http://amsdottorato.unibo.it/2943/.

28

BOLLO, MATTEO. "Architectures and Design Methodologies for Micro and Nanocomputing". Doctoral thesis, Politecnico di Torino, 2017. http://hdl.handle.net/11583/2679368.

Abstract:
For many years, transistor placement was not limited by interconnections; now, the Digital Integrated Circuits market is changing from a situation where CMOS technology was the reference (the microelectronics era) to a plurality of emerging technologies (the nanoelectronics era). The costs of the optical photolithography needed to produce recent CMOS technologies are increasing to such an extent as to make the exploration of alternative nanoelectronic solutions interesting. These technologies are called beyond-CMOS technologies. Among the application fields, Information Security is one of the most pioneering rising markets: several application areas need to ensure confidentiality and authenticity of data through cryptographic solutions. In some cases, cryptographic primitives can benefit from strong hardware acceleration. Unfortunately, CMOS-based systems are vulnerable to ageing factors, such as electromigration, which decreases the reliability of certain security properties, and to side-channel attacks, where an attacker can retrieve confidential information by observing the power consumption. In this scenario, it is possible to speculate that emerging nanotechnologies can be exploited to cover the gaps left uncovered by CMOS technologies. A sub-class of circuits based on this novel technological approach are called Dynamically-Coupled Systems (DCS). These novel technologies go beyond the transistor-interconnection dichotomy: elaboration, storage and transmission functions are all performed by the same device. DCS building blocks are called Processing Elements (PE). Interconnections are replaced with PE chains that are intrinsically pipelined. Ideally, it is possible to divide DCS technologies into two conventional sub-classes: Electrical-Coupled Technologies (ECT), where information propagation is due to electron flow across ohmic paths, and Field-Coupled Technologies (FCT). In FCT both the propagation and the elaboration of data depend on the electromagnetic field interactions (coupling) among PEs. An interesting possibility is offered by the use of single-domain nanomagnets. Rectangular-shaped magnets, with sizes smaller than 100nm, demonstrate a bistable behavior. This principle has been exploited to design digital logic circuits, leading to the so-called NanoMagnet Logic. The energy landscape of a single-domain nanomagnet has two minima corresponding to the magnetization vector aligned along the longer magnet side. If an ideal magnet is forced into the metastable state (i.e. the peak in the energy landscape), the probability that the magnet will reach one of the two stable states is exactly 0.5. The presented work proposes a plurality of development environments for studying and designing Dynamically-Coupled Nanocomputing digital circuits, based on both a bottom-up approach for "fast prototyping" and a top-down one for complex circuit development. By means of the presented tools, it has been possible to study a series of Arithmetic and Cryptographic architectures, to perform circuit performance analysis and to highlight how Modular Arithmetic offers a substantial contribution to the interconnection overhead issue of Field-Coupled Nanotechnologies.
29

COCCIA, ARIANNA. "Design of Wideband Architectures for Modern Communication Standards". Doctoral thesis, Università degli studi di Pavia, 2019. http://hdl.handle.net/11571/1243908.

Abstract:
Modern communication standards demand wideband solutions, with increasing channel bandwidths. At the same time, massive MIMO is quickly approaching, asking for low-cost and low-power architectures. This thesis is focused on the design of RF blocks suitable for state-of-the-art applications. The work is divided into two parts. In the first, an innovative transmitter architecture based on a current-mode passive mixer topology with a closed-loop RF amplifier is presented. The second part deals with the analysis and design of a broadband, inductorless, noise-cancelling low-noise amplifier. Measurement (TX) and simulation (TX and LNTA) results are reported to validate the proposed designs.
30

Ramaratnam, Rajiv. "An analysis of service oriented architectures". Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/42372.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, System Design and Management Program, 2007.
Leaf 96 blank.
Includes bibliographical references (leaves 88-95).
Introduction: Corporations all across the globe and of various sizes rely on their IT systems for business processes, reduction of process lifecycle and management of resources. These systems house several applications for internal sales, purchasing, finance, HR and so on. In any such typical organization, IT systems are a heterogeneous mix of hardware, operating systems and applications. Many of these applications run on different operating systems like Windows, Linux and Unix, etc. Oftentimes there is a need to consolidate data or access data from several such systems. The diversity among systems and applications makes these tasks difficult, time consuming and tedious. Furthermore, there is also a need for synchronization of data across systems and applications to ensure that the data is accurate and up-to-date. The heterogeneous nature of systems and applications leads to high levels of redundancy of data, making data maintenance a huge overhead for organizations. Today's organizations must also adapt to changes in environments both external and internal to them. Such changes could be changing market conditions, reorganization, changes in business strategies within a company, adding or changing suppliers, partnerships, mergers and acquisitions and so on. There is also a growing need for integration across enterprise boundaries to facilitate fast and seamless collaboration between partners, customers and suppliers. All such needs entail timely changes to existing IT systems within an organization. There is thus a growing need for integration of such systems for consolidated decision making, accurate, up-to-date system information, better performance and system monitoring. IT systems must also be flexible to respond to changes in the environments of their organization. Enterprise Application Integration is a process that aims to bring about such integration. The need for integration goes beyond the boundaries of an enterprise. Further, to successfully compete today, businesses need to be flexible. This means that their IT systems need to be able to keep pace with dynamic business conditions. It is evident that for multiple IT systems to integrate with each other and provide flexibility, they must be able to communicate and coordinate activities in a standard way. For almost two decades, many companies have tried to use CORBA, DCOM and similar technologies but have had little success. None of these technologies, for many reasons, has become a global technology. The arrival of standards like HTTP and HTML helped link together millions of humans across the internet but proved inadequate to link together computer systems. Moreover, internal and cross-enterprise integration and coordination bring with them security implications, as both involve information exchange between organizational entities. As we will see later, the traditional methods of securing applications with firewalls prove inadequate for application security. One insight that has come from failed attempts to consolidate and coordinate IT systems is that such efforts cannot be limited to IT alone. Decisions on how interdepartmental and inter-enterprise data must be exchanged must be made by leaders and opinion shapers at each level or division of the organization.
It is the goals of internal and external enterprise integration, flexibility of business processes, and enterprise data security that have led more and more organizations to adopt Service Oriented Architectures (SOA). The adoption, implementation and running of an SOA do not simply involve IT department heads designing and creating a new architecture for the enterprise. It involves a holistic understanding of the nature of the entire enterprise, its business and internal processes, the corporate strategy for the enterprise, and an understanding of the business processes of its partners, suppliers, subsidiaries, etc. Such an undertaking is beyond the scope of a single department or division of the enterprise. The creation and running of an SOA thus involves the coordination of all parts of the enterprise.
by Rajiv Ramaratnam.
S.M.
31

Zheng, Ning. "Discovering interpretable topics in free-style text: diagnostics, rare topics, and topic supervision". Columbus, Ohio : Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1199237529.

32

de Tenorio, Cyril. "Methods for collaborative conceptual design of aircraft power architectures". Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34818.

Abstract:
This thesis proposes an advanced architecting methodology. This methodology allows for the sizing and optimization of aircraft system architecture concepts and the establishment of subsystem development strategies. The process is implemented by an architecting team composed of subsystem experts and architects. The methodology organizes the architecture definition using the SysML language. Using meta-modeling techniques, this definition is translated into an analysis model which automatically integrates subsystem analyses in a fashion that represents the specific architecture concept described by the team. The resulting analysis automatically sizes the subsystems composing it, synthesizes their information to derive architecture-level performance and explores the architecture internal trade-offs. This process is facilitated using the Coordinated Optimization method proposed in this dissertation. This method proposes a multi-level optimization setup. An architecture-level optimizer orchestrates the subsystem sizing optimizations in order to optimize the aircraft as whole. The methodologies proposed in this thesis are tested and demonstrated on a proof of concept based on the exploration of turbo-electric propulsion aircraft concepts.
33

Albarello, Nicolas. "Model-based trade studies in systems architectures design phases". Thesis, Châtenay-Malabry, Ecole centrale de Paris, 2012. http://www.theses.fr/2012ECAP0052/document.

Abstract:
The design of system architectures is a complex task which involves major stakes. During this activity, system designers must create design alternatives and compare them in order to select the most relevant system architecture given a set of criteria. In order to investigate different alternatives, designers must generally limit their trade studies to a small portion of the design space, which can be composed of a huge number of solutions. Traditionally, the architecture design process is mainly driven by engineering judgment and designers' experience, and the selected alternatives are often adapted versions of known solutions. The risk is then to select a pertinent yet suboptimal solution. In order to increase confidence in the optimality of the selected solution, the coverage of the design space must be increased. The use of computational design synthesis methods has proved to be an efficient way to support designers in the design of engineering artifacts (structures, electrical circuits...). In order to assist system designers during the architecture design process, a computational method for complex systems is defined. This method uses an evolutionary approach (genetic algorithms) to guide the design-space exploration process toward optimal zones. The initial population of the genetic algorithm is created by a computational design synthesis technique which makes it possible to create different physical architectures and allocation mappings for a given functional architecture. The method yields the optimal solutions of the stated design problem. These solutions can then be used by designers for more detailed trade studies or for technical negotiations with system suppliers.
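As a pointer to the kind of evolutionary loop the abstract describes, the sketch below runs a plain genetic algorithm over bit-string "architecture" encodings. The fitness function is a toy stand-in: the thesis couples such a loop with a computational design-synthesis step and real system models, which are not reproduced here.

```python
# Minimal genetic-algorithm sketch for design-space exploration (toy fitness).
import random

def fitness(candidate):
    # Stand-in objective; a real setting would evaluate a candidate physical
    # architecture / allocation mapping against the design criteria.
    return sum(candidate)

def evolve(pop_size=20, genome_len=16, generations=50, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Binary tournament selection
            a, b = random.sample(population, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        for _ in range(pop_size):
            p1, p2 = select(), select()
            cut = random.randint(1, genome_len - 1)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < mutation_rate else g for g in child]  # mutation
            children.append(child)
        population = children
    return max(population, key=fitness)

print(evolve())
```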
34

Allen, James D. "System level design issues for high performance SIMD architectures". Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/15406.

35

Williams, Owen Ricardo. "Kinematics and design of robotic manipulators with complex architectures". Thesis, McGill University, 1989. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=61790.

36

Morrow, Joseph M. (Joseph Monroe). "Design and analysis of lunar lander control system architectures". Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/76167.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 153-157).
Although a great deal of separate work exists on the development of spacecraft actuators and control algorithm design, less work exists which examines the connections between the selection of specific actuator types and placements, how this affects control algorithm design, and how these combined factors affect the overall vehicle performance of a lunar lander. This thesis attempts to address these issues by combining a functionality-oriented approach to actuator type/placement with a controls-oriented approach to algorithm design and performance analysis. Three example control system architectures are examined for a generic autonomous 350kg lunar lander during the terminal descent flight phase. Results indicate that stability and control can be achieved using a wide variety of actuator types/placements and algorithms given that a set of 'common sense' subsystem functionality and robustness metrics are met; however, algorithm development was often heavily influenced/restricted by actuator system capabilities. It is therefore recommended that future designers of lunar lander vehicles consider the impact of their control system architectures from both a functionality-oriented and a controls-oriented approach to gain a more complete understanding of the effects of their choices on overall performance.
by Joseph M. Morrow.
S.M.
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Krieger, James David 1978. "Architectures and system design for digitally-enhanced antenna arrays". Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/87925.

Texto completo da fonte
Resumo:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 143-147).
Digital techniques have had longstanding use in both the operational control and signal processing efforts associated with phased array antennas. Fundamentally, these techniques have served to provide additional levels of convenience and performance over the fully analog counterparts, without specifically addressing the underlying design of the analog hardware aspects of the arrays. The class of digitally-enhanced hardware has recently emerged, wherein "digitally aware" design approaches are used for the purpose of alleviating the high cost and complexity of sophisticated analog devices. Emergent trends in millimeter wave and low-terahertz circuit technology are enabling the prospect of physically small, yet electrically large antenna arrays for a host of exciting new communication, radar, and imaging applications. Still, the high cost of phased arrays remains a significant bottleneck to any widespread deployment in this regard. In light of this challenge, we propose two phased array architectures for which the notion of digital awareness plays a central role in their designs.
The Dense Delta-Sigma Array: Primarily motivated by advancements in low-cost fabrication, this design concept provides the opportunity to replace the expensive RF components required to control the individual array element excitations with inexpensive phase shifter components having particularly coarse resolution (as few as 2 bits). This is made possible by increasing the number of array elements for a given aperture beyond the nominal number associated with the standard half-wavelength spacing. This approach is inspired by Delta-Sigma data converters, which employ faster-than-Nyquist sampling with low quantization resolution.
The Sparse Multi-Coset Array: This design concept exploits the sparsity commonly found in typical environments to allow for target detection and imaging with significantly fewer array elements than prescribed by conventional half-wavelength spacing. The result is a structured periodic non-uniform array composed of a number of distinct subarrays known as cosets. This approach is inspired by multi-coset sampling, for which the average sampling rate may be reduced below the Nyquist convention when the spectral components within the overall bandwidth are limited to some number of sub-bands. In this approach, we view the underlying engineering design problem as one of compressive sensing.
In this thesis, we develop and apply the underlying mathematical principles and concepts of the dense and sparse arrays, taking into account the practical constraints and issues that make the system design, analysis, and performance evaluation rich from an engineering perspective.
by James David Stone Krieger.
Ph. D.
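To make the Delta-Sigma-style trade concrete, here is an illustrative sketch (all element counts, spacings, and bit widths are assumptions for illustration, not the thesis's design) comparing a conventionally spaced array with fine phase control to a densely populated array driven by 2-bit phase shifters:

```python
# Illustrative sketch: array factor of a uniform linear array when each element's
# steering phase is quantized to a small number of bits, contrasted with denser
# element spacing over roughly the same aperture (more elements, coarser phases).
import numpy as np

def array_factor(n_elem, spacing_wl, steer_deg, phase_bits, angles_deg):
    """Normalized array-factor magnitude with phases rounded to 2**phase_bits levels."""
    k = 2 * np.pi                                    # wavenumber, positions in wavelengths
    positions = np.arange(n_elem) * spacing_wl       # element positions along the line
    ideal = -k * positions * np.sin(np.radians(steer_deg))   # ideal steering phases
    step = 2 * np.pi / (2 ** phase_bits)
    quantized = np.round(ideal / step) * step        # coarse phase-shifter settings
    theta = np.radians(angles_deg)
    # sum of element contributions over the observation angles
    af = np.exp(1j * (k * np.outer(np.sin(theta), positions) + quantized)).sum(axis=1)
    return np.abs(af) / n_elem

angles = np.linspace(-90, 90, 721)
# conventional: 16 elements at half-wavelength spacing, effectively ideal (8-bit) phases
conventional = array_factor(16, 0.5, steer_deg=20, phase_bits=8, angles_deg=angles)
# roughly the same aperture densely populated: 32 elements, only 2-bit phase shifters
dense_coarse = array_factor(32, 0.25, steer_deg=20, phase_bits=2, angles_deg=angles)
print("peak (conventional):", conventional.max().round(3))
print("peak (dense, 2-bit):", dense_coarse.max().round(3))
```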
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Ludlow, James M. III. "Design and Synthesis of Terpyridine based Metallo-Supramolecular Architectures". University of Akron / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=akron1444989836.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Kurland, Nicholas. "Design of Engineered Biomaterial Architectures Through Natural Silk Proteins". VCU Scholars Compass, 2013. http://scholarscompass.vcu.edu/etd/571.

Texto completo da fonte
Resumo:
Silk proteins have provided a source of unique and versatile building blocks in the fabrication of biomedical devices for addressing a range of applications. Critical to advancing this field is the ability to establish an understanding of these proteins in their native and engineered states as well as in developing scalable processing strategies, which can fully exploit or enhance the stability, structure, and functionality of the two constituent proteins, silk fibroin and sericin. The research outlined in this dissertation focuses on the evolution in architecture and capability of silks, to effectively position a functionally-diverse, renewable class of silk materials within the rapidly expanding field of smart biomaterials. Study of the process of building macroscopic silk fibers provides insight into the initial steps in the broader picture of silk assembly, yielding biomaterials with greatly improved attributes in the assembled state over those of protein precursors alone. Self-organization processes in silk proteins enable their aggregation into highly organized architectures through simple, physical association processes. In this work, a model is developed for the process of aqueous behavior and aggregation, and subsequent two-dimensional behavior of natural silk sericin, to enable formation of a range of distinct, complex architectures. This model is then translated to an engineered system of fibroin microparticles, demonstrating the role of similar phenomena in creating autonomously-organized structures, providing key insight into future “bottom up” assembly strategies. The aqueous behavior of the water-soluble silk sericin protein was then exploited to create biocomposites capable of enhanced response and biocompatibility, through a novel protein-template strategy. In this work, sericin was added to the biocompatible and biodegradable poly(amino acid), poly(aspartic acid), to improve its pH-dependent swelling response. This work demonstrated the production of a range of porous scaffolds capable of providing meaningful response to environmental stimuli, with applications in tissue engineering scaffolds and biosensing technologies. Finally, to expand the capabilities of silk proteins beyond process-driven parameters to directly fabricate engineered architectures, a method for silk photopatterning was explored, enabling the direct fabrication of biologically-relevant structures at the micro- and nanoscales. Using a facile bioconjugation strategy, native silk proteins could be transformed into proteins with a photoactive capacity. The well-established platform of photolithography could then be incorporated into fabrication strategies to produce a range of architectures capable of addressing spatially-directed material requirements in cell culture and further applications in the use of non-toxic, renewable biological materials.
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Lahiri, Kanishka. "On-chip communication : system-level architectures and design methodologies /". Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2003. http://wwwlib.umi.com/cr/ucsd/fullcit?p3091346.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Ancajas, Dean Michael B. "Design of Reliable and Secure Network-On-Chip Architectures". DigitalCommons@USU, 2015. https://digitalcommons.usu.edu/etd/4150.

Texto completo da fonte
Resumo:
Network-on-Chips (NoCs) have become the standard communication platform for future massively parallel systems due to their performance, flexibility and scalability advantages. However, reliability issues brought about by scaling in the sub-20nm era threaten to undermine the benefits offered by NoCs. This dissertation demonstrates design techniques that address both reliability and security issues facing modern NoC architectures. The reliability and security problem is tackled at different abstraction levels using a series of schemes that combine information from the architecture-level as well as hardware-level in order to combat aging effects and meet secure design stipulations while maintaining modest power-performance overheads.
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Ho, Phuong Minh. "Parallel architectures for solving combinatorial problems of logic design". PDXScholar, 1989. https://pdxscholar.library.pdx.edu/open_access_etds/3872.

Texto completo da fonte
Resumo:
This thesis presents a new, practical approach to solving various NP-hard combinatorial problems of logic synthesis, logic programming, graph theory, and related areas. A problem to be solved is reduced in polynomial time to one of several generic combinatorial problems that can be expressed in the form of the Generalized Propositional Formula (GPF): a Boolean product of clauses, where each clause is a sum of products of negated or non-negated literals.
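To illustrate the GPF form named in the abstract, here is a minimal sketch (the encoding, the example formula, and the brute-force search are illustrative assumptions, not the thesis's parallel architecture) of how such a formula can be represented and evaluated:

```python
# A literal is (variable_name, is_positive); a product is a list of literals;
# a clause is a sum (OR) of products; the GPF is a product (AND) of clauses.
GPF = [
    [[("a", True), ("b", False)], [("c", True)]],      # (a*~b + c)
    [[("b", True)], [("a", False), ("c", False)]],     # (b + ~a*~c)
]

def eval_gpf(gpf, assignment):
    """True iff every clause has at least one product whose literals are all satisfied."""
    return all(
        any(all(assignment[var] == positive for var, positive in product)
            for product in clause)
        for clause in gpf
    )

# Brute-force search for a satisfying assignment (exponential, as expected for NP-hard forms).
from itertools import product as cartesian
variables = sorted({var for clause in GPF for prod in clause for var, _ in prod})
for values in cartesian([False, True], repeat=len(variables)):
    assignment = dict(zip(variables, values))
    if eval_gpf(GPF, assignment):
        print("satisfying assignment:", assignment)
        break
```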
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Wu, Ziyang. "Rational design of two-dimensional architectures for efficient electrocatalysis". Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/235888/1/ziyang%2Bwu%2Bthesis%284%29.pdf.

Texto completo da fonte
Resumo:
In this thesis, the principal focus is the rational design and fabrication of two-dimensional (2D) nanoarchitectures, e.g., low-cost metal oxide nanosheets and earth-abundant transition metal layered double hydroxides (LDHs), for enhanced electrocatalysis. The measured hydrogen evolution reaction (HER) and oxygen evolution reaction (OER) performance not only demonstrated the advantages of 2D nanomaterials, such as unique physical and mechanical properties, unprecedented electronic features, and ultrahigh surface areas, but also pointed to the possible mechanisms behind the boosted activity and stability, e.g., the role of phase engineering and the influence of oxygen vacancies.
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Walker, Simon N. "High speed algorithms and architectures for RSA encryption". Thesis, University of Sheffield, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298054.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
45

LaFon, Christian (Christian Phillip). "Context characterization for synthesis of process architectures". Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/47871.

Texto completo da fonte
Resumo:
Thesis (S.M.)--Massachusetts Institute of Technology, System Design and Management Program, 2008.
Includes bibliographical references (leaves 115-118).
Analysis steps are proposed as an aid for establishing Lean Product Development (LPD) activities in an organization. The proposal is offered as an aid to engineering managers and process designers for coping with the unique challenges of implementing processes from their inception - for example, at a new enterprise. As such, the thesis focuses on the creation of LPD, as opposed to traditional Lean improvement activities, which benefit from hindsight of a legacy process. Without established product development processes to improve upon, the implementation of product development activities at a new venture relies on foresight to instantiate an LPD environment in new organizations. Therefore, the paper stresses stakeholder value delivery within the specific context in which an enterprise operates and competes. A generic framework for context characterization is proposed and discussed, and the framework is then evaluated for its usefulness in process design activities. The analysis steps are based on literature review and case study interviews, and include: a comprehensive definition of the business context in which the enterprise operates and competes; a statement of goals and objectives for the product development organization based on this context; and a determination of appropriate behaviors to meet these goals. Traditional Lean research has typically been approached from a large-scale, complex-systems, for-profit perspective. Unique insights are gained from the perspective of small, privately funded new ventures. The benefits include foresight-only value objectives for product development (process creation) and uniqueness of context (i.e., resource-limited, mindshare-driven). The analysis method was validated by examining process design case studies within three contexts: large-scale aerospace, industrial process monitoring, and high-technology start-up.
by Christian LaFon.
S.M.
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Sekar, Krishna. "Dynamically configurable system-on-chip platforms architectures and design methodologies /". Diss., Connected to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2005. http://wwwlib.umi.com/cr/ucsd/fullcit?p3190004.

Texto completo da fonte
Resumo:
Thesis (Ph. D.)--University of California, San Diego, 2005.
Title from first page of PDF file (viewed March 8, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 151-165).
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Altinigneli, Muzaffer Can. "Pipelined Design Approach To Microprocessor Architectures A Partial Implementation: Mips". Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606778/index.pdf.

Texto completo da fonte
Resumo:
This thesis demonstrates how pipelining in a RISC processor is achieved by implementing a subset of the MIPS R2000 instructions on an FPGA. Pipelining, one of the primary techniques for speeding up a microprocessor, is emphasized throughout the thesis. Pipelining is fundamentally invisible to the high-level programming language user, and this work reveals the internals of microprocessor pipelining and the potential problems encountered while implementing it. The comparative and quantitative flow of the thesis makes clear why pipelining is preferred over other possible implementation schemes. The methodology for programmable logic development and the capabilities of programmable logic devices are also given as background information. This thesis can serve as a starting point and reference for programmers who want to become familiar with microprocessors and pipelining.
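As a back-of-the-envelope illustration of the speed-up this abstract refers to (the stage names and the idealized cycle model are textbook assumptions, not taken from the thesis), the following compares cycle counts with and without ideal five-stage pipelining:

```python
# Toy comparison: total cycles for N instructions executed one at a time versus
# overlapped in the classic five MIPS stages (IF, ID, EX, MEM, WB), ignoring hazards.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def non_pipelined_cycles(n_instructions):
    return n_instructions * len(STAGES)          # each instruction occupies all stages alone

def pipelined_cycles(n_instructions):
    return len(STAGES) + (n_instructions - 1)    # fill the pipe once, then one result per cycle

for n in (1, 5, 100):
    print(f"{n:>4} instructions: "
          f"{non_pipelined_cycles(n):>4} cycles unpipelined, "
          f"{pipelined_cycles(n):>4} cycles pipelined")
```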
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

Narayanan, Sridhar. "Formal methods for reuse of design patterns and micro-architectures". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq26018.pdf.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Payan, Alexia Paule Marie-Renee. "Enabling methods for the design and optimization of detection architectures". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47688.

Texto completo da fonte
Resumo:
The surveillance of geographic borders and critical infrastructures using limited sensor capability has always been a challenging task in many homeland security applications. While geographic borders may be very long and may go through isolated areas, critical assets may be large and numerous and may be located in highly populated areas. As a result, it is virtually impossible to secure each and every mile of border around the country, and each and every critical infrastructure inside the country. Most often, a compromise must be made between the percentage of border or critical asset covered by surveillance systems and the induced cost. Although threats to homeland security may take many forms, those regarding illegal penetration of the air, land, and maritime domains under the cover of day-to-day activities have been identified as being of particular interest. For instance, the proliferation of drug smuggling, illegal immigration, international organized crime, resource exploitation, and, more recently, modern piracy requires strengthened land border and maritime awareness and creates increasingly complex and challenging national security environments. The complexity and challenges associated with the above mission and with the protection of the homeland may explain why a methodology enabling the design and optimization of distributed detection system architectures, able to provide accurate scanning of the air, land, and maritime domains in a specific geographic and climatic environment, is a central concern for the defense and protection community. This thesis proposes a methodology aimed at addressing the aforementioned gaps and challenges. The methodology reformulates the problem in clear terms so as to facilitate the subsequent modeling and simulation of potential operational scenarios. The needs and challenges involved in the proposed study are investigated, and a detailed description of a multidisciplinary strategy for the design and optimization of detection architectures in terms of detection performance and cost is provided. This implies the creation of a framework for the modeling and simulation of notional scenarios, as well as the development of improved methods for accurate optimization of detection architectures. More precisely, the present thesis describes a new approach to determining detection architectures able to provide effective coverage of a given geographical environment at minimum cost, by optimizing the appropriate number, types, and locations of surveillance and detection systems. The objective of the optimization is twofold. First, given the topography of the terrain under study, several promising locations are determined for each sensor system based on the percentage of terrain it covers. Second, architectures of sensor systems able to effectively cover large percentages of the terrain at minimal cost are determined by optimizing the number, types, and locations of each detection system in the architecture. To do so, a modified Genetic Algorithm and a modified Particle Swarm Optimization are investigated and their ability to provide consistent results is compared. Ultimately, the modified Particle Swarm Optimization algorithm is used to obtain a Pareto frontier of detection architectures able to satisfy varying customer preferences on coverage performance and related cost.
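To give a flavor of the swarm-based placement optimization described above, here is a minimal sketch (the terrain grid, sensor model, detection radius, and PSO parameters are all assumptions for illustration; coverage only is optimized and the cost dimension and the thesis's modified algorithms are not reproduced) that places a handful of identical sensors to maximize grid coverage:

```python
# Basic particle swarm optimization placing a fixed number of identical sensors on a
# 2D terrain grid so as to maximize the fraction of cells covered by at least one sensor.
import numpy as np

rng = np.random.default_rng(0)
GRID = np.array([(x, y) for x in range(20) for y in range(20)], dtype=float)  # terrain cells
N_SENSORS, RADIUS = 4, 5.0                      # assumed sensor count and detection radius

def coverage(flat_positions):
    """Fraction of terrain cells within RADIUS of at least one sensor."""
    sensors = flat_positions.reshape(N_SENSORS, 2)
    dists = np.linalg.norm(GRID[:, None, :] - sensors[None, :, :], axis=2)
    return (dists.min(axis=1) <= RADIUS).mean()

# standard PSO update: velocity blends inertia, personal best, and global best
dim, n_particles, iters = 2 * N_SENSORS, 30, 200
pos = rng.uniform(0, 19, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([coverage(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 19)              # keep sensors on the terrain
    vals = np.array([coverage(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print(f"best coverage found: {pbest_val.max():.2%}")
```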
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Banz, Christian [Verfasser]. "Design and Analysis of Architectures for Stereo Vision / Christian Banz". Aachen : Shaker, 2013. http://d-nb.info/1048457915/34.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
