To view other types of publications on this topic, follow the link: ELECTRONIC AND DESIGN AUTOMATION (EDA).

Dissertations on the topic "ELECTRONIC AND DESIGN AUTOMATION (EDA)"

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 dissertations for your research on the topic "ELECTRONIC AND DESIGN AUTOMATION (EDA)".

Next to every work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, where these are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Vila, Garcia Francesc. "From characterization strategies to PDK & EDA Tools for Printed Electronics." Doctoral thesis, Universitat Autònoma de Barcelona, 2015. http://hdl.handle.net/10803/322813.

Full text of the source
Abstract:
In recent years, Printed Electronics (PE) technologies have attracted a great deal of attention for being a low-cost, large-area electronics manufacturing alternative to traditional microelectronics. Among the available technologies, inkjet printing is of special interest because of its digital nature, which reduces the material waste generated during manufacturing, and because it is a non-contact process, which allows printing on a great variety of substrates. Inkjet printing is still under heavy development, which makes designing circuits and systems for it difficult without in-depth knowledge of the underlying manufacturing processes. In addition, there is currently a lack of tools specifically designed for inkjet, creating a large gap between designers and the technology developers and hindering wide adoption of the technology. The work presented in this thesis contributes to bridging this gap by proposing and adapting existing microelectronics-based design flows and design kits to inkjet technologies, complementing them with custom, PE-specific Electronic Design Automation (EDA) tools, in order to achieve a direct path from design to manufacturing and to abstract technology-specific details from the design stages. This is achieved by combining a design flow with a PE Process/Physical Design Kit (PDK) and a set of EDA tools adapted to PE. In addition, to finally bridge design and manufacturing, this thesis proposes a semi-automated characterization methodology, used to analyze the behavior of the deposited ink and to infer the corrections needed to ensure that the fabricated result corresponds as closely as possible to the intended design. This knowledge is then integrated into a specific EDA tool that analyzes a design and applies the extracted corrections automatically.
2

GUPTA, NITIN. "MACHINE LEARNING PREDICTIVE ANALYTIC MODEL TO REDUCE COST OF QUALITY FOR SOFTWARE PRODUCTS." Thesis, DELHI TECHNOLOGICAL UNIVERSITY, 2021. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18484.

Full text of the source
Abstract:
In today's world, high-quality products are the need of the hour; a low-quality product results in high cost. This can be explained from the cost-of-quality graph: (1) prevention cost covers issues and bugs found before deployment or delivery to the customer, a cost that is initially very low but grows over the longer run; (2) failure cost includes the cost of losing customers, root-cause analysis, and rectification, a cost that is invariably very high (Figure 11: Cost of Quality; source: https://www.researchgate.net/). If there were a mechanism to identify the expected issues at the prevention stage, the overall cost of quality could be reduced, as shown in the modified graph (Figure 12: Modified Cost of Quality; source: https://www.researchgate.net/). The Electronic Design Automation (EDA) industry is the backbone of the semiconductor industry, as it provides the software tools that aid the development of semiconductor chips; EDA tools cover the flow from specification to foundry input. Figure 13 (Tools offered by the EDA industry; source: https://en.wikipedia.org/wiki/Electronic_design_automation) maps chip design and verification onto the currently available tools and technologies. The term "tape-out" means the chip is out of the foundry and ready for use in an electronic circuit; a "re-spin" is an incident in which post-tape-out chips do not function as required and a rebuild is needed. The cost of a tape-out is at least five million dollars, and the major reason for re-spins is functionality issues; therefore the functional verification tools delivered by EDA vendors must always be of high quality. A major problem faced by a functional verification tool R&D team is predicting the number of bugs that might have been introduced during the design phase in order to sign off on completeness and quality. If these bugs can be predicted, the cost of quality (COQ) can be reduced, saving millions of dollars for the company and its customers. Machine learning, an emerging discipline, is the scientific study of algorithms that use computing power to build prediction models so that the certainty of a task can be managed. In this project, a prediction model for the bugs expected during software development is designed to help the product manager gain confidence in quality. For the data, exploratory research and interviews were conducted within Synopsys. The project has been successfully adopted within the Verification IP group of an EDA leader and is in the process of being implemented across all business units.
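The abstract does not disclose which learning algorithm or features were actually used, so the following is only a minimal, hypothetical sketch of how such a bug-count prediction model might be prototyped in Python with scikit-learn; the feature names (code churn, files touched, review findings, new tests) and the synthetic data are illustrative assumptions, not details from the thesis.

```python
# Hypothetical sketch: predicting the number of bugs introduced in a release
# from simple development metrics. Features and data are invented for
# illustration; the thesis does not disclose the actual model or features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(100, 20000, n),   # lines of code changed
    rng.integers(1, 200, n),       # files touched
    rng.integers(0, 50, n),        # code-review findings
    rng.integers(0, 400, n),       # new regression tests added
])
# Synthetic "ground truth": more churn -> more bugs, more tests -> fewer.
y = np.clip(0.002 * X[:, 0] + 0.1 * X[:, 2] - 0.01 * X[:, 3]
            + rng.normal(0, 2, n), 0, None)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"MAE on held-out releases: {mean_absolute_error(y_te, pred):.2f} bugs")
```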
3

Meister, Tilo. "Pinzuordnungs-Algorithmen zur Optimierung der Verdrahtbarkeit beim hierarchischen Layoutentwurf." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-96764.

Full text of the source
Abstract:
This work deals with the optimization of pin assignments, for which an accurate routability prediction is a prerequisite; it therefore also introduces methods for routability prediction. The optimization of pin assignments, for which these methods are needed, takes place after initial placement and before routing. Known methods of routability prediction from all steps of layout design are compiled, compared, and analyzed for their usability as part of the pin assignment step. These investigations lead to the development of a routability prediction method that is adapted to the specific requirements of pin assignment. So far, pin assignment of complex electronic devices has been a predominantly manual process: practical experience exists, yet it had not been formalized or transferred into an algorithmic formulation. This contribution formulates pin assignment methods algorithmically in order to automate and improve pin assignment. Distinctive characteristics of the developed algorithms are their usability already during layout planning, their capability to integrate into a hierarchical design flow, and their consideration of the constraints of differential pairs. Both aspects, routability prediction and assignment algorithms, are finally brought together by using the newly developed routability evaluation to compare and select among the formulated assignment algorithms.
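The dissertation's own algorithms and routability metric are not reproduced in the abstract; as a rough illustration of the underlying idea (assigning nets to pins so that an estimated wiring cost is minimized), here is a hypothetical sketch that casts a single-component pin assignment as a linear assignment problem solved with SciPy. The cost model, straight-line distance from a pin to the net's other endpoint, is an assumption for illustration only.

```python
# Hypothetical sketch: assign each net to one pin of a component so that the
# total estimated wire length is minimal. This is a linear assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
pins = rng.uniform(0, 10, size=(8, 2))     # candidate pin positions (x, y)
targets = rng.uniform(0, 10, size=(8, 2))  # where each net has to go

# cost[i, j] = estimated wire length if net i is assigned to pin j
cost = np.linalg.norm(targets[:, None, :] - pins[None, :, :], axis=2)

net_idx, pin_idx = linear_sum_assignment(cost)
for n_i, p_i in zip(net_idx, pin_idx):
    print(f"net {n_i} -> pin {p_i}  (length {cost[n_i, p_i]:.2f})")
print("total estimated length:", round(cost[net_idx, pin_idx].sum(), 2))
```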
4

Knechtel, Johann. "Interconnect Planning for Physical Design of 3D Integrated Circuits." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-143635.

Full text of the source
Abstract:
Vertical stacking of multiple 2D chips, based on modern manufacturing and integration technologies, enables three-dimensional integrated circuits (3D ICs). This exploitation of the third dimension is generally accepted as a means of achieving higher packing densities, heterogeneous integration, shorter interconnects, reduced power consumption, increased data bandwidth, and highly parallel systems in one device. However, the commercial acceptance of 3D ICs currently lags behind expectations, mainly due to challenges regarding manufacturing and integration technologies as well as design automation. This work addresses three selected, practically relevant design challenges: (i) increasing the constrained reusability of proven, reliable 2D intellectual property blocks, (ii) planning different types of (comparatively large) through-silicon vias with focus on their impact on design quality, and (iii) structural planning of massively parallel, 3D-IC-specific interconnect structures during 3D floorplanning. A key concept of this work is to account for interconnect structures and their properties during early design phases in order to support effective and high-quality 3D-IC design flows. To tackle the challenges listed above, modular design-flow extensions and methodologies have been developed. Experimental investigations reveal the effectiveness and efficiency of the proposed techniques and provide findings on 3D integration with particular focus on interconnect structures. We suggest considering these findings when formulating guidelines for successful 3D-IC design automation.
5

Nalbantis, Dimitris. "World Wide Web based layout synthesis for analogue modules." Thesis, University of Kent, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365218.

Full text of the source
6

Greenwood, Rob. "Semantic analysis for system level design automation." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-10062009-020216/.

Full text of the source
7

Han, Yiding. "Graphics Processing Unit-Based Computer-Aided Design Algorithms for Electronic Design Automation." DigitalCommons@USU, 2014. https://digitalcommons.usu.edu/etd/3868.

Full text of the source
Abstract:
Electronic design automation (EDA) tools are a specific set of software programs that play important roles in modern integrated circuit (IC) design, automating the various stages of the IC design process. Among these stages, two important EDA tools are the focus of this research: floorplanning and global routing. Specifically, the goal of this study is to parallelize these two tools such that their execution time can be significantly shortened on modern multi-core and graphics processing unit (GPU) architectures. The GPU is a massively parallel architecture, enabling thousands of independent threads to execute concurrently. Although a small set of EDA tools can benefit from using the GPU to accelerate their speed, most algorithms in this field are designed with the single-core paradigm in mind. The floorplanning and global routing algorithms are among the latter and are difficult to speed up on the GPU due to their inherently sequential nature. This work parallelizes the floorplanning and global routing algorithms through a novel approach and achieves significant speedups for both tools implemented on GPU hardware. Specifically, with a complete overhaul of solution-space and design-space exploration, the GPU-based floorplanning algorithm renders a 4-166X speedup while achieving similar or improved solutions compared with the sequential algorithm. The GPU-based global routing algorithm is shown to achieve significant speedup against existing state-of-the-art routers while delivering competitive solution quality. Importantly, this parallel model for global routing renders a stable solution that is independent of the level of parallelism. In summary, this research has shown that, through a design paradigm overhaul, sequential algorithms can also benefit from massively parallel architectures. The findings of this study have a positive impact on the efficiency and design quality of the modern EDA design flow.
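The dissertation's GPU algorithms are not reproduced here; as a loose, CPU-side analogue of the central idea of scoring many candidate moves concurrently instead of one at a time, the following hypothetical NumPy sketch evaluates a whole batch of random block-to-slot assignments by half-perimeter wirelength (HPWL) in one vectorized pass. The block, net, and grid data are invented.

```python
# Hypothetical sketch: score a batch of candidate floorplans in one vectorized
# pass, mimicking parallel exploration of the solution space. Data are made up.
import numpy as np

rng = np.random.default_rng(2)
n_blocks, batch = 16, 256
grid = np.array([(x, y) for x in range(4) for y in range(4)], dtype=float)
nets = [rng.choice(n_blocks, size=3, replace=False) for _ in range(20)]

# Each candidate is a permutation: block i sits on grid slot perm[c, i].
perms = np.stack([rng.permutation(n_blocks) for _ in range(batch)])

def batch_hpwl(perms):
    """Half-perimeter wirelength of every candidate, vectorized over the batch."""
    total = np.zeros(len(perms))
    for net in nets:
        pos = grid[perms[:, net]]                  # (batch, |net|, 2)
        span = pos.max(axis=1) - pos.min(axis=1)   # bounding box per candidate
        total += span.sum(axis=1)
    return total

scores = batch_hpwl(perms)
best = scores.argmin()
print(f"best of {batch} candidates: HPWL = {scores[best]:.1f}")
```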
8

Farsaei, Ahmadreza. "On the electronic-photonic integrated circuit design automation : modelling, design, analysis, and simulation." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/61272.

Full text of the source
Abstract:
Photonic networks form the backbone of the data communication infrastructure. In particular, in current and future wireless communication systems, photonic networks are becoming increasingly popular for data distribution between the central office and the remote antenna units at base stations. As wireless-photonic systems become increasingly popular, not only is a low-cost implementation of such systems desirable, but a reliable electronic-photonic design automation (EPDA) framework supporting such complex circuits and systems is also crucial. This work investigates the foundations and presents the implementation of various aspects of such an EPDA framework. Various building blocks of silicon-photonic systems are reviewed in the first chapter of the thesis. The review discusses an example of a 60-GHz wireless system based on photonic technology, which could be suitable for the emerging 5th-generation (5G) cellular networks, and also provides design use cases that need to be supported by the EPDA framework. Integrated photonic circuits, which are the building blocks of wireless-photonic systems, will achieve their potential only if designers can efficiently and reliably design, model, simulate, and tune the performance of electro-optical components. The developed EPDA framework supports an integrated optical solver, INTERCONNECT, to provide optical time- and frequency-domain simulations so that a designer is able to simulate electrical, optical, and electro-optical circuits using two developed and implemented methodologies: sequential electro-optical simulation and co-simulation. We propose an algorithm to enhance the performance of electronic simulation engines, such as Harmonic Balance, that can be integrated into the EPDA simulation methods. It is shown that body-biasing of CMOS transistors can be used as an effective method for tuning the performance of the electronic section of an electro-optical design; this can help designers adjust the performance of their designs after fabrication. Modelling of electro-optical components is also discussed in this thesis; it is shown that some traditional passive components such as inductors, which take up a large amount of space in CMOS processes, could be fabricated in the much lower-cost photonic process, and consequently the overall cost of silicon-photonic systems can be reduced significantly.
Faculty of Applied Science
Department of Electrical and Computer Engineering
Graduate
9

Tang, Dennis. "Evaluation of EDA tools for electronic development and a study of PLM for future development businesses." Thesis, Linköpings universitet, Fysik och elektroteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-104011.

Full text of the source
Abstract:
Electronic Design Automation (EDA) tools are today very capable computer programs that support electronics engineers in the design of printed circuit boards (PCBs). All tools have their strengths and weaknesses; when choosing the right tool, many factors need to be taken into consideration aside from the tools themselves. Companies need to focus on the product and revenues for a business to be viable, and depending on the knowledge and strengths of the company, the choice of tools varies. The decision should be based on the efficiency of the tools and the necessity of their functions for the company rather than on the price tags. The quality and availability of support for the tools, training costs, how long it will take to put the tool into operation, and present or future collaboration partners are equally important factors when deciding on the right tool. The absence of experience and knowledge of the current tool within a company is a factor that could affect important operations; it is therefore important to provide training and education on how to use the tool to increase its efficiency. Providing training and education can be a large expense, but it avoids later changes and keeps the business competitive. The choice of EDA tool should be based on the employed engineers' current knowledge and experience of the preferred tool. If the engineers' knowledge and experience vary too much, it might be preferable to make a transition to one of the tools through training and education. Product lifecycle management (PLM) is a data management and business activity management system that focuses on the lifecycle of a product. To manage the lifecycle of a product it is necessary to split the lifecycle into stages and phases for a more manageable and transparent workflow. Overseeing a product's entire lifecycle brings benefits in many areas. PLM's greatest benefit for EDA is collaboration across separate groups and companies: by working together through a PLM platform, companies can forge strong design chains that combine their best capabilities to deliver the product to the customers. This report is a study evaluating which EDA tool suits the company, taking into consideration the employed engineers' demands, requests, and competence. The company's interest in PLM also prompted a short theoretical study of the benefits of PLM and EDA.
10

Feng, Wenyuan. "Evolutionary design automation for control systems with practical constraints." Thesis, University of Glasgow, 2000. http://theses.gla.ac.uk/4507/.

Full text of the source
Abstract:
The aim of this work is to explore the potential and to enhance the capability of evolutionary computation in the development of novel and advanced methodologies that enable control system structural optimisation and design automation for practical applications. Current design and optimisation methods adopted in control systems engineering are in essence based upon conventional numerical techniques that require derivative information of performance indices. These techniques lack robustness in solving practical engineering problems, which are often of a multi-dimensional, multi-modal nature; using them can often achieve neither global nor structural optimisation. In contrast, evolutionary learning tools have the ability to search a multi-dimensional, multi-modal space, but they cannot home in on a local optimum as efficiently as a conventional calculus-based method. The first objective of this research is therefore to develop a reliable and effective evolutionary algorithm for engineering applications. In this thesis, a globally optimal evolutionary methodology and environment for control system structuring and design automation is developed which requires no design indices to be differentiable. This is based on the development of a hybridised GA search engine whose local tuning is tremendously enhanced by the incorporation of Hill-Climbing (HC), Simulated Annealing (SA) and Simplex techniques to improve the performance in search and design. A Lamarckian inheritance technique is also developed to improve crossover and mutation operations in GAs. Benchmark tests have shown that the enhanced hybrid GA is accurate and reliable. Based on this search engine and optimisation core, a linear and nonlinear control system design automation suite is developed in a Java-based, platform-independent format, which can readily be made available for design and design collaboration over corporate intranets and the Internet. Since the cost function no longer needs to be differentiable, hybridised indices combining time- and frequency-domain measurements and accommodating practical constraints can now be incorporated in the design. Such novel indices are proposed in the thesis and incorporated in the design suite. The Proportional plus Integral plus Derivative (PID) controller is very popular in real-world control applications, and the development of new PID tuning rules remains an area of active research. Many researchers, such as Åström and Hägglund, Ho, Zhuang and Atherton, have suggested methods; however, these methods still suffer from poor load disturbance rejection, poor stability, or shutting off of the derivative action. In this thesis, systematic and batch optimisation of PID controllers to meet practical requirements is achieved using the developed design automation suite. A novel cost function is designed to take disturbance rejection, stability in terms of gain and phase margins, and other specifications into account at the same time. Comparisons made with Ho's method confirm that the derivative action can play an important role in improving load disturbance rejection while maintaining the same stability margins. Comparisons made with Åström's method confirm that the results from this thesis are superior not only in load disturbance rejection but also in terms of stability margins. Further robustness issues are addressed by extending the PID structure to a free-form transfer function, which is realised by achieving design automation.
The Quantitative Feedback Theory (QFT) method offers a direct frequency-domain design technique for uncertain plants, which can deal non-conservatively with different types of uncertainty models and specifications. QFT design problems are often multi-modal and multi-dimensional, and loop shaping is the most challenging part. Global solutions can hardly be obtained using analytical, convex or linear programming techniques; in addition, these conventional methods often impose unrealistic or impractical assumptions and often lead to very conservative designs. In this thesis, the GA-based automatic loop shaping for QFT controllers suggested by the research group is furthered. A new index is developed for the design which can describe stability, load rejection and reduction of high-frequency gains, something that has not been achieved with existing methods. The corresponding prefilter can also be systematically designed if tracking is one of the specifications. The results from the evolutionary-computing-based design automation suite show that the evolutionary technique performs much better than numerical methods and manual designs, i.e., the 'high frequency gain' and the controller order have been significantly reduced. Time-domain simulations show that the designed QFT controller combined with the corresponding prefilter performs more satisfactorily.
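The thesis' hybridised cost functions and GA are not given in the abstract; the following is a minimal, hypothetical Python sketch of the general idea of tuning PID gains with a small genetic algorithm against a composite cost. Here the cost combines only time-domain terms (time-weighted tracking error plus an overshoot penalty) on an assumed second-order plant; the frequency-domain margin terms used in the thesis are not reproduced.

```python
# Hypothetical sketch: GA-based PID tuning against a composite cost.
# Plant, cost weights, and GA settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
dt, T = 0.01, 10.0
t = np.arange(0.0, T, dt)

def simulate_pid(kp, ki, kd):
    """Unit-step response of the plant y'' + 2y' + y = u under PID control."""
    y = v = integ = prev_err = 0.0
    out = np.empty_like(t)
    for i in range(len(t)):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        a = u - 2.0 * v - y          # plant acceleration (explicit Euler)
        v += a * dt
        y += v * dt
        out[i] = y
    return out

def cost(gains):
    y = simulate_pid(*gains)
    itae = np.sum(t * np.abs(1.0 - y)) * dt       # time-weighted error
    overshoot = max(0.0, y.max() - 1.0)
    return itae + 50.0 * overshoot                # weighted penalty (assumed)

# Minimal GA: truncation selection, blend crossover, Gaussian mutation, elitism.
pop = rng.uniform([0, 0, 0], [10, 5, 5], size=(30, 3))
for gen in range(40):
    fit = np.array([cost(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:10]]
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(0, 10, 2)]
        w = rng.uniform(0, 1, 3)
        children.append(np.clip(w * a + (1 - w) * b + rng.normal(0, 0.1, 3), 0, None))
    pop = np.array(children)
    pop[0] = parents[0]
best = min(pop, key=cost)
print("best (Kp, Ki, Kd):", np.round(best, 3), " cost:", round(cost(best), 3))
```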
11

Malayattil, Sarosh Aravind. "Design of a Multibus Data-Flow Processor Architecture." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/31379.

Full text of the source
Abstract:
General purpose microcontrollers have been used as computational elements in various spheres of technology. Because of the distinct requirements of specific application areas, however, general purpose microcontrollers are not always the best solution. There is a need for specialized processor architectures for specific application areas. This thesis discusses the design of such a specialized processor architecture targeted towards event driven sensor applications. This thesis presents an augmented multibus dataflow processor architecture and an automation framework suitable for executing a range of event driven applications in an energy efficient manner. The energy efficiency of the multibus processor architecture is demonstrated by comparing the energy usage of the architecture with that of a PIC12F675 microcontroller.
Master of Science
12

Johnson, Phillip. "Design and automation of MEDUSA (Materials and Electronic Device Universal System Analyzer)." Thesis, This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-03242009-040444/.

Full text of the source
13

Slezák, Josef. "Evoluční syntéza analogových elektronických obvodů s využitím algoritmů EDA." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2014. http://www.nusl.cz/ntk/nusl-233666.

Full text of the source
Abstract:
This doctoral thesis is focused on the design of analog electronic circuits using algorithms with probabilistic models (EDA algorithms, i.e., estimation of distribution algorithms). Given the desired characteristics of the target circuit, the presented methods are able to design both the parameters of the components used and the circuit topology. Three different methods employing EDA algorithms are proposed and tested on examples of real problems from the field of analog electronic circuits. The first method is intended for the design of passive analog circuits and uses the UMDA algorithm to design both the topology and the parameter values of the components; it is applied to the design of an admittance network with a required input impedance for a chaotic oscillator. The second method is also intended for passive analog circuits and uses a hybrid approach: UMDA for topology synthesis and a local optimization method for the component parameters. The third method also allows the design of analog circuits containing transistors. It likewise uses a hybrid approach: an EDA algorithm for topology synthesis and a local optimization method for determining the parameters of the components used. The topology information in the individuals of the population is represented by graphs and hypergraphs.
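To illustrate the kind of algorithm the thesis builds on, here is a minimal, hypothetical sketch of a continuous UMDA (univariate marginal distribution algorithm): each generation fits an independent Gaussian per variable to the best individuals and resamples from it. The toy task, recovering R and C of a series RC branch from a target impedance magnitude, and all numeric values are assumptions, not the chaotic-oscillator network from the thesis.

```python
# Hypothetical sketch of continuous UMDA tuning two component values.
import numpy as np

rng = np.random.default_rng(4)
w = 2 * np.pi * np.logspace(2, 5, 50)           # rad/s
target = np.abs(1000 + 1 / (1j * w * 22e-9))    # |Z| of R = 1 kohm, C = 22 nF

def error(x):
    """Log-magnitude fit error; x = [log10(R), log10(C)]."""
    R, C = 10 ** x[0], 10 ** x[1]
    z = np.abs(R + 1 / (1j * w * C))
    return np.mean((np.log10(z) - np.log10(target)) ** 2)

lo_b = np.array([1.0, -12.0])                   # log10 bounds for R [ohm], C [F]
hi_b = np.array([6.0, -6.0])
pop = rng.uniform(lo_b, hi_b, size=(200, 2))
for gen in range(60):
    fitness = np.array([error(p) for p in pop])
    elite = pop[np.argsort(fitness)[:50]]       # truncation selection
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12
    pop = np.clip(rng.normal(mu, sigma, size=(200, 2)), lo_b, hi_b)

best = min(pop, key=error)
print(f"recovered R = {10 ** best[0]:.0f} ohm, C = {10 ** best[1] * 1e9:.1f} nF")
```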
14

Lakshmanan, Karthick. "Design of an Automation Framework for a Novel Data-Flow Processor Architecture." Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/34193.

Full text of the source
Abstract:
Improved process technology has resulted in the integration of computing elements into multiple application areas. General purpose micro-controllers are designed to assist in this integration through a flexible design. The application areas, however, are so diverse in nature that general purpose micro-controllers may not provide a suitable abstraction for all classes of applications, and there is a need for specially designed architectures in application areas where general purpose micro-controllers suffer from inefficiencies. This thesis focuses on the design of a processor architecture that provides a suitable design abstraction for a class of periodic, event-driven embedded applications such as sensor-monitoring systems. The design principles of the processor architecture are focused on the target application requirements, which are identified as an event-driven nature with concurrent task execution and deterministic timing behavior. Additionally, to reduce the design complexity of applications on this novel architecture, an automation framework has been implemented. This thesis presents the design of the processor architecture and the automation framework, explaining the suitability of the designed architecture for the target applications. The energy use of the novel architecture is compared with that of a PIC12F675 micro-controller to demonstrate the energy efficiency of the designed architecture.
Master of Science
15

Wrzyszcz, Artur. "Employing Petri nets in digital design : an area and power minimization perspective." Thesis, University of Bristol, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265361.

Full text of the source
16

SASSONE, ALESSANDRO. "Integration-aware Modeling, Simulation and Design Techniques for Smart Electronic Systems." Doctoral thesis, Politecnico di Torino, 2015. http://hdl.handle.net/11583/2597354.

Full text of the source
Abstract:
Smart electronic systems represent a vast category of energy-autonomous and ubiquitously connected systems that incorporate analog, digital and MEMS components, combined with various kinds of sensors, actuators, energy storage devices and power sources. Smart systems generally find applications in the worldwide market for "Monitoring & Control" products and solutions, hence they are used in a broad range of sectors, including automotive, healthcare, Internet of Things, ICT, safety and security, and aerospace. In order to support such a wide variety of application scenarios, smart systems integrate a multitude of functionalities, technologies, and materials. The design of smart systems is therefore a complex and major multidisciplinary challenge, as it goes beyond the design of the individual components and subsystems. New design and simulation methodologies are fundamental for exploring the design space in order to find the most efficient trade-off between performance and involved resources, and for evaluating and validating system behavior taking into account the interactions between closely coupled components of a different nature. Current system-level design methods must indeed accurately manage increasing system complexity and interaction effects between the environment and the system and among the components. Nevertheless, the involved components are usually described using different languages, relying on different models of computation, and need to be jointly simulated at various abstraction levels. This dissertation aims at bridging this gap by focusing on novel integration-aware solutions for different aspects of a smart system: the design of digital subsystems and components, the modeling of batteries, and the power estimation of smart systems at the system level of design abstraction. Although the design flow of digital components is well consolidated and highly standardized (e.g., commercial, fully automated synthesis & optimization tools, technology libraries, etc.), additional integration-aware design constraints arise due to the interaction of components of different technological domains and to the harsh environment where smart systems typically operate. This work presents a methodology for addressing these new constraints, thus enhancing the design of digital components. As a partial fulfillment of such constraints results in a global degradation of performance, the proposed methodology focuses on the effects rather than the physical sources of the constraints. This makes it possible to move from the typical RTL to a system level of abstraction, i.e., SystemC TLM, obtaining a faster validation of the performance of digital subsystems. Energy efficiency is becoming increasingly important for self-powered smart electronic systems, as the amount of energy they can gather from the environment or accumulate in storage devices cannot be considered constant over time. Power supplies therefore have a very heterogeneous nature: depending on the application, more than one type of power source (e.g., photovoltaic cells, thermoelectric or piezoelectric energy generators) and storage device (e.g., rechargeable and non-rechargeable batteries, supercapacitors, and fuel cells) could be hosted on the system. As a matter of fact, no single power source can provide the desired level of energy density, power density, current, and voltage to the system for all possible workloads.
Batteries are being used extensively in smart electronic systems thanks to their increased energy capacity, improved production processes, and lower cost over recent years. However, a battery is an electrochemical device involving complicated chemical reactions that result in many non-idealities in its behavior. Therefore, a smart system designer has to characterize these non-idealities in order to accurately model how the battery delivers power to the system. This dissertation introduces a systematic methodology for the automatic construction of battery models from datasheet information, thus avoiding costly and time-consuming measurements of battery characteristics. This methodology allows generating models for several battery charge and discharge characteristics with tunable accuracy according to the amount of available manufacturers' data, and without any limitation in battery chemistry, materials, form factor, or size. Finally, this work introduces a modeling and simulation framework for the system-level estimation of power and energy flows in smart systems. Current simulation- or model-based design approaches do not target a smart system as a whole, but rather single domains (digital, analog, power devices, etc.), and make use of proprietary tools and pre-characterized models having a fixed abstraction level and fixed semantics. The proposed methodology uses principles borrowed from the system-level functional simulation of digital systems and extends them to simulate the behavior of subsystems whose functionality is to generate, convert, or store energy (e.g., power sources, voltage regulators, energy storage devices, etc.). This has been done at the system level using standard open-source tools such as SystemC AMS and IP-XACT, which allow current and voltage to be represented explicitly, similarly to digital logic signals. The implemented approach facilitates virtual prototyping, architecture exploration, and integration validation, with high flexibility and modularity.
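As a pointer to what "a battery model built from datasheet information" can mean in its simplest form, here is a hypothetical toy sketch: an open-circuit-voltage versus state-of-charge table interpolated from a few datasheet points, plus a constant internal resistance, used to simulate a constant-current discharge. All numbers are invented; the thesis derives far richer models automatically from manufacturer data.

```python
# Hypothetical sketch: toy datasheet-based battery model and discharge run.
import numpy as np

capacity_ah = 2.0            # rated capacity (assumed)
r_internal = 0.05            # internal resistance in ohm (assumed)
soc_pts = np.array([0.0, 0.1, 0.3, 0.6, 0.9, 1.0])
ocv_pts = np.array([3.0, 3.45, 3.6, 3.75, 4.0, 4.2])   # volts (assumed)

def terminal_voltage(soc, current_a):
    """OCV interpolated from the datasheet table minus the resistive drop."""
    return np.interp(soc, soc_pts, ocv_pts) - r_internal * current_a

dt_h, current = 1.0 / 60.0, 1.0      # 1-minute steps, 1 A discharge
soc, t_min = 1.0, 0
while soc > 0.0 and terminal_voltage(soc, current) > 3.0:
    soc -= current * dt_h / capacity_ah
    t_min += 1
print(f"cut-off reached after about {t_min} min, remaining SoC {soc:.2f}")
```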
17

Rykowski, Ronna Wynne. "Design of the IDO for the intelligent data object management system (IDOMS) project." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9948.

Full text of the source
18

Motiwalla, Luvai Fazlehusen. "A knowledge-based electronic messaging system: Framework, design, prototype development, and validation." Diss., The University of Arizona, 1989. http://hdl.handle.net/10150/184727.

Full text of the source
Abstract:
Although electronic messaging systems (EMS) are an attractive business communication medium, several studies on the usage and impact of EMS have shown that, despite the benefits, they have generally been used for routine and informal communication activities. Theoretically, EMS have yet to find their niche in organizational communications. Technically, EMS designs are not flexible enough to support the communication activities of managers, not maintainable enough to permit easy integration with other office applications and access to information from data/knowledge bases, and not easily extendible beyond the scope of their initial design. Behaviorally, end users are not directly involved in the development of EMS. This dissertation attempts to bridge the transition of EMS technology from message processing systems to communication support systems. First, the dissertation provides an analysis for a knowledge-based messaging system (KMS) through a framework. The framework provides a theoretical basis to link management theory to EMS technology; it suggests that the communication needs of managers vary depending on the activity level, implying related variations in EMS functionality. Second, the dissertation provides a design for the KMS through an architecture which incorporates design and implementation issues such as flexibility, maintainability, and extendibility. The superimposition of the KMS on an existing EMS provides flexibility, the loose coupling between the KMS-interface components and the KMS functions increases its maintainability, and the strong functional decomposition and cohesion enhance the extendibility of the system beyond the scope of its initial design. Finally, the dissertation provides an implementation through the development of a prototype KMS which involves users in the design process through a validation study conducted at the University of Arizona. The prototype used GDSS tools to elicit message attributes for the personal knowledge base; this method proved effective in reducing the bottleneck observed in the simultaneous acquisition of knowledge from multiple experts. Similarly, the combination of observation with interviews proved effective in eliciting the organizational knowledge base. The validation study measured the system's accuracy in prioritizing messages for the users, which proved to be very high.
19

Qi, Ji. "System-level design automation and optimisation of network-on-chips in terms of timing and energy." Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/386210/.

Full text of the source
Abstract:
As system complexity constantly increases, traditional bus-based architectures are less and less adaptable to the growing design demands. Specifically in on-chip digital system design, Network-on-Chip (NoC) architectures are promising platforms offering distributed multi-core co-operation and inter-communication. Since the design cost and time cycles of NoC systems grow rapidly with higher integration, system-level Design Automation (DA) techniques are used to abstract models at early design stages for functional validation and performance prediction. Yet precise abstractions and efficient simulations remain critical challenges for modern DA techniques to improve design efficiency. This thesis makes several contributions to address these challenges. We have firstly extended a backbone simulator, NIRGAM, to offer accurate system-level models and performance estimates. A case study of developing a one-to-one transmission system using asynchronous FIFOs as buffers, in both the NIRGAM simulator and a synthesised gate-level design, is given to validate the model accuracy by comparing their power and timing performance. Our second contribution improves DA techniques by proposing a novel method to efficiently emulate non-rectangular NoC topologies in NIRGAM and to generate accurate energy and timing estimates. The proposed method uses time-regulated models to emulate virtual non-rectangular topologies based on a regular mesh; the performance accuracy of the virtual topologies is validated by comparison against the corresponding real NoC topologies. The third contribution of our research is a novel task-mapping scheme that generates optimal mappings to tile-based NoC networks with accurate performance prediction and increased execution speed. A novel Non-Linear Programming (NLP) based mapping problem is formulated and solved by a modified Branch and Bound (BB) algorithm. The proposed method predicts the performance of the optimised mappings and compares it with NIRGAM simulations for accuracy validation.
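The thesis' NLP formulation and modified branch-and-bound are not reproduced in the abstract; as a stripped-down illustration of the branch-and-bound idea behind task mapping, the following hypothetical sketch assigns communicating tasks to tiles of a small mesh, pruning any branch whose partial communication cost (traffic volume times Manhattan hop count) already exceeds the best complete mapping found. The traffic matrix, mesh size, and cost model are assumptions.

```python
# Hypothetical sketch: branch-and-bound task mapping onto a 3x3 mesh NoC.
import itertools
import numpy as np

rng = np.random.default_rng(5)
n_tasks, mesh = 6, 3
traffic = np.triu(rng.integers(0, 20, (n_tasks, n_tasks)), k=1)  # volumes
tiles = list(itertools.product(range(mesh), range(mesh)))

def hops(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])      # Manhattan distance

best_cost, best_map = float("inf"), None

def branch(mapping, used, cost):
    global best_cost, best_map
    if cost >= best_cost:                           # bound: prune this branch
        return
    task = len(mapping)
    if task == n_tasks:
        best_cost, best_map = cost, dict(mapping)
        return
    for tile in tiles:
        if tile in used:
            continue
        extra = sum(traffic[t, task] * hops(pos, tile)
                    for t, pos in mapping.items())
        branch({**mapping, task: tile}, used | {tile}, cost + extra)

branch({}, set(), 0)
print("best mapping:", best_map, "cost:", best_cost)
```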
20

Aluru, Gunasekhar. "Exploring Analog and Digital Design Using the Open-Source Electric VLSI Design System." Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc849770/.

Full text of the source
Abstract:
The design of VLSI electronic circuits can be carried out at many different abstraction levels, from system behavior down to the most detailed, physical layout level. As the number of transistors in VLSI circuits increases, the complexity of the design increases as well, and it is now beyond human ability to manage by hand. Hence CAD (Computer-Aided Design) or EDA (Electronic Design Automation) tools are involved in the design: EDA or CAD tools automate the design, verification and testing of these VLSI circuits. In today's market there are many EDA tools available; however, they are very expensive and require high-performance platforms. One of the key challenges today is to select appropriate CAD or EDA tools which are open source for academic purposes. This thesis provides a detailed examination of an open-source EDA tool called the Electric VLSI Design System. An excellent and efficient CAD tool, useful for students and teachers who want to implement ideas by modifying the source code, Electric fulfills these requirements. The primary objective of this thesis is to explain Electric's features and architecture and to present various digital and analog designs implemented with this software for educational purposes. Since the choice of an EDA tool is based on the efficiency and the functions it can provide, this thesis explains all the analysis and synthesis tools that Electric provides and how efficient they are. Hence, this thesis is of benefit to students and teachers who choose Electric as their open-source EDA tool for educational purposes.
21

Kobylinski, Krzysztof Rafal. "A new level of electronic design automation, design flow manager : a software tool implemented using the object-oriented development methodology." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/mq23367.pdf.

Full text of the source
22

Mukherjee, Valmiki. "A Dual Dielectric Approach for Performance Aware Reduction of Gate Leakage in Combinational Circuits." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5255/.

Full text of the source
Abstract:
The design of systems in the low-end nanometer domain has introduced new dimensions of power consumption and dissipation in CMOS devices. With continued and aggressive scaling using low-thickness SiO2 for the transistor gates, gate leakage due to gate-oxide direct tunneling current has emerged as the major component of leakage in CMOS circuits. Therefore, providing a solution to the issue of gate-oxide leakage has become one of the key concerns in achieving low-power and high-performance CMOS VLSI circuits. In this thesis, a new approach is proposed involving dual dielectrics of dual thicknesses (DKDT) for reducing both ON- and OFF-state gate leakage. It is claimed that the simultaneous utilization of SiON and SiO2, each with multiple thicknesses, is a better approach for gate leakage reduction than the conventional usage of a single gate dielectric (SiO2), possibly with multiple thicknesses. An algorithm is developed for DKDT assignment that minimizes the overall leakage of a circuit without compromising its performance. Extensive experiments carried out on the ISCAS'85 benchmarks using 45nm technology showed that the proposed approach can reduce leakage by as much as 98% (89.5% on average) without degrading performance.
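The thesis' DKDT assignment algorithm is not described in the abstract; purely as a hypothetical sketch of the general slack-driven idea (trade a bounded amount of delay for a large leakage saving), the following toy example starts every gate on a fast, leaky option and greedily switches gates to a lower-leakage but slower option while a path delay budget holds. Gate counts, delays, and leakage numbers are invented and do not reflect the thesis' models.

```python
# Hypothetical sketch: greedy leakage-versus-delay assignment on a single path.
import numpy as np

rng = np.random.default_rng(6)
n_gates = 12
d_fast = rng.uniform(8, 12, n_gates)        # ps, fast/leaky option (assumed)
d_slow = d_fast * rng.uniform(1.05, 1.2, n_gates)
leak_fast = rng.uniform(40, 80, n_gates)    # nA per gate (assumed)
leak_slow = leak_fast * rng.uniform(0.02, 0.1, n_gates)

budget = 1.03 * d_fast.sum()                # allow 3% extra path delay
assign = np.zeros(n_gates, dtype=bool)      # False = fast, True = low-leakage
delay = d_fast.sum()

# Switch gates with the best leakage-saving per delay-penalty ratio first.
gain = (leak_fast - leak_slow) / (d_slow - d_fast)
for g in np.argsort(-gain):
    if delay + (d_slow[g] - d_fast[g]) <= budget:
        assign[g] = True
        delay += d_slow[g] - d_fast[g]

leak = np.where(assign, leak_slow, leak_fast).sum()
print(f"{assign.sum()}/{n_gates} gates switched, "
      f"leakage {leak:.0f} nA vs {leak_fast.sum():.0f} nA, "
      f"delay {delay:.0f} ps (budget {budget:.0f} ps)")
```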
23

Frangieh, Tannous. "A Design Assembly Technique for FPGA Back-End Acceleration." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/29225.

Full text of the source
Abstract:
Long wait times constitute a bottleneck limiting the number of compilation runs performed in a day, thus threatening to restrict Field-Programmable Gate Array (FPGA) adaptation in modern computing platforms. This work presents an FPGA development paradigm that exploits logic variance and hierarchy as a means to increase FPGA productivity. The practical tasks of logic partitioning, placement and routing are examined, and a resulting assembly framework, Quick Flow (qFlow), is implemented. Experiments show up to 10x speed-ups using the proposed paradigm compared to vendor tool flows.
Ph. D.
24

Fogaça, Mateus Paiva. "A new quadratic formulation for incremental timing-driven placement." Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/164067.

Full text of the source
Abstract:
The interconnect delay is a dominant factor in achieving timing closure in nanoCMOS circuits. During physical synthesis, placement aims to spread cells over the available area while optimizing an objective function with respect to the design constraints; it is therefore a key step in determining the total wirelength and, hence, in achieving timing closure. Incremental placement techniques aim to improve the quality of a given solution. Two quadratic approaches for incremental timing-driven placement that mitigate late violations through path smoothing and net load balancing are proposed in this work. Unlike previous works, the proposed formulations include a delay model in the quadratic function. Quadratic placement is applied incrementally through an operation called neutralization, which helps to preserve the qualities of the initial placement solution. In both techniques, the quadratic wirelength is weighted by the cells' drive strengths and the pin criticalities. The final results outperform the state of the art by 9.4% and 7.6% on average for WNS and TNS, respectively.
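The core of any quadratic placer, including the weighted formulation described above, is that minimizing a weighted sum of squared two-pin connection lengths reduces to solving one linear system per axis. The following hypothetical sketch shows that reduction on a three-cell toy netlist; the connections, fixed-pad positions, and criticality weights are invented, and the thesis' delay model and neutralization step are not reproduced.

```python
# Hypothetical sketch: weighted quadratic placement as a linear solve per axis.
import numpy as np

cells = 3
# (movable cell a, movable cell b or None, fixed point or None, weight)
# weights can encode drive strength and pin criticality (assumed values).
conns = [
    (0, 1, None, 2.0),            # cell0 - cell1, critical net (higher weight)
    (1, 2, None, 1.0),            # cell1 - cell2
    (0, None, (0.0, 0.0), 1.0),   # cell0 - fixed pad at (0, 0)
    (2, None, (10.0, 4.0), 1.5),  # cell2 - fixed pad at (10, 4)
]

def solve_axis(axis):
    """Minimize sum of w * (p_i - p_j)^2 along one axis: solve A p = b."""
    A = np.zeros((cells, cells))
    b = np.zeros(cells)
    for i, j, fixed, w in conns:
        if j is not None:                     # movable-movable connection
            A[i, i] += w; A[j, j] += w
            A[i, j] -= w; A[j, i] -= w
        else:                                 # movable-fixed connection
            A[i, i] += w
            b[i] += w * fixed[axis]
    return np.linalg.solve(A, b)

x, y = solve_axis(0), solve_axis(1)
for c in range(cells):
    print(f"cell {c}: ({x[c]:.2f}, {y[c]:.2f})")
```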
25

Perodou, Arthur. "Frequency design of passive electronic filters : a modern system approach." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEC046.

Full text of the source
Abstract:
L’explosion actuelle du nombre d’appareils connectés (smartphones, drones, IoT, …) et du volume des données à transmettre engendre une croissance exponentielle du nombre de bandes radiofréquences. Toutes les solutions élaborées pour faire face à cette demande croissante, telle que l’agrégation de porteuses, impliquent de concevoir des filtres fréquentiels satisfaisant des contraintes (performance, consommation d’énergie, coût, …) toujours plus strictes. Les filtres passifs AW, pour acoustic wave (AW) en anglais, semblant être les seuls pouvant satisfaire ces contraintes. Cependant, face à l’augmentation drastique de la complexité de leur problème de conception, les méthodes traditionnelles de conception apparaissent limitées. Il devient donc nécessaire de développer de nouvelles méthodes, qui soient systématiques et efficaces d’un point de vue algorithmique. Le problème de synthèse des filtres AW est une instance du problème de synthèse de filtres électroniques passifs, intrinsèquement lié aux origines de la théorie des Systèmes linéaires et de l’Automatique. Des méthodes systématiques ont été développées pour des cas particuliers, tels que les filtres LC-échelle, mais n’incluent pas les filtres AW. Notre but est donc de les revisiter et de les généraliser en utilisant une approche systémique moderne, afin d’obtenir une méthodologie systématique et efficace de conception de filtres électroniques passifs, avec un intérêt particulier pour les filtres AW. Pour ce faire, le paradigme de l’optimisation convexe, et particulièrement la sous-classe nommée optimisation LMI, nous paraît être un candidat naturel. Doté de solveurs efficaces, il permet de résoudre un large éventail de problèmes d’ingénierie en un faible temps de calcul. Afin de relier notre problème de conception à cet environnement, il est proposé d’utiliser des outils modernes tels que la représentation LFT et la caractérisation mathématique dite de dissipativité. Historiquement, deux approches de conception se sont opposées. La première consiste à faire varier les valeurs caractéristiques des composants jusqu’à satisfaction du gabarit fréquentiel. Bien que flexible et proche de la formulation originelle du problème, cette approche aboutit typiquement à un problème d’optimisation complexe. Notre première contribution est d’avoir révélé les sources de cette complexité ainsi que de les avoir considérablement réduites, en introduisant une représentation originale qui résulte de la combinaison de l’outil LFT et du formalisme des Systèmes Hamiltoniens à Ports. Un algorithme résolvant séquentiellement des problèmes LMIs est proposé, possédant un taux de convergence raisonnable si le point initial est bien choisi. La seconde approche se compose de deux étapes. Une fonction de transfert est d’abord synthétisée de façon à satisfaire le gabarit fréquentiel. Cette étape correspond à un problème classique d’Automatique et de Traitement du Signal qui peut être efficacement résolu via l’optimisation LMI. Puis, cette fonction de transfert est réalisée comme un circuit avec une topologie donnée. Pour cela, elle doit satisfaire des conditions de réalisation. Ces dernières ne peuvent pas toutes être inclues dans la première étape, et nous formalisons certaines pratiques courantes pour en considérer le plus possible. Cela nous mène à résoudre le problème général de synthèse fréquentielle de filtres LFT. Notre seconde contribution est d’avoir fourni des méthodes de synthèse efficaces, à base d’optimisation LMI, pour résoudre certains sous-problèmes. 
Cela est accompli en généralisant la technique de la factorisation spectrale conjointement avec l’utilisation des extensions du Lemme KYP. Pour certains filtres électroniques passifs, comme les filtres LC-échelle passe-bande, la seconde approche permet de résoudre efficacement le problème de conception associé. Plus généralement, elle procure un point initial à la première approche, comme illustré dans le cas d’un filtre AW
The current explosion of communicating devices (smartphones, drones, IoT...), along with the ever-growing data to be transmitted, produces an exponential growth of the radiofrequency bands. All solutions devised to handle this increasing demand, such as carrier aggregation, require to synthesise frequency filters with stringent industrial requirements (performance, energy consumption, cost ...). While the technology of acoustic wave (AW) resonators, that seem to be the only passive micro-electronic components available to fulfil these requirements, is mature, the associate design problem becomes dramatically complex. Traditional design methods, based on the intuition of designers and the use of generic optimisation algorithms, appear very limited to face this complexity. Thus, systematic and efficient design methods need to be developed. The design problem of AW filters happens to be an instance of the more general design problem of passive electronic filters, that played an important role in the early development of Linear Control and System theory. Systematic design methods were developed in particular cases, such as for LC-ladder filters, but do not enable to tackle the case of AW filters. Our aim is then to revisit and generalise these methods using a modern System approach, in order to develop systematic and efficient design methods of passive electronic filters, with a special focus on AW filters. To achieve this, the paradigm of convex optimisation, and especially the sub-class of Linear Matrix Inequality (LMI) optimisation, appears for us a natural candidate. It is a powerful framework, endowed with efficient solvers, able to optimally solve a large variety of engineering problems in a low computational time. In order to link the design problem with this framework, it is proposed to use modern tools such as the Linear Fractional Transformation (LFT) representation and a mathematical characterisation coming from Dissipative System theory. Reviewing the different design methods, two design approaches stand out. The first approach consists in directly tuning the characteristic values of the components until the frequency requirements are satisfied. While very flexible and close to the original problem, this typically leads to a complex optimisation problem with important convergence issues. Our first main contribution is to make explicit the sources of this complexity and to significantly reduce it, by introducing an original representation resulting from the combination of the LFT and the Port-Hamiltonian Systems (PHS) formalism. A sequential algorithm based on LMI relaxations is then proposed, having a decent convergence rate when a suitable initial point is available. The second approach consists of two steps. First, a transfer function is synthesised such that it satisfies the frequency requirements. This step is a classical problem in Control and Signal Processing and can be efficiently solved using LMI optimisation. Second, this transfer function is realised as a passive circuit in a given topology. To this end, the transfer function needs to satisfy some conditions, namely realisation conditions. The issue is to get them with a convex formulation, in order to keep efficient algorithms. As this is generally not possible, an idea is to relax the problem by including common practices of designers. This leads to solve some instances of a general problem denoted as frequency LFT filter synthesis. 
Our second main contribution is to provide efficient synthesis methods, based on LMI optimisation, for solving these instances. This is achieved in particular by generalising the spectral factorisation technique together with extended versions of the so-called KYP Lemma. For particular passive electronic filters, such as bandpass LC-ladder filters, this second approach allows the design problem to be solved efficiently. More generally, it provides an initial point for the first approach, as illustrated by the design of a particular AW filter.
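For readers unfamiliar with the KYP machinery mentioned above, the bounded-real lemma (a special case of the KYP Lemma) gives the flavour of the LMIs involved. This is a standard textbook statement, not the thesis's own formulation, and the usual technical assumptions (minimality, stability) are omitted here:

```latex
\left\| C(sI-A)^{-1}B + D \right\|_\infty \le \gamma
\quad\Longleftrightarrow\quad
\exists\, P = P^{\mathsf{T}} \succ 0 \;:\;
\begin{bmatrix}
A^{\mathsf{T}}P + PA & PB & C^{\mathsf{T}} \\
B^{\mathsf{T}}P & -\gamma I & D^{\mathsf{T}} \\
C & D & -\gamma I
\end{bmatrix} \preceq 0 .
```

Feasibility problems of this form are exactly the kind of convex constraints that off-the-shelf LMI solvers handle efficiently.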
Стилі APA, Harvard, Vancouver, ISO та ін.
26

Sewczwicz, Richard P. "Form definition language for intelligent data objects." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9953.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
27

Zia, Beenish. "Electronic Pillbox Logger for people with Parkinson's Disease." PDXScholar, 2011. https://pdxscholar.library.pdx.edu/open_access_etds/189.

Повний текст джерела
Анотація:
Parkinson's Disease (PD) is a motor disorder characterized by rigidity, tremor, and hypokinesia, with secondary manifestations such as defective posture and gait, a mask-like face, and dementia. Over the years it may lead to an inability to move or breathe, and ultimately the patient may succumb to chest infection and embolism. Prevalence studies show that more than six million people around the world suffer from PD. At present, there is no cure for PD, but there are effective treatments that can slow the progression of the disease and regulate its effects. PD results from a deficiency of dopamine, so most drugs that produce a salutary effect in PD either potentiate dopamine or work as dopamine agonists. Hence, to keep the symptoms of PD to a minimum, it is very important that the medications be consumed regularly, so that the dopamine level is maintained in the body of the subject. The electronic pillbox logger is a device designed to ensure this much-needed medication adherence in PD subjects, and it can also be used to measure the response to oral medication. This work describes the design and implementation of an electronic pillbox logger for use by people suffering from Parkinson's disease. The pillbox logger is designed to track medication adherence and prompt the user to take medication on time. It is pocket-sized, portable, and compartmented. It has a variety of alarms to remind the user to take the correct dose of their medication at the correct time. Most importantly, it keeps an electronic log of the time of dosage consumption by detecting the presence or absence of pills in the pillbox. This overcomes major limitations of other pillboxes with a logging function, which are often too large to carry, contain a single compartment, or only record the time the container was opened rather than the presence or absence of pills. The proposed pillbox logger complements a wearable device under development for people with Parkinson's disease that continuously monitors impaired movement. The combination of the pillbox logger with the wearable sensor will permit clinicians to determine the response to oral therapies, which can be used to optimize therapy. People with PD consume similar pills throughout the day; hence the pillbox logger has been designed to detect the presence or absence of pills in general rather than which specific pills are absent or present. This feature, recording knowledge about pills in general rather than about any specific pill, is a major reason why the current design is specific to PD subjects. Although the current design targets people with Parkinson's Disease, the pillbox is also suitable for other conditions in which the timing of medication is critical. The described pillbox logger was built and the design was validated through a number of tests. The battery-powered pillbox logger is able to accurately store information about the actual presence or absence of pills in each compartment, is capable of sending out reminder alarms at the right time of day, and can be connected to a host computer over a USB cable to read out the stored information. The proper functioning of the pillbox logger after thorough testing shows that the design was successful.
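A minimal sketch of the kind of dose log and reminder logic described above; the class and function names are hypothetical, the presence-sensor callback is assumed, and this is not the thesis's firmware.

```python
# Minimal sketch (hypothetical, not the thesis's firmware): logging pill
# presence per compartment and deciding when to raise a reminder alarm.
from dataclasses import dataclass, field
from datetime import datetime, time
from typing import List

@dataclass
class DoseEvent:
    compartment: int
    taken_at: datetime       # time the pill was detected as removed
    scheduled_for: time      # dose time the compartment was assigned to

@dataclass
class PillboxLog:
    events: List[DoseEvent] = field(default_factory=list)

    def record_removal(self, compartment: int, scheduled_for: time) -> None:
        """Called when the presence sensor reports the compartment emptied."""
        self.events.append(DoseEvent(compartment, datetime.now(), scheduled_for))

    def needs_reminder(self, compartment: int, scheduled_for: time,
                       now: datetime, grace_minutes: int = 30) -> bool:
        """True if the scheduled dose has not been taken within the grace period."""
        taken = any(e.compartment == compartment and e.scheduled_for == scheduled_for
                    for e in self.events)
        overdue = (now.hour * 60 + now.minute) - (scheduled_for.hour * 60 + scheduled_for.minute)
        return (not taken) and overdue > grace_minutes

# Example: compartment 2 is scheduled for 08:00 and is still full at 08:45.
log = PillboxLog()
print(log.needs_reminder(2, time(8, 0), datetime(2011, 5, 3, 8, 45)))  # True
```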
Стилі APA, Harvard, Vancouver, ISO та ін.
28

Monu, Ruban. "Design and implementation of a basic laboratory information system for resource-limited settings." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34792.

Повний текст джерела
Анотація:
Basic Laboratory Information System (BLIS) is a joint initiative of C4G @ Georgia Tech, the Centers for Disease Control and Prevention (CDC) and Ministries of Health in several countries in Africa. The vast majority of health laboratories in Africa, engaged in routinely testing samples drawn from patients (for HIV, malaria etc.), have been using non-standardized paper logs and manual entries for keeping track of patients, test samples and results. Besides the obvious burden of tedious record-keeping, these methods increase the chances of errors due to transcription and mismatches, making it difficult to track patient history or view critical population-wide data. In 2008, PEPFAR (the United States President's Emergency Plan for AIDS Relief) together with the CDC was reauthorized with a $48 billion budget over five years to combat HIV/AIDS, tuberculosis, and malaria. The focus of PEPFAR has shifted from rapid scale-up to the quality and reliability of the clinical health programs and having an effective laboratory management system is one of its goals. C4G BLIS is a robust, customizable and easy-to-use system that keeps track of patients, samples, results, lab workflow and reports. It is meant to be an effective and sustainable enhancement to manual logs and paper-based approaches. The system is designed to work in resource-constrained laboratories with limited IT equipment and across sites with good, intermittent or no internet availability. With varied practices, workflow and terminology being followed across laboratories in various African countries, the system has been developed to enable each laboratory or country to customize and configure the system in a way that suits them best. We describe various aspects of BLIS including its flexible database schema design, configurable reports and language settings, end-user customizability and development model for rapid incorporation of user feedback. Through BLIS, we aim to demonstrate a sustainable ICT solution brought about by the early and constant involvement of the target laboratory staff and technicians, identifying their short- and long-term needs, and ensuring that the system can match these needs. We will present preliminary evaluation results from laboratories in Cameroon, Ghana, Tanzania and Uganda.
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Ström, Simon, and Ali Qhorbani. "Automation of the design process of printed circuit boards : Determining minimum distance required by auto-routing software." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-251925.

Повний текст джерела
Анотація:
This thesis project aims to create an overview of new technologies in printed circuit board manufacturing which, when automated, could become part of an Industry 4.0 production flow. Potential design limits imposed by these new technologies are then applied in the creation of a minimum-distance estimation function. The intended purpose of this function is to correctly estimate the minimum distance required for the auto-routing software FreeRouting to successfully route between two components. This is achieved with a brute-force approach that progressively decreases the distance between components, using bisection to find the minimum distance at which the auto-routing software can still successfully route a specific design. Using the results of this brute-force search, a couple of linear functions based on different base designs are created and then used to implement the minimum-distance function. The minimum-distance estimation function is intended to be used to enforce limits on how close components can be placed to each other in a printed circuit board design tool whose purpose is to enable people with little knowledge of printed circuit boards to realize their design ideas.
Detta examensarbete ämnar skapa en överblick av nya tekniker inom mönsterkorts-tillverkning som när de automatiseras skulle kunna bli en del av ett Industri 4.0 produktionsflöde. Eventuella designbegränsningar som uppstår till följd av dessa tekniker kommer sedan appliceras i skapningsprocessen av en minsta avståndsfunktion. Syftet med denna funktion är att korrekt uppskatta det minimala avståndet som krävs för att auto-routing mjukvaran FreeRouting ska kunna dra ledningar mellan två komponenter. Detta görs genom en brute-force attackvinkel där avståndet mellan komponenter fortsätter minskas med bisektionsmetoden tills ett minsta avstånd hittats där auto-routing mjukvaran fortfarande kan dra ledningar för en specifik design. Genom användande av resultaten från denna brute-force attack skapas sedan ett par linjära funktioner baserade på olika bas-designer och dessa används sedan för att implementera minsta avståndsfunktionen. Denna minsta avståndet-funktion är sedan ämnad att implementeras som begränsningar för hur nära komponenter kan placeras varandra i ett program för design av mönsterkort vars syfte är att möjliggöra folk utan kunskaper inom mönsterkortsdesign att ändå kunna realisera sina designidéer.
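A minimal sketch of the bisection search described in the abstract above; routes_successfully is a hypothetical predicate standing in for an invocation of the auto-router (e.g. FreeRouting) on a test design, not part of any real API.

```python
# Minimal sketch (assumption: routes_successfully(spacing) wraps a call to the
# auto-router on a generated test design and reports success/failure).
def minimum_routable_spacing(routes_successfully,
                             lo: float, hi: float,
                             tol: float = 0.05) -> float:
    """Bisection search for the smallest component spacing that still routes.

    lo is assumed to fail and hi to succeed; the interval is halved until its
    width drops below tol (e.g. millimetres).
    """
    if not routes_successfully(hi):
        raise ValueError("upper bound does not route; widen the interval")
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if routes_successfully(mid):
            hi = mid          # routing still succeeds, try a tighter spacing
        else:
            lo = mid          # routing fails, back off
    return hi

# Toy stand-in for the router: pretend anything above 1.27 mm routes.
print(minimum_routable_spacing(lambda s: s >= 1.27, lo=0.0, hi=5.0))
```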
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Brusamarello, Lucas. "Modeling and simulation of device variability and reliability at the electrical level." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/65634.

Повний текст джерела
Анотація:
O efeito das variações intrínsecas afetando parâmetros elétricos de circuitos fabricados com tecnologia CMOS de escala nanométrica apresenta novos desafios para o yield de circuitos integrados. Este trabalho apresenta modelos para representar variações físicas que afetam transistores projetados em escala sub-micrônica e metodologias computacionalmente eficientes para simular estes dispositivos utilizando ferramentas de Electronic Design Automation (EDA). O trabalho apresenta uma investigação sobre o estado-da-arte de modelos para variabilidade em nível de simulação de transistor. Modelos de variações no processo de fabricação (RDF, LER, etc) e confiabilidade (NBTI, RTS, etc) são investigados e um novo modelo estatístico para a simulação de Random Telegraph Signal (RTS) e Bias Temperature Instability (BTI) para circuitos digitais é proposta. A partir desses modelos de dispositivo, o trabalho propõe modelos eficientes para analisar a propagação desses fenômenos para o nível de circuito através de simulação. As simulações focam no impacto de variabilidade em três diferentes aspectos do projeto de circuitos integrados digitais: caracterização de biblioteca de células, análise de violações de tempo de hold e células SRAM. Monte Carlo é a técnica mais conhecida e mais simples para simular o impacto da variabilidade para o nível elétrico do circuito. Este trabalho emprega Monte Carlo para a análise do skew em redes de distribuição do sinal de relógio e em caracterização de células SRAM considerando RTS. Contudo, simulações Monte Carlo exigem tempo de execução elevado. A fim de acelerar a análise do impacto de variabilidade em biblioteca de células este trabalho apresenta duas alternativas aMonte Carlo: 1) propagação de erros usando aproximação linear de primeira ordem e 2)Metodologia de Superfície de Resposta (RSM). As técnicas são validados usando circuitos de nível comercial, como a rede de clock de um chip comercial utilizando a tecnologia de 90nm e uma biblioteca de células usando um nó tecnológico de 32nm.
In nanometer-scale complementary metal-oxide-semiconductor (CMOS) technologies, parameter variations pose a challenge for the design of high-yield integrated circuits. This work presents models developed to represent physical variations affecting Deep-Submicron (DSM) transistors, and computationally efficient methodologies for simulating these devices using Electronic Design Automation (EDA) tools. An investigation of the state of the art in computer models and methodologies for simulating transistor variability is performed. Modeling of process variability and aging is investigated, and a new statistical model for the simulation of Random Telegraph Signal (RTS) in digital circuits is proposed. The work then focuses on methodologies for simulating these models at the circuit level. The simulations focus on the impact of variability on three relevant aspects of digital integrated circuit design: library characterization, analysis of hold time violations, and Static Random Access Memory (SRAM) cells. Monte Carlo is regarded as the "golden reference" technique to simulate the impact of process variability at the circuit level. This work employs Monte Carlo for the analysis of hold time and for SRAM characterization. However, Monte Carlo can be extremely time consuming. In order to speed up variability analysis, this work presents linear sensitivity analysis and Response Surface Methodology (RSM) as substitutes for Monte Carlo simulations in library characterization. The techniques are validated using production-level circuits, such as the clock network of a commercial chip using a 90nm technology node and a cell library using a state-of-the-art 32nm technology node.
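The trade-off between Monte Carlo and first-order (linear sensitivity) propagation mentioned above can be illustrated on a toy delay model; the model and all numbers below are purely illustrative stand-ins for an electrical simulation.

```python
# Minimal sketch (illustrative only): comparing Monte Carlo with first-order
# sensitivity propagation of parameter variability through a toy delay model.
import numpy as np

rng = np.random.default_rng(0)

def gate_delay(vth, leff):
    """Toy delay model; a real flow would call an electrical simulator here."""
    return 1.0 + 2.0 * vth + 0.5 * leff + 0.3 * vth * leff

vth0, leff0 = 0.30, 1.0              # nominal parameters
sigma_vth, sigma_leff = 0.02, 0.05   # assumed process spreads

# Monte Carlo reference ("golden") estimate of the delay spread.
vth = rng.normal(vth0, sigma_vth, 100_000)
leff = rng.normal(leff0, sigma_leff, 100_000)
mc_sigma = gate_delay(vth, leff).std()

# First-order propagation: sigma^2 = sum_i (dD/dp_i)^2 * sigma_i^2,
# with derivatives estimated by finite differences at the nominal point.
h = 1e-4
d_vth = (gate_delay(vth0 + h, leff0) - gate_delay(vth0 - h, leff0)) / (2 * h)
d_leff = (gate_delay(vth0, leff0 + h) - gate_delay(vth0, leff0 - h)) / (2 * h)
lin_sigma = np.sqrt((d_vth * sigma_vth) ** 2 + (d_leff * sigma_leff) ** 2)

print(f"Monte Carlo sigma: {mc_sigma:.4f}, linear sigma: {lin_sigma:.4f}")
```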
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Indrusiak, Leandro Soares. "A Framework supporting collaboration on the distributed design of integrated systems." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2003. http://hdl.handle.net/10183/6682.

Повний текст джерела
Анотація:
O trabalho de pesquisa apresentado nesta tese tem por objetivo apoiar o projeto distribuído de sistemas integrados, considerando especificamente a necessidade de interação colaborativa entre os projetistas. O trabalho enfatiza particularmente alguns problemas que foram considerados apenas marginalmente em abordagens anteriores, como a abstração da distribuição em rede dos recursos de automação de projeto, a possibilidade de interação síncrona e assíncrona entre projetistas e o suporte a modelos extensíveis de dados de projeto. Tais problemas requerem uma infra-estrutura de software significativamente complexa, pois possíveis soluções envolvem diversos módulos, desde interfaces com o usuário até bancos de dados e middleware. Para construir tal infra-estrutura, várias técnicas de engenharia foram empregadas e algumas soluções originais foram desenvolvidas. A idéia central da solução proposta é baseada no emprego conjunto de duas tecnologias homônimas: CAD Frameworks (ambientes integrados de apoio ao projeto) e frameworks orientados a objeto. O primeiro conceito foi criado no final da década de 80 na área de automação de projeto de sistemas eletrônicos e define uma arquitetura de software em níveis, voltada ao apoio a desenvolvedores de ferramentas de projeto, administradores de ambientes de projeto e projetistas. O segundo, desenvolvido na última década na área de engenharia de software, é um modelo para arquiteturas de software visando o desenvolvimento de sub-sistemas reusáveis de software orientado a objeto. No presente trabalho, propõe-se a criação de um framework orientado a objetos que inclui conjuntos extensíveis de primitivas de dados de projeto bem como de blocos para a construção de ferramentas de CAD. Esse framework orientado a objeto é agregado a um CAD Framework, onde ele passa a desempenhar funções tipicamente encontradas em tal ambiente, tais como representação e administração de dados de projeto, versionamento, interface com usuário, administração de projeto e integração de ferramentas. O CAD Framework implementado dentro do escopo desta tese foi chamado Cave2 e seguiu a clássica arquitetura em níveis apresentada por Barnes, Harrison, Newton e Spickelmier. Durante o projeto e a implementação do Cave2, uma série de avanços em relação as abordagens anteriores foi obtida com a exploração das vantagens advindas do uso de um framework orientado a objetos: - frameworks orientados a objetos são extensíveis por definição, então o mesmo pode ser dito a respeito das implementações dos conjuntos de primitivas de dados de projeto bem como de blocos para a construção de ferramentas de CAD. Isso implica que tanto o modelo de representação de projeto quanto os módulos de software processando tal modelo podem ser atualizados ou adaptados para uma metodologia de projeto específica, e que essas atualizações e adaptações ainda herdarão os aspectos arquiteturais e funcionais implementados nos elementos básicos do framework orientado a objetos; partes do framework orientado a objetos, mas em modelos claramente separados. 
Isso possibilita o uso de várias estratégias para a visualização de um conjunto de dados de 15 projeto, o que dá aos participantes de uma sessão de projeto colaborativo a flexibilidade de escolha individual de estratégia de visualização; - o controle de consistência entre semântica e visualização - uma questão particularmente importante em um ambiente de projeto onde coexistem múltiplas visualizações de cada projeto - também está incluído nas fundações do framework orientado a objetos implementado. Esse mecanismo é genérico o bastante para ser usado também pelas possíveis extensões do modelo de dados de projeto, uma vez que ele é baseado na inversão de controle entre a visualização e a semântica. A visualização recebe a intenção do usuário e propaga esse evento ao modelo da semântica, o qual avalia a possibilidade de uma mudança de estado. Se positivo, ele dispara a mudança de estado em ambos os modelos de visualização e semântica. A abordagem proposta nesta tese usa tal inversão de controle para incluir um nível adicional de processamento entre a semântica e a visualização, visando o controle de consistência nos casos de múltiplas visualizações; indisponibilidade de conexão entre elas; - o uso de objetos de proxy aumentou significativamente o nível de abstração da integração de recursos de automação de projeto, pois tanto ferramentas e serviços remotos quanto os instalados localmente são acessados através de chamadas de métodos em um objeto local. A conexão aos serviços e ferramentas remotos é obtida através de um protocolo de look-up, abstraíndo completamente a localização de tais recursos na rede e permitindo a adição e remoção em tempo de execução; - o CAD Framework foi implementato completamente usando a tecnologia Java, usando dessa forma a Java Virtual Machine como intermediário entre o sistema operacional e o CAD Framework, garantindo dessa forma a independência de plataforma. Todas as contribuições listadas anteriormente contribuiram com o aumento do nível de abstração da distribuição de recursos de automação de projeto e também apresentaram um novo paradigma para a interação remota entre projetistas. O CAD Framework no qual tais contribuições foram aplicadas é capaz de suportar colaboração de granularidade fina baseada em eventos, onde cada atualização feita por um projetista pode ser propagada para o restante da equipe, mesmo que estejam todos geograficamente distribuídos. Isto pode aumentar a sinergia de grupo entre os projetistas e permitir uma troca mais rica de experiências entre eles, aumentando significativamente o potencial de colaboração quando comparado com abordages baseadas em acesso a arquivos e registros propostas anteriormente. Três estudos de caso diferentes foram realizados para validar a abordagem proposta, cada um deles envolvendo um sub-conjunto das contribuições da presente tese. O primeiro utiliza a arquitetura de distribuição de recursos baseada em proxies para implementar uma plataforma de prototipação usando módulos de hardware reconfigurável. O segundo estende as fundações do framework orientado a objetos visando suportar projeto baseado em interfaces. Essas extensões - primitivas de representação de projeto e partes de ferramentas - são usadas na implementação de uma ferramenta chamada IBlaDe, que permite a criação colaborativa de modelos funcionais e estruturais de sistemas integrados. O terceiro estudo de caso aborda a possibilidade de integração de metadados multimídia ao modelo de dados de projeto. 
Essa possibilidade é explorada no contexto de uma plataforma online de educação e treinamento.
The work described in this thesis aims to support the distributed design of integrated systems and considers specifically the need for collaborative interaction among designers. Particular emphasis was given to issues which were only marginally considered in previous approaches, such as the abstraction of the distribution of design automation resources over the network, the possibility of both synchronous and asynchronous interaction among designers and the support for extensible design data models. Such issues demand a rather complex software infrastructure, as possible solutions must encompass a wide range of software modules: from user interfaces to middleware to databases. To build such structure, several engineering techniques were employed and some original solutions were devised. The core of the proposed solution is based in the joint application of two homonymic technologies: CAD Frameworks and object-oriented frameworks. The former concept was coined in the late 80's within the electronic design automation community and comprehends a layered software environment which aims to support CAD tool developers, CAD administrators/integrators and designers. The latter, developed during the last decade by the software engineering community, is a software architecture model to build extensible and reusable object-oriented software subsystems. In this work, we proposed to create an object-oriented framework which includes extensible sets of design data primitives and design tool building blocks. Such object-oriented framework is included within a CAD Framework, where it plays important roles on typical CAD Framework services such as design data representation and management, versioning, user interfaces, design management and tool integration. The implemented CAD Framework - named Cave2 - followed the classical layered architecture presented by Barnes, Harrison, Newton and Spickelmier, but the possibilities granted by the use of the object-oriented framework foundations allowed a series of improvements which were not available in previous approaches: - object-oriented frameworks are extensible by design, thus this should be also true regarding the implemented sets of design data primitives and design tool building blocks. This means that both the design representation model and the software modules dealing with it can be upgraded or adapted to a particular design methodology, and that such extensions and adaptations will still inherit the architectural and functional aspects implemented in the object-oriented framework foundation; - the design semantics and the design visualization are both part of the object-oriented framework, but in clearly separated models. This allows for different visualization strategies for a given design data set, which gives collaborating parties the flexibility to choose individual visualization settings; - the control of the consistency between semantics and visualization - a particularly important issue in a design environment with multiple views of a single design - is also included in the foundations of the object-oriented framework. Such mechanism is generic enough to be also used by further extensions of the design data model, as it is based on the inversion of control between view and semantics. The view receives the user input and propagates such event to the semantic model, which evaluates if a state change is possible. If positive, it triggers the change of state of both semantics and view. 
Our approach took advantage of such inversion of control and included a layer between semantics and view to take into account the possibility of multi-view consistency; - to optimize the consistency control mechanism between views and semantics, we propose an event-based approach that captures each discrete interaction of a designer with his/her respective design views. The information about each interaction is encapsulated inside an event object, which may be propagated to the design semantics - and thus to other possible views - according to the consistency policy which is being used. Furthermore, the use of event pools allows for a late synchronization between view and semantics in case of unavailability of a network connection between them; - the use of proxy objects significantly raised the abstraction level of the integration of design automation resources, as either remote or local tools and services are accessed through method calls on a local object. The connection to remote tools and services using a look-up protocol also completely abstracts the network location of such resources, allowing for resource addition and removal during runtime; - the implemented CAD Framework is completely based on Java technology, so it relies on the Java Virtual Machine as the layer which grants independence between the CAD Framework and the operating system. All such improvements contributed to a higher level of abstraction in the distribution of design automation resources and also introduced a new paradigm for the remote interaction between designers. The resulting CAD Framework is able to support fine-grained collaboration based on events, so every single design update performed by a designer can be propagated to the rest of the design team regardless of their location in the distributed environment. This can increase group awareness and allow a richer transfer of experiences among designers, significantly improving the collaboration potential when compared to previously proposed file-based or record-based approaches. Three different case studies were conducted to validate the proposed approach, each one focusing on a subset of the contributions of this thesis. The first one uses the proxy-based resource distribution architecture to implement a prototyping platform using reconfigurable hardware modules. The second one extends the foundations of the implemented object-oriented framework to support interface-based design. Such extensions - design representation primitives and tool blocks - are used to implement a design entry tool named IBlaDe, which allows the collaborative creation of functional and structural models of integrated systems. The third case study regards the possibility of integrating multimedia metadata into the design data model. Such possibility is explored in the context of an online educational and training platform.
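A minimal sketch, in Python rather than Java, of the inversion of control between views and semantics described above; the class names and the consistency rule are hypothetical and do not reproduce the Cave2 implementation.

```python
# Minimal sketch (illustrative): views forward user intents to the semantic
# model, which validates the change and then refreshes every registered view.
class SemanticModel:
    def __init__(self):
        self.value = 0
        self.views = []

    def attach(self, view):
        self.views.append(view)

    def request_change(self, new_value):
        """Called by a view; the semantic model decides and propagates."""
        if new_value < 0:          # toy consistency rule
            return False
        self.value = new_value
        for v in self.views:       # keep all views consistent
            v.refresh(self.value)
        return True

class View:
    def __init__(self, name, model):
        self.name = name
        self.model = model
        model.attach(self)

    def on_user_edit(self, new_value):
        self.model.request_change(new_value)   # intent goes to semantics first

    def refresh(self, value):
        print(f"{self.name} now shows {value}")

model = SemanticModel()
schematic, tree = View("schematic", model), View("tree", model)
schematic.on_user_edit(42)   # both views refresh through the semantic model
```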
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Zha, Wenwei. "Facilitating FPGA Reconfiguration through Low-level Manipulation." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/46787.

Повний текст джерела
Анотація:
FPGA reconfiguration consists of recompiling a design and then updating the FPGA configuration accordingly. Traditionally, FPGA design compilation follows the approach used for compiling hardware for high performance, which requires a long computation time. Compiling a design efficiently thus becomes the bottleneck of FPGA reconfiguration. It is promising to apply techniques or concepts from software to facilitate FPGA reconfiguration. This dissertation explores such an idea by utilizing three types of low-level manipulation of FPGA logic and routing resources, i.e. relocating, mapping/placing, and routing. It implements an FMA technique for "fast reconfiguration". The FMA makes use of the software compilation technique of reusing pre-compiled libraries to explicitly reduce FPGA compilation time. Based on the software concept of Autonomic Computing, this dissertation proposes to build an Autonomous Adaptive System (AAS) to achieve "self-reconfiguration". An AAS absorbs the computing complexity into itself and compiles the desired change on its own. For routing, an FPGA router is developed. This router is able to route the MCNC benchmark circuits on five Xilinx devices within 0.35 ~ 49.05 seconds. Creating a routing-free sandbox with this router is 1.6 times faster than with OpenPR. The FMA uses relocating to load pre-compiled modules and uses routing to stitch the modules. It is an essential component of TFlow, which achieves an 8 ~ 39 times speedup compared to the traditional ISE flow on various test cases. The core part of an AAS is a lightweight embedded version of utilities for managing the system's hardware functionality. Two major utilities are mapping/placing and routing. This dissertation builds a proof-of-concept AAS with a universal UART transmitter. The system autonomously instantiates the circuit for generating the desired baud rate to adapt to the requirement of a remote UART receiver.
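As background for the routing work described above, a Lee-style breadth-first maze router on a toy grid is sketched below; it is vastly simpler than an FPGA router over real device resources and is only meant to illustrate the basic search.

```python
# Minimal sketch (illustrative): Lee-style BFS maze routing on a grid where
# cells marked 1 are blocked; returns a shortest path of cells or None.
from collections import deque

def maze_route(grid, src, dst):
    rows, cols = len(grid), len(grid[0])
    prev = {src: None}
    frontier = deque([src])
    while frontier:
        cell = frontier.popleft()
        if cell == dst:
            path = []
            while cell is not None:          # walk back through predecessors
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # unroutable

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(maze_route(grid, (0, 0), (2, 0)))
```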
Ph. D.
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Ebadat, Afrooz. "Experiment Design for Closed-loop System Identification with Applications in Model Predictive Control and Occupancy Estimation." Doctoral thesis, KTH, Reglerteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209021.

Повний текст джерела
Анотація:
The objective of this thesis is to develop algorithms for application-oriented input design. This procedure takes the model application into account when designing experiments for system identification. The thesis is divided into two parts. The first part considers the theory of application-oriented input design, with special attention to Model Predictive Control (MPC). We start by studying how to find a convex approximation of the set of models that result in acceptable control performance, using analytical methods, when controllers with no closed-form control law, e.g., MPC, are employed. The application-oriented input design is formulated in the time domain to enable the handling of signal constraints. The framework is extended to closed-loop systems, where two cases are considered: when the plant is controlled by a general but known controller, and the case of MPC. To this end, an external stationary signal is designed via graph theory. Different sources of uncertainty in application-oriented input design are investigated, and a robust application-oriented input design framework is proposed. The second part of this thesis is devoted to the problem of estimating the number of occupants based on the information available to HVAC systems in buildings. The occupancy estimation is first formulated as a two-tier problem. In the first tier, the room dynamics are identified using temporary measurements of occupancy. In the second tier, the identified model is employed to formulate the problem as a fused-lasso problem. The proposed method is further developed into a multi-room estimator using a physics-based model. However, since it is not always possible to collect measurements of occupancy, we proceed by proposing a blind identification algorithm that estimates the room dynamics and the occupancy simultaneously. Finally, the application-oriented input design framework is employed to collect data that is informative enough for occupancy estimation purposes.
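A minimal sketch of a fused-lasso style estimate in the spirit described above, using cvxpy on synthetic data; the signal model, penalty weight, and non-negativity constraint are illustrative assumptions, not the thesis's exact formulation.

```python
# Minimal sketch (illustrative): fused-lasso style occupancy estimation.
# y is a toy sensor-derived occupancy proxy; the penalty on successive
# differences favours piecewise-constant estimates.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
true_occ = np.concatenate([np.zeros(20), 4 * np.ones(30), 2 * np.ones(25)])
y = true_occ + rng.normal(0, 0.7, true_occ.size)   # noisy measurements

n = y.size
D = np.eye(n, k=1)[: n - 1] - np.eye(n)[: n - 1]   # first-difference matrix
x = cp.Variable(n)
lam = 3.0                                          # assumed penalty weight

objective = cp.Minimize(cp.sum_squares(y - x) + lam * cp.norm1(D @ x))
problem = cp.Problem(objective, [x >= 0])
problem.solve()

print(np.round(x.value[:10], 2))   # estimate stays near zero in the empty period
```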


Стилі APA, Harvard, Vancouver, ISO та ін.
34

CARUSO, Marco. "Computationally Efficient Innovative Techniques for the Design-Oriented Simulation of Free-Running and Driven Microwave Oscillators." Doctoral thesis, Università degli Studi di Palermo, 2014. http://hdl.handle.net/10447/90792.

Повний текст джерела
Анотація:
Analysis techniques for injection-locked oscillators/amplifiers (ILOs) can be broadly divided into two classes. To the first class belong methods with a strong and rigorous theoretical basis, which can be applied to rather general circuits and systems but are very cumbersome and/or time-consuming to apply. To the second class belong methods which are very simple and fast to apply, but which either lack validity/accuracy or are applicable only to very simple or particular cases. In this thesis, a novel method is proposed which aims at combining the rigorousness and broad applicability of the first class of analysis techniques with the simplicity and computational efficiency of the second class. The method relies on the combination of perturbation-refined techniques with a fundamental-frequency system approach in the dynamical complex envelope domain. This permits deriving an approximate, but first-order exact, differential model of the phase-locked system usable for the steady-state, transient and stability analysis of ILOs belonging to the rather broad (and rigorously identified) class of nonlinear oscillators considered. The hybrid (analytical-numerical) nature of the formulation developed is suited for coping with all ILO design steps, from initial dimensioning (exploiting, e.g., the simplified semi-analytical expressions stemming from a low-level injection assumption) to accurate prediction (and fine-tuning, if required) of critical performances under high-injection signal operation. The proposed application examples, covering realistically modeled low- and high-order ILOs of both reflection and transmission type, illustrate the importance of having at one's disposal a simulation/design tool that fully accounts for the deviations, appreciable for instance in the locking bandwidth of high-frequency circuits, with respect to the simplified treatments usually applied, for quick estimates, in ILO design optimization procedures.
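As background, the classical first-order description of injection locking (Adler's equation) conveys the basic locking-range trade-off; the envelope-domain model developed in the thesis is more general, so this is only context, with the usual symbols (oscillator frequency offset, quality factor, injected and oscillation current amplitudes):

```latex
\frac{d\varphi}{dt} \;=\; \Delta\omega_0 \;-\; \frac{\omega_0}{2Q}\,\frac{I_\mathrm{inj}}{I_\mathrm{osc}}\,\sin\varphi ,
\qquad
|\Delta\omega_0| \;\le\; \frac{\omega_0}{2Q}\,\frac{I_\mathrm{inj}}{I_\mathrm{osc}}
\quad\text{(locking range)} .
```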
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Rangavajjula, Santosh Bharadwaj. "Design of information tree for support related queries: Axis Communications AB : An exploratory research study in debug suggestions with machine learning at Axis Communications, Lund." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16826.

Повний текст джерела
Анотація:
Context: In today's world, we have access to more data than at any time in the past, with more and more data coming from smartphones, sensor networks, and business processes. But most of this data is meaningless if it is not properly formatted and utilized. Traditionally, in service support teams, issues raised by customers are processed locally, turned into reports, and sent along the support line for resolution. The resolution of an issue then depends on the expertise of the technicians or developers and their experience in handling similar issues, which limits the size, speed, and scale of the problems that can be resolved. One solution to this problem is to make relevant information, tailored to the issue under investigation, easily available. Objectives: The focus of the thesis is to improve the turnaround time of customer queries using recommendations, and to evaluate this by defining metrics in comparison with the existing workflow. As Artificial Intelligence applications can cover a broad spectrum, we confine the scope to software service and issue tracking systems. Software support is a complicated process, as it involves various stakeholders with conflicting interests. In this work we are primarily interested in evaluating, customizing, and comparing different AI solutions specifically in the customer support space. Methods: The thesis work has been carried out through controlled experiments using different datasets and machine learning models. Results: We classified Axis data and Bugzilla (Eclipse) data using Decision Trees, K Nearest Neighbors, Neural Networks, and Naive Bayes, and evaluated them using precision, recall rate, and F-score. K Nearest Neighbors achieved a precision of 0.11 and a recall of 0.11, Decision Trees a precision of 0.11 and a recall of 0.11, Neural Networks a precision of 0.13 and a recall of 0.11, and Naive Bayes a precision of 0.05 and a recall of 0.11. The results show too many false positives and true negatives for the classifiers to be used for recommendations. Conclusions: In this thesis work, we have reviewed and synthesized 33 research articles. Existing systems in place and the current state of the art are described. A debug suggestion tool was developed in Python with scikit-learn. Experiments with different machine learning models were run with the tool, and the highest scores of 0.13 (precision), 0.10 (F-score), and 0.11 (recall) were observed with an MLP neural network classifier.
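A minimal sketch of the classify-and-score loop described above, on synthetic data standing in for ticket features; it uses scikit-learn's MLP classifier and macro-averaged precision/recall/F-score, and is not the tool developed in the thesis.

```python
# Minimal sketch (illustrative, synthetic data): train a classifier on
# feature vectors and report precision / recall / F-score.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_recall_fscore_support

# Synthetic stand-in for features extracted from support tickets.
X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

precision, recall, f1, _ = precision_recall_fscore_support(
    y_test, clf.predict(X_test), average="macro")
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```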
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Coelho, Ferreira Paulo Cesar. "Conception d'un service d'archivage multimedia dans un environnement bureautique ouvert." Grenoble 2 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb376040110.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Ochoa, Ruiz Gilberto. "A high-level methodology for automatically generating dynamically reconfigurable systems using IP-XACT and the UML MARTE profile." Phd thesis, Université de Bourgogne, 2013. http://tel.archives-ouvertes.fr/tel-00932118.

Повний текст джерела
Анотація:
The main contribution of this thesis consists in proposing and developing a Model-Driven Engineering (MDE) framework, in tandem with a component-based approach, for facilitating the design and implementation of Dynamic Partially Reconfigurable (DPR) Systems-on-Chip. The proposed methodology has been constructed around the Metadata-based Composition Framework paradigm, and is based on common standards such as UML MARTE and the IEEE IP-XACT standard, an XML representation used for storing metadata about the IPs to be reused and the platforms to be obtained, at high levels of abstraction. In fact, a componentizing process enables us to reuse the IP blocks, in UML MARTE, by wrapping them with PLB (static IPs) and proprietary (DPR blocks) interfaces. This is attained by reflecting the associated IP metadata to IP-XACT descriptions, and then to UML MARTE templates (IP reuse). Subsequently, these IP templates are used for composing a DPR model that can be exploited to create a Xilinx Platform Studio FPGA design, through model transformations. The IP reflection and system generation chains were developed using Sodius MDWorkbench, an MDE tool conceived for the creation and manipulation of models and their meta-models, as well as for the definition and execution of the associated transformation rules.
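A minimal, heavily simplified sketch of emitting an IP-XACT-like component description carrying the vendor/library/name/version (VLNV) identity used to reference reusable IPs; real IEEE 1685 documents use the official schema and namespaces and contain far more (bus interfaces, memory maps, file sets), so the element names below are illustrative only.

```python
# Minimal sketch (hypothetical, simplified): an IP-XACT-like component stub
# holding only the VLNV identity; not a schema-valid IEEE 1685 document.
import xml.etree.ElementTree as ET

def make_component(vendor, library, name, version):
    comp = ET.Element("component")
    for tag, text in (("vendor", vendor), ("library", library),
                      ("name", name), ("version", version)):
        ET.SubElement(comp, tag).text = text
    return comp

comp = make_component("acme", "dpr_blocks", "fir_filter", "1.0")
print(ET.tostring(comp, encoding="unicode"))
# e.g. <component><vendor>acme</vendor>...<version>1.0</version></component>
```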
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Ferrere, Thomas. "Assertions and measurements for mixed-signal simulation." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM050.

Повний текст джерела
Анотація:
Cette thèse porte sur le monitorage des simulations de circuits en signaux mixtes. Dans le domaine de la vérification de matériel, l'utilisation de formalismes déclaratifs pour la specification, dans le cadre de la validation par simulation, s'est installée dans la pratique courante. Cependant, le manque de fonctionnalités visant à spécifier les comportements asynchrones, ou l'intégration insuffisante des résultats de la vérification, rend les language d'assertions et de mesures inopérants pour la vérification de comportements en signaux mixtes. Nous proposons des outils théoriques et pratiques pour la description et le monitorage de ces comportements, qui comportent des aspects à la fois discrets et continus. Pour cela, nous nous appuyons sur des travaux antérieurs portant sur les extensions temps-réel de la logique temporelle et des expressions régulières. Nous décrivons de nouveaux algorithmes pour calculer la distance entre une trace de simulation et une propriété en logique temporelle données. Une nouvelle procédure de diagnostic est conçue pour déboguer efficacement de telles traces. Le monitorage des comportements continus est ensuite étendu à d'autres formes d'assertions basées sur des expressions régulières. Ces expressions constituent la base de notre language de description de mesures, qui permet de définir conjointement la mesure et les intervals temporels sur lesquels cette mesure doit être prise. Nous montrons comment d'autres mesures, déjà mises en œuvre dans les simulateurs analogiques peuvent être importées dans les descriptions digitales. Ceci permet d'étendre vers le domaine en signaux mixtes les approches hiérarchiques utilisées en vérification de circuits digitaux
This thesis is concerned with the monitoring of mixed-signal circuit simulations. In the field of hardware verification, the use of declarative property languages in combination with simulation is now standard practice. However, the lack of features to specify asynchronous behaviors, or the insufficient integration of verification results, makes existing assertion and measurement languages unable to enforce mixed-signal requirements. We propose several theoretical and practical tools for the description and automatic monitoring of such behaviors, which feature both discrete and continuous aspects. For this we build on previous work on real-time extensions of temporal logic and regular expressions. We describe new algorithms to compute the distance from a simulation trace to temporal logic specifications, whose complexity is no higher than that of traditional monitoring. A novel diagnostic procedure is provided in order to efficiently debug such traces. The monitoring of continuous behaviors is then extended to other forms of assertions based on regular expressions. These expressions form the basis of our measurement language, which jointly describes a measure and the patterns over which that measure should be taken. We show how other measurements implemented in analog circuit simulators can be ported to digital descriptions, thereby extending the structured verification approaches used for digital designs toward the mixed-signal domain.
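A minimal sketch of quantitative (robustness-based) monitoring in the spirit described above; the two functions implement only toy "always" and "eventually" operators over a sampled trace and are not the thesis's algorithms. The threshold values and the trace are illustrative.

```python
# Minimal sketch (illustrative): robustness of simple predicates over a
# sampled trace. For x < thr the pointwise margin is thr - x; minima and
# maxima over time windows give "always" and "eventually" robustness.
import numpy as np

def robustness_always_lt(trace, thr):
    """rho of G(x < thr): worst-case margin over the whole trace."""
    return float(np.min(thr - trace))

def robustness_eventually_lt(trace, thr, t, horizon):
    """rho of F_[0,horizon](x < thr) evaluated at sample index t."""
    window = trace[t:t + horizon + 1]
    return float(np.max(thr - window))

t_axis = np.linspace(0, 10, 101)
trace = 1.0 + 0.8 * np.exp(-t_axis) * np.sin(8 * t_axis)   # ringing, settling to 1.0

print(robustness_always_lt(trace, thr=1.6))                 # negative => violated
print(robustness_eventually_lt(trace, thr=1.05, t=0, horizon=60))
```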
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Tröger, Ralph. "Supply Chain Event Management – Bedarf, Systemarchitektur und Nutzen aus Perspektive fokaler Unternehmen der Modeindustrie." Doctoral thesis, Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-155014.

Повний текст джерела
Анотація:
Supply Chain Event Management (SCEM) is a sub-discipline of supply chain management and offers companies a starting point for optimising logistics performance and costs by reacting early to critical exception events in the value chain. Owing to conditions such as global logistics structures, a large article variety, and volatile business relationships, the fashion industry is among the sectors that are particularly vulnerable to critical disruption events. Against this background, the present dissertation first reviews the essential fundamentals and then examines to what extent there actually is a need for SCEM systems in the fashion industry. Building on this, and after presenting existing SCEM architecture concepts, it describes design options for a system architecture based on the design principles of service orientation. In this context, SCEM-relevant business services are also identified. The advantages of a service-oriented design are illustrated in detail using the EPCIS (EPC Information Services) specification. The work is rounded off by a consideration of the benefit potential of SCEM systems. After presenting approaches suitable for determining this benefit, the benefit is demonstrated using a practical example and, together with the results of a literature review, consolidated into a set of SCEM benefit effects. It is also examined which additional advantages a service-oriented architecture offers companies. The conclusion summarises the main findings of the work and, in an outlook, discusses both the relevance of the results for mastering future challenges and the starting points they provide for subsequent research.
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Kuo, Zong-Tai, and 郭宗泰. "A Study of Job Scheduling Mechanism for Electronic Design Automation (EDA) Software." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/86496268405689414892.

Повний текст джерела
Анотація:
Master's degree
Chung Hua University
Master's Program, Department of Computer Science and Information Engineering
Academic year 103 (ROC calendar)
In an IC design environment, the need for computation resources continually increases with the advancement of wafer manufacturing processes. To improve computational efficiency, integrating hardware and software resources becomes critically important: hardware resources such as workstations, servers, and networks must be combined with software resources that include EDA tools and EDA licenses. To integrate these resources effectively, we developed a Resource Integration System (RIS) based on the Open Grid Scheduler (OGS). The RIS monitors all system resources, integrates EDA license resources through backend processing, and provides an automated task scheduler. To improve scheduling efficiency, we combined the Priority Scheduling (PS) algorithm with the Shortest-Job-First (SJF) algorithm to reduce task waiting time and the number of reschedules, and thus improve the effectiveness of the entire computing environment. To verify the efficiency of the task scheduling, we ran experiments in a real computational environment. Based on the collected data, the PS plus SJF approach reduces the average task waiting time by 26 ~ 41% and the task rescheduling time by 22 ~ 36% compared with the First-Come-First-Serve (FCFS) algorithm.
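A minimal sketch of the scheduling comparison described above, for a batch of jobs all released at time zero; the job data and priority convention are illustrative assumptions, not measurements from the RIS.

```python
# Minimal sketch (illustrative): average waiting time under FCFS vs
# priority-then-shortest-job-first ordering for jobs released at time zero.
def average_wait(jobs):
    """jobs: list of runtimes in execution order; returns mean waiting time."""
    wait, clock = 0.0, 0.0
    for runtime in jobs:
        wait += clock
        clock += runtime
    return wait / len(jobs)

# (runtime_hours, priority) pairs; lower priority value = more urgent.
jobs = [(6.0, 2), (1.0, 2), (3.0, 1), (0.5, 2), (2.0, 1)]

fcfs_order = [r for r, _ in jobs]
ps_sjf_order = [r for r, _ in sorted(jobs, key=lambda j: (j[1], j[0]))]

print(f"FCFS average wait:   {average_wait(fcfs_order):.2f} h")
print(f"PS+SJF average wait: {average_wait(ps_sjf_order):.2f} h")
```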
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Mirsaeedi, Minoo. "EDA Solutions for Double Patterning Lithography." Thesis, 2012. http://hdl.handle.net/10012/6936.

Повний текст джерела
Анотація:
Expanding optical lithography to the 32-nm node and beyond is impossible using existing single-exposure systems. As such, double patterning lithography (DPL) is the most promising option to generate the required lithography resolution, where the target layout is printed with two separate imaging processes. Among the different DPL techniques, the litho-etch-litho-etch (LELE) and self-aligned double patterning (SADP) methods are the most popular ones, which apply two complete exposure lithography steps and an exposure lithography followed by a chemical imaging process, respectively. To realize double patterning lithography, patterns located within a sub-resolution distance should be assigned to either of the imaging sub-processes, a step known as layout decomposition. To achieve the optimal design yield, the layout decomposition problem should be solved with respect to the characteristics and limitations of the applied DPL method. For example, although patterns can be split between the two sub-masks in the LELE method to generate conflict-free masks, this pattern split is not favorable due to its sensitivity to lithography imperfections such as overlay error. On the other hand, pattern split is forbidden in the SADP method because it results in non-resolvable gap failures in the final image. In addition to the functional yield, layout decomposition affects the parametric yield of the designs printed by double patterning. To deal with both the functional and parametric challenges of DPL in dense and large layouts, EDA solutions for DPL are addressed in this thesis. To this end, we proposed a statistical method to determine the interconnect width and space for the LELE method under the effect of random overlay error. In addition to yield maximization and achieving a near-optimal trade-off between different parametric requirements, the proposed method provides valuable insight into the trend of parametric and functional yields in future technology nodes. Next, we focused on self-aligned double patterning and proposed layout design and decomposition methods to provide SADP-compatible layouts and litho-friendly decomposed layouts. Specifically, a grid-based ILP formulation of SADP decomposition was proposed to avoid decomposition conflicts and improve the overall printability of layout patterns. To overcome the limited applicability of this ILP-based method, which handles only fully-decomposable layouts, a partitioning-based method is also proposed, which is moreover faster than the grid-based ILP decomposition. In addition, an A∗-based SADP-aware detailed routing method was proposed which performs detailed routing and layout decomposition simultaneously to avoid litho-limited layout configurations. The proposed router preserves the uniformity of pattern density between the two sub-masks of the SADP process. We finally extended our decomposition method from double patterning to triple patterning and formulated SATP decomposition as an integer linear program. In addition to conventional minimum width and spacing constraints, the proposed decomposition method minimizes the mandrel-trim co-defined edges and maximizes the layout features printed by structural spacers to achieve minimum pattern distortion. This thesis is one of the earliest research efforts investigating the concept of litho-friendliness in SADP-aware layout design and decomposition. As demonstrated by experimental results, the proposed methods advance prior state-of-the-art algorithms in various respects.
Specifically, the suggested SADP decomposition methods improve the total length of sensitive trim edges, the total EPE, and the overall printability of the attempted designs. Additionally, our SADP-aware detailed routing method provides SADP-decomposable layouts in which trim patterns are highly robust to lithography imperfections. The experimental results for SATP decomposition show that the total length of overlay-sensitive layout patterns, the total EPE, and the overall printability of the attempted designs are also improved considerably by the proposed decomposition method. Additionally, the methods in this PhD thesis reveal several insights into upcoming technology nodes that can be used to improve their manufacturability.
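A minimal sketch of LELE-style decomposition viewed as 2-colouring of a conflict graph, which conveys why some layouts are natively non-decomposable; the grid-based ILP and SADP/SATP formulations developed in the thesis are considerably richer.

```python
# Minimal sketch (illustrative): BFS 2-colouring of a conflict graph in which
# an edge joins two features closer than the single-exposure resolution limit.
from collections import deque

def decompose(num_features, conflicts):
    """conflicts: list of (i, j) pairs that must go on different masks."""
    adj = {i: [] for i in range(num_features)}
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)

    mask = {}
    for start in range(num_features):
        if start in mask:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in mask:
                    mask[v] = 1 - mask[u]
                    queue.append(v)
                elif mask[v] == mask[u]:
                    return None          # odd cycle: a native conflict
    return mask

print(decompose(4, [(0, 1), (1, 2), (2, 3)]))   # {0: 0, 1: 1, 2: 0, 3: 1}
print(decompose(3, [(0, 1), (1, 2), (2, 0)]))   # None (3-cycle conflict)
```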
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Distefano, Rosario. "Modeling and Simulation of Biological Systems through Electronic Design Automation techniques." Doctoral thesis, 2017. http://hdl.handle.net/11562/963108.

Повний текст джерела
Анотація:
Modeling and simulation of biological systems is a key requirement for integrating in-vitro and in-vivo experimental data. In-silico simulation allows testing different experimental conditions, thus helping in the discovery of the dynamics that regulate the system. These dynamics include errors in cellular information processing that are responsible for diseases such as cancer, autoimmunity, and diabetes, as well as drug effects on the system (Gonçalves, 2013). In this context, modeling approaches can be classified into two categories: quantitative and qualitative models. Quantitative modeling allows for a natural representation of molecular and gene networks and provides the most precise predictions. Nevertheless, the lack of kinetic data (and of quantitative data in general) hampers its use in many situations (Le Novère, 2015). In contrast, qualitative models simplify the biological reality and are often able to reproduce the system behavior. They cannot describe actual concentration levels or realistic time scales, and consequently cannot be used to explain and predict the outcome of biological experiments that yield quantitative data. However, given a biological network consisting of input (e.g., receptors), intermediate, and output (e.g., transcription factors) signals, they allow studying input-output relationships through discrete simulation (Samaga, 2013). Boolean models are attracting increasing interest for reproducing dynamic behaviors, understanding processes, and predicting emerging properties of cellular signaling networks through in-silico experiments. They are emerging as a valid alternative to quantitative approaches (i.e., those based on ordinary differential equations) for exploratory modeling when little is known about reaction kinetics or equilibrium constants in the context of gene expression or signaling. Even though several approaches and software tools have recently been proposed for logic modeling of biological systems, they are limited to specific contexts and lack automation in analyzing biological properties such as complex attractors and molecule vulnerability. This thesis proposes a platform based on Electronic Design Automation (EDA) technologies for qualitative modeling and simulation of biological systems. It aims at overcoming the limitations that affect the most recent qualitative tools.
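A minimal sketch of synchronous Boolean-network simulation and attractor enumeration, the kind of qualitative analysis discussed above; the three-node network and its update rules are invented for illustration and do not represent a real pathway or the thesis's platform.

```python
# Minimal sketch (illustrative): exhaustive synchronous simulation of a tiny
# Boolean network, enumerating its attractors (fixed points and cycles).
from itertools import product

def step(state):
    a, b, c = state
    return (b and not c,      # A activated by B, inhibited by C (toy rule)
            a,                # B follows A
            a or c)           # C is self-sustaining once triggered

def attractors():
    found = set()
    for init in product((False, True), repeat=3):
        seen, state = {}, init
        while state not in seen:          # iterate until a state repeats
            seen[state] = len(seen)
            state = step(state)
        cycle_start = seen[state]
        cycle = tuple(sorted(s for s, i in seen.items() if i >= cycle_start))
        found.add(cycle)
    return found

for att in attractors():
    print(att)
```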
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Γράσσος, Αθανάσιος. "Σχεδίαση τελεστικού ενισχυτή". Thesis, 2010. http://nemertes.lis.upatras.gr/jspui/handle/10889/3997.

Повний текст джерела
Анотація:
Στην παρούσα διπλωματική εργασία ασχοληθήκαμε με την μελέτη, ανάλυση και εξομοίωση του πιο διαδεδομένου αναλογικού κυκλώματος, του Τελεστικού Ενισχυτή. Αρχικά επιχειρήθηκε μια ανάλυση της επιμέρους δομής ενός Τ.Ε, ενώ παράλληλα γίνεται μια παρουσίαση κάποιων βασικών αναλογικών κυκλωμάτων που χρησιμοποιούνται στον σχεδιασμό του. Ακολούθως επεξηγούνται οι βασικές καθώς και οι προηγμένες επιδόσεις και τεχνικά χαρακτηριστικά του Τ.Ε και δίνονται παραδείγματα για την διασαφήνιση των φαινομένων που τις επηρεάζουν καθώς και των τεχνικών βελτίωσής τους. Σε όλες τις εξομοιώσεις χρησιμοποιήθηκαν EDA (Electronic design automation) tools και η όλη προσέγγιση γίνεται με την χρήση της CMOS τεχνολογίας. Τέλος, παρουσιάζονται οι κατευθύνσεις που τείνει να ακολουθεί σήμερα η τεχνολογία των Τ.Ε. καθώς και θέματα που απασχολούν ή και πρόκειται να απασχολήσουν και στο μέλλον τους σχεδιαστές.
In this Diploma Thesis, I studied, analyzed and simulated today's most widely used analog circuit block, the Operational Amplifier. First, an analysis of the basic OpAmp structure is presented, and various analog circuits that are commonly used during the design process of an OpAmp are described. Then, basic as well as more advanced technical characteristics of the OpAmp are explained, and simulation results are presented to illustrate the phenomena and the parameters that affect its performance. All simulations were carried out with EDA (Electronic Design Automation) tools, and the whole approach is based on CMOS technology. In conclusion, current technology trends and the issues that designers will face in the future are presented.
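As background to the op-amp characteristics discussed above, the standard first-order hand-analysis expressions for a two-stage Miller-compensated CMOS op-amp are often used as a starting point. These are textbook results with the usual device numbering, not results specific to this thesis:

```latex
A_{v0} \approx g_{m1}\,(r_{o2}\parallel r_{o4})\cdot g_{m6}\,(r_{o6}\parallel r_{o7}),
\qquad
\mathrm{GBW} \approx \frac{g_{m1}}{2\pi\,C_c},
\qquad
\omega_{p2} \approx \frac{g_{m6}}{C_L},
\qquad
g_{m6} \gtrsim 2.2\,\frac{g_{m1}\,C_L}{C_c} .
```

The last inequality is the usual rule of thumb for keeping the non-dominant pole far enough beyond the gain-bandwidth product to obtain roughly 60 degrees of phase margin.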
APA, Harvard, Vancouver, ISO, and other citation styles
44

Meister, Tilo. "Pinzuordnungs-Algorithmen zur Optimierung der Verdrahtbarkeit beim hierarchischen Layoutentwurf" [Pin assignment algorithms for optimizing routability in hierarchical layout design]. Doctoral thesis, 2011. https://tud.qucosa.de/id/qucosa%3A26140.

Full text of the source
Abstract:
Sie entwickeln Entwurfssysteme für elektronische Baugruppen? Dann gehören für Sie die mit der Pinzuordnung verbundenen Optimierungskriterien - die Verdrahtbarkeit im Elektronikentwurf - zum Berufsalltag. Um die Verdrahtbarkeit unter verschiedenen Gesichtspunkten zu verbessern, werden in diesem Buch neu entwickelte Algorithmen vorgestellt. Sie ermöglichen erstmals die automatisierte Pinzuordnung für eine große Anzahl von Bauelementen in hochkomplexen Schaltungen. Alle Aspekte müssen in kürzester Zeit exakt erfasst, eingeschätzt und im Entwurfsprozess zu einem optimalen Ergebnis geführt werden. Die beschriebenen Methoden reduzieren den Entwicklungsaufwand für elektronische Systeme auf ein Minimum und ermöglichen intelligente Lösungen auf der Höhe der Zeit. Die vorliegende Arbeit behandelt die Optimierung der Pinzuordnung und die dafür notwendige Verdrahtbarkeitsvorhersage im hierarchischen Layoutentwurf. Dabei werden bekannte Methoden der Verdrahtbarkeitsvorhersage aus allen Schritten des Layoutentwurfs zusammengetragen, gegenübergestellt und auf ihre Eignung für die Pinzuordnung untersucht. Dies führt schließlich zur Entwicklung einer Vorhersagemethode, die speziell an die Anforderungen der Pinzuordnung angepasst ist. Die Pinzuordnung komplexer elektronischer Geräte ist bisher ein vorwiegend manueller Prozess. Es existieren also bereits Erfahrungen, welche jedoch weder formalisiert noch allgemein verfügbar sind. In den vorliegenden Untersuchungen werden Methoden der Pinzuordnung algorithmisch formuliert und damit einer Automatisierung zugeführt. Besondere Merkmale der Algorithmen sind ihre Einsetzbarkeit bereits während der Planung des Layouts, ihre Eignung für den hierarchisch gegliederten Layoutentwurf sowie ihre Fähigkeit, die Randbedingungen differenzieller Paare zu berücksichtigen. 
Die beiden untersuchten Aspekte der Pinzuordnung, Verdrahtbarkeitsvorhersage und Zuordnungsalgorithmen, werden schließlich zusammengeführt, indem die neue entwickelte Verdrahtbarkeitsbewertung zum Vergleichen und Auswählen der formulierten Zuordnungsalgorithmen zum Einsatz kommt.:1 Einleitung 1.1 Layoutentwurfsprozess elektronischer Baugruppen 1.2 Ziel der Arbeit 2 Grundlagen 2.1 Pinzuordnung 2.1.1 Definitionen 2.1.2 Freiheitsgrad 2.1.3 Komplexität und Problemgröße 2.1.4 Optimierungsziel 2.1.5 Randbedingungen 2.2 Reale Entwurfsbeispiele der Pinzuordnung 2.2.1 Hierarchieebenen eines Personal Computers 2.2.2 Multi-Chip-Module auf Hauptplatine 2.3 Einteilung von Algorithmen der Pinzuordnung 2.3.1 Klassifikation nach der Einordnung in den Layoutentwurf 2.3.2 Klassifikation nach Optimierungsverfahren 2.3.3 Zusammenfassung 2.4 Verdrahtbarkeitsvorhersage 2.4.1 Definitionen 2.4.2 Vorhersagegenauigkeit und zeitlicher Rechenaufwand 2.4.3 Methoden der Verdrahtbarkeitsvorhersage 3 Stand der Technik 3.1 Pinzuordnung 3.1.1 Einordnung in den Layoutentwurf 3.1.2 Optimierungsverfahren 3.2 Verdrahtbarkeitsvorhersage 3.2.1 Partitionierbarkeit 3.2.2 Verdrahtungslänge 3.2.3 Verdrahtungsweg 3.2.4 Verdrahtungsdichte 3.2.5 Verdrahtungsauslastung und Overflow 3.2.6 Manuelle optische Bewertung 3.2.7 Interpretation und Wichtung der Kriterien 4 Präzisierung der Aufgabenstellung 5 Pinzuordnungs-Algorithmen 5.1 Voraussetzungen 5.2 Topologische Heuristiken 5.2.1 Wiederholtes Unterteilen 5.2.2 Kreuzungen minimieren 5.2.3 Projizieren auf Gerade 5.3 Lineare Optimierung 5.4 Differenzielle Paare 5.5 Pinzuordnung in Hierarchieebenen 5.6 Nutzen der Globalverdrahtung 5.6.1 Methode 5.6.2 Layout der Ankerkomponenten 5.7 Zusammenfassung 6 Verdrahtbarkeitsbewertung während der Pinzuordnung 6.1 Anforderungen 6.2 Eignung bekannter Bewertungskriterien 6.2.1 Partitionierbarkeit / Komplexitätsanalyse 6.2.2 Verdrahtungslängen 6.2.3 Verdrahtungswege 6.2.4 Verdrahtungsdichte 6.2.5 Verdrahtungsauslastung 6.2.6 Overflow 6.2.7 Schlussfolgerung 6.3 Probabilistische Verdrahtungsdichtevorhersage 6.3.1 Grenzen probabilistischer Vorhersagen 6.3.2 Verdrahtungsumwege 6.3.3 Verdrahtungsdichteverteilung 6.3.4 Gesamtverdrahtungsdichte und Hierarchieebenen 6.4 Bewertung der Verdrahtungsdichteverteilung 6.4.1 Maßzahlen für die Verdrahtbarkeit eines Netzes 6.4.2 Maßzahlen für die Gesamtverdrahtbarkeit 6.5 Zusammenfassung 7 Pinzuordnungs-Bewertung 7.1 Anforderungen 7.2 Kostenterme 7.3 Normierung 7.3.1 Referenzwerte für Eigenschaften der Verdrahtungsdichte 7.3.2 Referenzwerte für Verdrahtungslängen 7.3.3 Referenzwerte für Signalkreuzungen 7.4 Gesamtbewertung der Verdrahtbarkeit 7.5 Priorisierung der Kostenterme 7.6 Zusammenfassung 8 Ergebnisse 8.1 Verdrahtbarkeitsbewertung 8.1.1 Charakteristik der ISPD-Globalverdrahtungswettbewerbe 8.1.2 Untersuchte probabilistische Schätzer 8.1.3 Kriterien zum Bewerten der Vorhersagegenauigkeit 8.1.4 Vorhersagegenauigkeit der probabilistischen Schätzer 8.2 Pinzuordnungs-Bewertung 8.2.1 Vollständige Analyse kleiner Pinzuordnungs-Aufgaben 8.2.2 Pinzuordnungs-Aufgaben realer Problemgröße 8.2.3 Differenzielle Paare 8.2.4 Nutzen der Globalverdrahtung 8.2.5 Hierarchieebenen 8.3 Zusammenfassung 9 Gesamtzusammenfassung und Ausblick Verzeichnisse Zeichen, Benennungen und Einheiten Abkürzungsverzeichnis Glossar Anhang A Struktogramme der Pinzuordnungs-Algorithmen A.1 Wiederholtes Unterteilen A.2 Kreuzungen minimieren A.3 Projizieren auf Gerade A.4 Lineare Optimierung A.5 Zufällige Pinzuordnung A.6 Differenzielle Paare A.7 
Pinzuordnung in Hierarchieebenen A.8 Nutzen der Globalverdrahtung B Besonderheit der Manhattan-Länge während der Pinzuordnung C Weitere Ergebnisse C.1 Multipinnetz-Zerlegung C.1.1 Grundlagen C.1.2 In dieser Arbeit angewendete Multipinnetz-Zerlegung C.2 Genauigkeit der Verdrahtungsvorhersage C.3 Hierarchische Pinzuordnung Literaturverzeichnis
This work deals with the optimization of pin assignments, for which an accurate routability prediction is a prerequisite. Therefore, this contribution introduces methods for routability prediction. The optimization of pin assignments, for which these methods are needed, is done after initial placement and before routing. Known methods of routability prediction are compiled, compared, and analyzed for their usability as part of the pin assignment step. These investigations lead to the development of a routability prediction method that is adapted to the specific requirements of pin assignment. So far, pin assignment of complex electronic devices has been a predominantly manual process. Hence, practical experience exists, yet it has not been transferred into an algorithmic formulation. This contribution develops pin assignment methods in order to automate and improve pin assignment. Distinctive characteristics of the developed algorithms are their usability during layout planning, their capability to integrate into a hierarchical design flow, and the consideration of differential pairs. Both aspects, routability prediction and assignment algorithms, are finally brought together by using the newly developed routability prediction to evaluate and select the assignment algorithms.
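Among the assignment algorithms listed above is linear optimization, with wirelength as one routability criterion. As a deliberately simplified stand-in for the thesis's methods, the sketch below (assuming NumPy and SciPy are available, and using invented coordinates) computes a one-to-one signal-to-pin assignment that minimizes total Manhattan wirelength with the Hungarian algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

# Invented example: four signals must each be assigned to one of four candidate
# pins of a component and then routed to a fixed pad elsewhere on the board.
pins = np.array([(0.0, 0.0), (0.0, 1.0), (0.0, 2.0), (0.0, 3.0)])   # pin sites
pads = np.array([(5.0, 3.0), (4.0, 0.0), (6.0, 2.0), (5.0, 1.0)])   # one pad per signal

# Cost matrix: estimated Manhattan wirelength of signal i if it leaves pin j.
cost = np.abs(pads[:, None, :] - pins[None, :, :]).sum(axis=2)

# Optimal one-to-one signal-to-pin assignment minimizing total wirelength.
sig_idx, pin_idx = linear_sum_assignment(cost)
for s, p in zip(sig_idx, pin_idx):
    print(f"signal {s} -> pin {tuple(pins[p])}, est. wirelength {cost[s, p]:.0f}")
print("total estimated wirelength:", cost[sig_idx, pin_idx].sum())
```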
APA, Harvard, Vancouver, ISO, and other citation styles
45

Machado, Luís Maria Travassos de Pinheiro Jorge. "Design Automation." Master's thesis, 2021. https://hdl.handle.net/10216/136029.

Full text of the source
Abstract:
This dissertation focuses on automating a project that was started manually by EFACEC in the SCADE Display application; specifically, on automating a section of a railway line, with all of the information contained in a railML file (an XML-type file). The entire project was carried out in Java, with the support of the Eclipse IDE.
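For illustration only, and in Python rather than the Java used in the thesis, the sketch below parses a heavily simplified, invented XML fragment standing in for railML and turns it into display primitives; the element and attribute names are hypothetical, since the real railML schema is far richer.

```python
import xml.etree.ElementTree as ET

# Heavily simplified, invented track description; real railML uses a much
# richer, namespaced schema, and the thesis generates SCADE Display output
# from it in Java rather than printing primitives.
doc = """
<track id="T1">
  <segment start="0" end="120"/>
  <segment start="120" end="300"/>
  <signal pos="115" name="S1"/>
</track>
"""

root = ET.fromstring(doc)
primitives = []
for seg in root.findall("segment"):
    primitives.append(("line", int(seg.get("start")), int(seg.get("end"))))
for sig in root.findall("signal"):
    primitives.append(("symbol", sig.get("name"), int(sig.get("pos"))))

for prim in primitives:
    print(prim)  # downstream, each tuple would be mapped to a display widget
```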
APA, Harvard, Vancouver, ISO, and other citation styles
46

Gulati, Kanupriya. "Hardware Acceleration of Electronic Design Automation Algorithms." 2009. http://hdl.handle.net/1969.1/ETD-TAMU-2009-12-7471.

Full text of the source
Abstract:
With the advances in very large scale integration (VLSI) technology, hardware is going parallel. Software, which was traditionally designed to execute on single-core microprocessors, now faces the tough challenge of taking advantage of this parallelism, made available by the scaling of hardware. The work presented in this dissertation studies the acceleration of electronic design automation (EDA) software on several hardware platforms such as custom integrated circuits (ICs), field programmable gate arrays (FPGAs) and graphics processors. This dissertation concentrates on a subset of EDA algorithms which are heavily used in the VLSI design flow and also have varying degrees of inherent parallelism in them. In particular, Boolean satisfiability, Monte Carlo based statistical static timing analysis, circuit simulation, fault simulation and fault table generation are explored. The architectural and performance tradeoffs of implementing the above applications on these alternative platforms (in comparison to their implementation on a single-core microprocessor) are studied. In addition, this dissertation also presents an automated approach to accelerate uniprocessor code using a graphics processing unit (GPU). The key idea is to partition the software application into kernels in an automated fashion, such that multiple instances of these kernels, when executed in parallel on the GPU, can maximally benefit from the GPU's hardware resources. The work presented in this dissertation demonstrates that several EDA algorithms can be successfully rearchitected to maximally harness their performance on alternative platforms such as custom designed ICs, FPGAs and graphics processors, and obtain speedups of up to 800X. The approaches in this dissertation collectively aim to contribute towards enabling the computer aided design (CAD) community to accelerate EDA algorithms on arbitrary hardware platforms.
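One of the workloads named above, Monte Carlo based statistical static timing analysis, is easy to caricature in a few lines. The sketch below uses a toy, invented timing graph and plain NumPy on the CPU: it draws independent delay samples per edge and propagates arrival times, and the per-sample independence in the vectorized inner loop is the same data parallelism that the dissertation maps onto GPUs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy timing graph in topological order: (from, to, nominal delay, sigma).
# Topology and numbers are invented; the dissertation works on real circuits.
edges = [(0, 1, 1.0, 0.10), (0, 2, 1.5, 0.15),
         (1, 3, 2.0, 0.20), (2, 3, 1.2, 0.12), (3, 4, 0.8, 0.08)]
n_nodes, n_samples = 5, 100_000

# One Monte Carlo sample per column; each column is an independent "circuit".
arrival = np.full((n_nodes, n_samples), -np.inf)
arrival[0] = 0.0                                  # primary input arrives at t = 0
for u, v, nominal, sigma in edges:
    delay = rng.normal(nominal, sigma, n_samples)
    arrival[v] = np.maximum(arrival[v], arrival[u] + delay)

crit = arrival[n_nodes - 1]                       # arrival time at the sink
print(f"mean critical delay {crit.mean():.3f}, "
      f"99th percentile {np.percentile(crit, 99):.3f}")
```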
APA, Harvard, Vancouver, ISO, and other citation styles
47

Yu, Shao-Ming, and 余紹銘. "A Unified Optimization Framework for Electronic Design Automation." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/n3kjyf.

Full text of the source
Abstract:
Doctoral dissertation
National Chiao Tung University
Institute of Computer Science and Engineering
96
In the modern microelectronics industry, several kinds of computer-aided design (CAD) tools assist engineers in completing simulation jobs that verify and estimate the performance of their designs. However, to satisfy the design targets, engineers must adjust the design parameters based on the simulation results and then feed the adjusted parameters back to obtain an improved result. Currently, such routine work is mostly performed by engineers with expertise. Therefore, a well-defined optimization platform can assist engineers in solving problems more efficiently. This dissertation presents an object-oriented unified optimization framework (UOF) for general problem optimization. Based on biologically inspired techniques, numerical deterministic methods, and object-oriented C++ design, the UOF has significant potential to perform optimization operations on various problems. The UOF provides basic interfaces to define a general problem and a generic solver, enabling these two different research fields to be bridged. The components in the UOF can be divided into a problem part and a solver part. These two parts work independently, allowing high-level code to be reused and rapidly adapted to new problems and solvers. Without considering mathematical convergence properties, a hybrid intelligent technique for electronic design automation is also proposed and implemented in the UOF. In the proposed hybrid approach, an evolutionary method, such as a genetic algorithm (GA), first searches the entire problem space to obtain a set of roughly estimated solutions. A numerical method, such as the Levenberg-Marquardt (LM) method, then performs a local optimum search and sets the local optima as suggested values for the GA to perform further optimization. Electronic design problems from industry are very complicated and are not always guaranteed to have an optimal solution; therefore, designers or engineers only need to find one suitable solution that meets all specifications. By integrating empirical knowledge, the proposed hybrid approach can automatically search for solutions that match the specified targets in electronic design problems. The purpose of the UOF is to assist electronic design automation with various CAD tools. One application in 65nm CMOS device fabrication has been investigated: integrated device and process simulation is used to evaluate device performance, and the developed approach enables us to extract optimal recipes that satisfy the targeted device specification. Fluctuation of electrical characteristics is simultaneously considered and minimized in the optimization procedure. Compared with realistic fabricated and measured data, this approach achieves the targeted device characteristics while reducing the threshold voltage fluctuation. Other applications, including device model parameter extraction, very large scale integration (VLSI) circuit design, and communication system antenna design, are also implemented with the UOF and presented in this dissertation. The results confirm that the UOF has excellent flexibility and extensibility to solve these problems successfully. The developed open-source project is available in the public domain (http://140.113.87.143/ymlab/uof/).
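To illustrate the hybrid GA-plus-Levenberg-Marquardt idea described above, here is a simplified sketch on an invented parameter-extraction problem: a small truncation-selection GA explores the space, and the current best individual is periodically refined with SciPy's LM solver and re-injected as a suggested value. The model, population sizes, and mutation scale are assumptions for the example, not the UOF implementation.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Toy parameter-extraction task: fit y = a * (exp(b*x) - 1) to noisy samples.
# The model and GA settings are invented placeholders, not from the thesis.
x = np.linspace(0.0, 1.0, 50)
true_params = np.array([2.0, 3.0])
y = true_params[0] * np.expm1(true_params[1] * x) + rng.normal(0, 0.05, x.size)

def residuals(p):
    return p[0] * np.expm1(p[1] * x) - y

def fitness(p):
    return float(np.sum(residuals(p) ** 2))

pop = rng.uniform(0.1, 5.0, size=(40, 2))               # random initial population
for generation in range(30):
    # Global (GA-like) step: truncation selection plus Gaussian mutation.
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[:10]]
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.2, (30, 2))
    pop = np.vstack([parents, children])

    # Local step: refine the current best with Levenberg-Marquardt and
    # re-inject it as a suggested individual, as in the hybrid scheme above.
    best = pop[np.argmin([fitness(p) for p in pop])]
    pop[0] = least_squares(residuals, best, method="lm").x

print("extracted parameters:", np.round(pop[0], 3), " true:", true_params)
```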
APA, Harvard, Vancouver, ISO, and other citation styles
48

Lin, Ya-Ti, and 林雅迪. "The Key Success Factors of Taiwan's Electronic Design Automation Products." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/27029979464339876419.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
49

Liou, Hao-Wei, and 劉浩瑋. "Support Visual Debugging in Electronic Design Automation Software by xDIVA." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/00543049584940161378.

Full text of the source
Abstract:
Master's thesis
National Central University
Institute of Computer Science and Information Engineering
100
As ICs (integrated circuits) become more and more complex, hardware engineers need more powerful computer-aided tools to help them develop ICs. These tools, called electronic design automation (EDA) tools, are complicated software systems that deal with different problems at several stages of IC manufacturing. Our research provides a mechanism to help EDA software programmers find bugs more easily. EDA tool programmers often deal with complex data structures, and these complex data structures make programs very difficult to debug. Although debuggers are still the most important debugging tools for EDA tool programmers, they are quite limited in many aspects, and more powerful visual debugging tools are needed in this area. In this thesis, we enhance xDIVA (eXtreme Debugging Information Visualization Assistant) to help EDA developers speed up the debugging process. xDIVA uses 3D graphs, color and animation to visualize debugging information, and developers can configure xDIVA's 3D debugging visualizations to suit their needs. We show that xDIVA can map complex IC layout data structures into 3D polygons and that problems in a program can be easily checked with such a visualization aid.
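As a rough analogue of the visualization described above (not xDIVA's actual API), the sketch below maps a few invented layout rectangles to 3D slabs with matplotlib, using the z axis for the metal layer and color to flag an object worth inspecting.

```python
import matplotlib.pyplot as plt

# Hypothetical layout rectangles: (layer, x, y, width, height, suspicious?).
# The data and the "suspicious" flag are invented; xDIVA itself visualizes
# live values taken from the debugger rather than a hard-coded list.
rects = [(0, 0, 0, 4, 1, False), (0, 0, 2, 4, 1, False),
         (1, 1, 0, 1, 3, False), (1, 3, 0, 1, 3, True)]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for layer, x, y, w, h, suspect in rects:
    # Each rectangle becomes a slab whose z position encodes its metal layer;
    # the color highlights the object the developer wants to inspect.
    ax.bar3d(x, y, layer, w, h, 0.4,
             color="red" if suspect else "steelblue", alpha=0.7)
ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("layer")
plt.show()
```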
APA, Harvard, Vancouver, ISO, and other citation styles
50

Huang, Hsin-Hsiung, and 黃信雄. "Study on Partition-Based Routing Problems in Electronic Design Automation." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/56426655745346347778.

Full text of the source
Abstract:
Doctoral dissertation
Chung Yuan Christian University
Institute of Electronic Engineering
96
In this thesis, we study how to improve routing results in electronic design automation (EDA) by using partition-based methods. We observe that the maximum source-to-terminal delay can be improved by a partition-based method, which uses the source position to divide the routing area into k sub-regions and constructs a routing tree for each sub-region individually. In the thesis, we discuss partition-based routing by formulating the problems and presenting effective solutions. First, we study two Manhattan-architecture routing problems and propose two effective algorithms to solve them. For the first problem, we propose a timing-driven routing tree construction that uses partitioning to minimize the maximum source-to-terminal delay and adds terminal-to-terminal edges from the spanning graph to minimize the total wirelength, striking a balance between delay and wirelength. For the second problem, we propose a hybrid approach that analyzes the density distribution with two density functions and partitions the routing area into a set of sub-regions, simultaneously minimizing the total wirelength and the runtime. In contrast to the traditional method, our algorithm is more flexible, automatically applying multiple approaches to each sub-region according to the two density functions. Second, we study two X-architecture routing problems and provide two effective algorithms to solve them. For the first problem, a partition-based method, which partitions the routing area into a set of sub-regions and applies the Delaunay triangulation algorithm to each sub-region, is used to minimize the maximum source-to-terminal delay and the total wirelength. For the second problem, we incorporate two novel concepts: virtual obstacles for handling nonrectangular obstacles and virtual nodes for minimizing the total wirelength. Furthermore, we partition the routing area and apply the above approach to construct a routing tree with rectangular/nonrectangular obstacles for each sub-region. In contrast to previous works, our approach can handle timing-driven routing in the presence of obstacles.
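To make the partition-based idea concrete, the sketch below compares, on invented coordinates, a single minimum spanning tree over all terminals against one tree per quadrant around the source. The MST (built with networkx) stands in for the thesis's more elaborate timing-driven constructions, and the printed wirelength and maximum source-to-terminal path length are the two quantities the thesis trades off.

```python
from itertools import combinations
import networkx as nx

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def tree_stats(points, source):
    """MST over the points; return (total wirelength, max source-to-terminal path)."""
    g = nx.Graph()
    for a, b in combinations(points, 2):
        g.add_edge(a, b, weight=manhattan(a, b))
    tree = nx.minimum_spanning_tree(g)
    dist = nx.shortest_path_length(tree, source, weight="weight")
    return tree.size(weight="weight"), max(dist.values())

# Invented terminal coordinates around a source driver at the origin.
source = (0, 0)
terminals = [(3, 2), (5, 1), (-2, 4), (-4, -3), (2, -5), (-1, -2), (4, 4), (-3, 1)]

# Single routing tree over all terminals.
wl, path = tree_stats([source] + terminals, source)
print(f"global tree:        wirelength {wl}, max source-to-terminal path {path}")

# Partition-based: group terminals by quadrant relative to the source and
# build one routing tree per non-empty quadrant.
quadrants = {}
for t in terminals:
    quadrants.setdefault((t[0] >= source[0], t[1] >= source[1]), []).append(t)
stats = [tree_stats([source] + pts, source) for pts in quadrants.values()]
print(f"partitioned (k={len(stats)}): wirelength {sum(s[0] for s in stats)}, "
      f"max source-to-terminal path {max(s[1] for s in stats)}")
```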
APA, Harvard, Vancouver, ISO, and other citation styles