Dissertations / Theses on the topic 'DIGITAL SYSTEM DESIGN TEST AND VERIFICATION'

Consult the top 22 dissertations / theses for your research on the topic 'DIGITAL SYSTEM DESIGN TEST AND VERIFICATION.'

1

Vallero, Alessandro. "Cross layer reliability estimation for digital systems." Doctoral thesis, Politecnico di Torino, 2017. http://hdl.handle.net/11583/2673865.

Full text
Abstract:
Forthcoming manufacturing technologies hold the promise of increasing the performance and functionality of multifunctional computing systems thanks to a remarkable growth in device integration density. Despite the benefits introduced by these technology improvements, reliability is becoming a key challenge for the semiconductor industry. With transistor sizes reaching atomic dimensions, vulnerability to unavoidable fluctuations in the manufacturing process and to environmental stress rises dramatically. Failing to meet a reliability requirement may add excessive re-design cost and may have severe consequences for the success of a product. Worst-case design with large margins to guarantee reliable operation has been employed for a long time, but it is reaching a limit that makes it economically unsustainable due to its performance, area, and power cost. One of the open challenges for future technologies is building "dependable" systems on top of unreliable components, which will degrade and even fail during the normal lifetime of the chip. Conventional design techniques are highly inefficient: they expend a significant amount of energy to tolerate device unpredictability by adding safety margins to a circuit's operating voltage, clock frequency or charge stored per bit. Unfortunately, the additional costs introduced to compensate for unreliability are rapidly becoming unacceptable in today's environment, where power consumption is often the limiting factor for integrated circuit performance and energy efficiency is a top concern. Attention should be paid to tailoring reliability-improvement techniques to a system's requirements, ending up with cost-effective solutions that favor the success of the product on the market. Cross-layer reliability is one of the most promising approaches to achieve this goal. Cross-layer reliability techniques take into account the interactions between the layers composing a complex system (i.e., technology, hardware and software layers) to implement efficient cross-layer fault mitigation mechanisms. Fault tolerance mechanisms are implemented at different layers, from the technology up to the software layer, to optimize the system by exploiting the inherent capability of each layer to mask lower-level faults. For this purpose, cross-layer reliability design techniques need to be complemented with cross-layer reliability evaluation tools able to precisely assess the reliability level of a selected design early in the design cycle. Accurate and early reliability estimates would enable exploration of the system design space and optimization of multiple constraints such as performance, power consumption, cost and reliability. This Ph.D. thesis is devoted to the development of new methodologies and tools to evaluate and optimize the reliability of complex digital systems during the early design stages. More specifically, techniques addressing hardware accelerators (i.e., FPGAs and GPUs), microprocessors and full systems are discussed. All developed methodologies are presented in conjunction with their application to real-world use cases belonging to different computational domains.
2

Zhou, Jing. "LOVERD--a logic design verification and diagnosis system via test generation." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/291686.

Full text
Abstract:
The development of cost-effective circuits is primarily a matter of economy. To achieve it, design errors and circuit flaws must be eliminated during the design process, and considerable effort must be put into all phases of the design cycle. Effective CAD tools are essential for the production of high-performance digital systems. This thesis describes a CAD tool called LOVERD, which consists of ATPG, fault simulation, design verification and diagnosis. It uses test patterns, developed to detect single stuck-at faults in the gate-level implementation, to compare the results of the functional-level description and its gate-level implementation. Whenever an error is detected, the logic diagnosis tool can be used to provide useful information to designers. It is shown that certain types of design errors in combinational logic circuits can be detected and located efficiently by LOVERD.
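
As an illustration of the comparison step described above, the sketch below applies the same stuck-at test patterns to a functional-level model and a gate-level implementation and flags any mismatch as a design error. It is a minimal, hypothetical rendering of the idea; the model functions and patterns are stand-ins, not LOVERD's actual interface.

```python
# Hypothetical sketch: cross-checking a functional-level model against its
# gate-level implementation using stuck-at test patterns.

def functional_model(a, b, cin):
    """Functional-level description: a 1-bit full adder."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def gate_level(a, b, cin):
    """Gate-level implementation under verification (deliberately buggy here)."""
    t = a ^ b
    s = t ^ cin
    cout = (a & b) | (cin & b)   # design error: should be (cin & t)
    return s, cout

# Patterns developed to detect single stuck-at faults in the gate-level netlist.
patterns = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

for p in patterns:
    ref, impl = functional_model(*p), gate_level(*p)
    if ref != impl:
        print(f"design error detected at input {p}: expected {ref}, got {impl}")
```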
3

Kim, Seokjin. "High-speed analog-to-digital converters for modern satellite receivers: design verification test and sensitivity analysis." College Park, Md.: University of Maryland, 2008. http://hdl.handle.net/1903/7864.

Full text
Abstract:
Thesis (Ph.D.), University of Maryland, College Park, 2008.
Thesis research directed by the Department of Electrical and Computer Engineering. Title from the title page of the PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
4

Bougan, Timothy B. "Flexible Intercom System Design for Telemetry Sites and Other Test Environments." International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/611449.

Full text
Abstract:
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California
Testing avionics and military equipment often requires extensive facilities and numerous operators working in concert. In many cases these facilities are mobile and can be set up at remote locations. In almost all situations the equipment is loud and makes communication between the operators difficult if not impossible. Furthermore, many sites must transmit, receive, relay, and record telemetry signals. To facilitate communication, most telemetry and test sites incorporate some form of intercom system. While intercom systems themselves are not a new concept and are available in many forms, finding one that meets the requirements of the test community (at a reasonable cost) can be a significant challenge. Specifically, the test director must often communicate with several manned stations, aircraft, and remote sites, and/or simultaneously record all or some of the audio traffic. Furthermore, it is often necessary to conference all or some of the channels (so that all those involved can fully follow the progress of the test). The needs can be so specialized that they often demand a very expensive "custom" solution. This paper describes the philosophy and design of a multi-channel intercom system specifically intended to support the needs of the telemetry and test community. It discusses in detail how to use state-of-the-art field programmable gate arrays, relatively inexpensive computers and digital signal processors, and other new technologies to design a fully digital, completely non-blocking intercom system. The system described is radically different from conventional designs but is much more cost-effective (thanks to recent developments in programmable logic, microprocessor performance, and serial/digital technologies). This paper presents, as an example, the conception and design of an actual system purchased by the US government.
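
One classic way to obtain the fully digital, non-blocking conferencing the paper describes is mix-minus summing: each station hears the sum of every channel except its own, which costs only one shared total per sample frame. The sketch below is our own illustration of that arithmetic, under the assumption that the system mixes digital audio frames; it is not the paper's actual FPGA/DSP design.

```python
# Hypothetical mix-minus conference sum: station i receives the sum of all
# other stations' audio; one shared total keeps the cost O(n) per frame.

def mix_minus(frame):
    """frame: one audio sample per station for a single time slot."""
    total = sum(frame)
    return [total - s for s in frame]   # each output excludes its own input

print(mix_minus([10, -3, 5, 0]))        # station 0 hears -3 + 5 + 0 = 2, etc.
```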
5

Ruddy, Marcus A. "Pico-Satellite Integrated System Level Test Program." DigitalCommons@CalPoly, 2012. https://digitalcommons.calpoly.edu/theses/688.

Full text
Abstract:
Testing is an integral part of a satellite's development, requirements verification and risk mitigation efforts. A robust test program serves to verify construction, integration and assembly workmanship; ensures component, subsystem and system level functionality; and reduces the risk of mission or capability loss on orbit. The objective of this thesis was to develop a detailed test program for pico-satellites with a focus on the Cal Poly CubeSat architecture. The test program establishes a testing baseline that other programs or users can tailor to meet their needs. The test program includes a detailed decomposition of discrete and derived test requirements compiled from the CubeSat and launch vehicle communities, military guidelines, and industry standards. The test requirements were integrated into a methodical, efficient and risk-averse test flow for verification.
6

Aalto, Alve, and Ali Jafari. "Automatic Probing System for PCB : Analysis of an automatic probing system for design verification of printed circuit boards." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-174865.

Full text
Abstract:
The purpose of this thesis is to analyze whether the printed circuit boards from Ericsson can be tested using an automatic probing system, or what design changes would be required for that to be a viable solution. The main instrument used for analyzing the printed circuit board was an oscilloscope, which provided the raw data for plotting the difference between the theoretical and actual signals. Connected to the oscilloscope was a 600A-AT probe from LeCroy. The programs used for interpreting the raw data extracted from the oscilloscope included Python, Matlab and Excel. For simulations of how an extra via in the signal path would affect the end results, we used HFSS and ADS. The results were extracted into different Excel sheets to get an easier overview. The results showed that the design of a board must be almost completely reworked to accommodate the changes; it is therefore better to implement them in a new circuit board rather than in an existing one. Some of the components have to either be made smaller or be placed on one side of the board, where they are not in the way of the probe. The size of the board will increase, since the rules for via placement will be more restrictive than before. The most time-demanding part was the simulation of the extra via in the signal path. The results showed that if a single-ended signal is below two gigahertz, the placement of the via does not make a big difference, but at higher frequencies the placement depends mostly on the type of signal. The optimal placement is generally around four millimeters from the receiving end.
The goal of this thesis is to analyze whether Ericsson's printed circuit boards can be tested with an automatic probing system, or whether this would require major changes in the design of the boards, and if so, what those changes would be. To analyze the boards we used an oscilloscope to obtain raw data on the differences between the theoretical and actual signals. To interpret the oscilloscope's sampled signals, Python, Matlab and Excel were used. An extra via in the signal path was also simulated in HFSS and ADS with different kinds of probes to see how the signal behavior is affected. The results were then exported to different Excel sheets for an easy overview. The results showed that redesigning a circuit board to include the changes would be easier in a new design than in an existing one, since large parts of the board would have to be remade. Some large components would need to be replaced by smaller equivalents or placed on one side of the board, out of the probe's way. Boards intended for a flying-probe system will probably become somewhat larger, since via placement is more restricted than before. The most time-consuming work was simulating different placements of an extra via in the signal path. This showed that for a single-ended signal below two gigahertz it makes little difference where along the signal path the extra via is placed. At higher frequencies the character of the signal itself matters more than the via placement, but if the exact character is unknown, a placement four millimeters from the receiver side is recommended, since placing the vias closer makes the signals interfere with each other.
7

Ioannides, Charalambos. "Investigating the potential of machine learning techniques for feedback-based coverage-directed test generation in simulation-based digital design verification." Thesis, University of Bristol, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.618315.

Full text
Abstract:
A consistent trend in the semiconductor industry has been the increase of embedded functionality in new designs. As a result, the verification process today requires significant resources to cope with these increasingly complex designs. In order to alleviate the problem, industrialists and academics have proposed and improved on many formal, simulation-based and hybrid verification techniques. To date, none of the approaches proposed has been able to present a convincing argument warranting its unconditional adoption by the industry. In an attempt to further automate design verification (DV), especially in simulation-based and hybrid approaches, machine learning (ML) techniques have been exploited to close the loop between coverage feedback and test generation, a process also known as coverage-directed test generation (CDG). Although most techniques in the literature are reported to help in constructing minimal tests that exercise most, if not all, of the design under verification, a question remains about their practical usefulness when applied in real-world industry-level verification environments. The aim of this work was to answer the following questions: 1. What would constitute a good ML-CDG solution, and what would be its characteristics? 2. Do existing ML-CDG techniques scale to industrial designs and verification environments? 3. Can we develop an ML-based system that can attempt functional coverage balancing? This work answers these questions having gathered requirements and capabilities from earlier academic work and having filtered them through an industrial perspective on usefulness and practicality. The main metrics used to evaluate these were effectiveness in terms of coverage achieved and effort in terms of computation time. A genetic programming technique that is effective for coverage closure and easy to use has been applied to an industry-level verification project, and the poor results obtained show that this particular technique does not scale well. Linear regression has been attempted for feature extraction as part of a larger and novel stochastic ML-CDG model. The results on the capability of these techniques were again below expectations, showing the ineffectiveness of these algorithms on larger datasets. Finally, learning classifier systems, specifically XCS, have been used to discover the cause-effect relationships between test generator biases and coverage. The results obtained pointed to a problem with the learning mechanism in XCS, and a misconception held by academics about its capabilities. Though XCS in its current state is not an immediately exploitable ML-CDG technique, it shows the necessary potential for later adoption once the problem discovered here is resolved through further research. The outcome of this research was the realisation that the contemporary ML methodologies experimented with fall short of expectations when dealing with industry-level simulation-based digital design verification. In addition, it was discovered that design verification constitutes a problem area that can stress these techniques to their limits and can therefore indicate areas for further improvement and academic research.
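
The feedback loop at the heart of CDG can be summarised as: choose test-generator biases, simulate, read coverage back, and let a learner propose the next biases. The sketch below shows that closed loop with a deliberately trivial "learner"; the simulator and coverage bins are hypothetical placeholders, not the thesis's genetic-programming or XCS setups.

```python
# Hypothetical coverage-directed generation loop: biases in, coverage out.
import random

def run_simulation(biases):
    """Placeholder for a constrained-random testbench run: returns the set
    of coverage bins hit. A real flow would invoke an HDL simulator here."""
    return {i for i, b in enumerate(biases) if random.random() < b}

def next_biases(biases, uncovered):
    """Trivial 'learner': push generation probability toward uncovered bins."""
    return [min(1.0, b + 0.2) if i in uncovered else b
            for i, b in enumerate(biases)]

bins, biases, covered = set(range(8)), [0.3] * 8, set()
for run in range(20):
    covered |= run_simulation(biases)
    uncovered = bins - covered
    if not uncovered:
        break
    biases = next_biases(biases, uncovered)
print(f"closed {len(covered)}/{len(bins)} coverage bins in {run + 1} runs")
```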
8

Aluru, Gunasekhar. "Exploring Analog and Digital Design Using the Open-Source Electric VLSI Design System." Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc849770/.

Full text
Abstract:
The design of VLSI electronic circuits can be carried out at many different abstraction levels, from system behavior down to the most detailed, physical layout level. As the number of transistors in VLSI circuits increases, the complexity of the design also increases, and it is now beyond human ability to manage; hence CAD (computer-aided design) or EDA (electronic design automation) tools are involved in the design. EDA or CAD tools automate the design, verification and testing of these VLSI circuits. In today's market there are many EDA tools available, but they are very expensive and require high-performance platforms. One of the key challenges today is to select an appropriate CAD or EDA tool that is open source and suitable for academic purposes. This thesis provides a detailed examination of an open-source EDA tool called the Electric VLSI Design System, an excellent and efficient CAD tool that is useful for students and teachers, who can implement ideas by modifying its source code. The primary objective of this thesis is to explain Electric's features and architecture and to present various digital and analog designs implemented with the software for educational purposes. Since the choice of an EDA tool is based on the efficiency and functions it provides, this thesis explains all the analysis and synthesis tools that Electric provides and how efficient they are. The thesis is therefore of benefit to students and teachers who choose Electric as their open-source EDA tool for educational purposes.
9

Qiang, Qiang. "FORMAL: a sequential ATPG-based bounded model checking system for VLSI circuits." Online version, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=case1144614543.

Full text
10

Larsson, Erik. "An Integrated System-Level Design for Testability Methodology." Doctoral thesis, Linköpings universitet, ESLAB - Laboratoriet för inbyggda system, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4932.

Full text
Abstract:
Hardware testing is commonly used to check whether faults exist in a digital system. Much research has been devoted to the development of advanced hardware testing techniques and methods to support design for testability (DFT). However, most existing DFT methods deal only with testability issues at low abstraction levels, while new modelling and design techniques have been developed for design at high abstraction levels due to the increasing complexity of digital systems. The main objective of this thesis is to address test problems faced by the designer at the system level. Considering the testability issues at early design stages can reduce the test problems at lower abstraction levels and lead to the reduction of the total test cost. The objective is achieved by developing several new methods to help the designers to analyze the testability and improve it as well as to perform test scheduling and test access mechanism design. The developed methods have been integrated into a systematic methodology for the testing of system-on-chip. The methodology consists of several efficient techniques to support test scheduling, test access mechanism design, test set selection, test parallelization and test resource placement. An optimization strategy has also been developed which minimizes test application time and test access mechanism cost, while considering constraints on tests, power consumption and test resources. Several novel approaches to analyzing the testability of a system at behavioral level and register-transfer level have also been developed. Based on the analysis results, difficult-to-test parts of a design are identified and modified by transformations to improve testability of the whole system. Extensive experiments, based on benchmark examples and industrial designs, have been carried out to demonstrate the usefulness and efficiency of the proposed methodology and techniques. The experimental results show clearly the advantages of considering testability in the early design stages at the system level.
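
To make the scheduling problem concrete: tests can run concurrently as long as their summed power stays under a budget, and the goal is to minimise total test application time. The greedy sketch below, with invented test lengths and power figures, illustrates the flavour of the problem; it is not the thesis's optimization strategy.

```python
# Hypothetical greedy SoC test schedule: pack tests into parallel sessions
# while the summed power stays under budget; longest tests are placed first.

tests = {"cpu_core": (90, 40), "ram_bist": (60, 25), "dsp": (50, 35), "io": (30, 10)}
POWER_BUDGET = 70                       # made-up power ceiling

sessions = []                           # each session: tests running in parallel
for name, (length, power) in sorted(tests.items(), key=lambda t: -t[1][0]):
    for session in sessions:
        if sum(tests[n][1] for n in session) + power <= POWER_BUDGET:
            session.append(name)
            break
    else:
        sessions.append([name])

total_time = sum(max(tests[n][0] for n in s) for s in sessions)
print(sessions, "-> total test time:", total_time)
```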
11

Norberg, Johan. "Verification techniques in the context of event-trigged soft real-time systems." Thesis, Jönköping University, JTH, Computer and Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-737.

Full text
Abstract:

When exploring a verification approach for Komatsu Forest's control system for their Valmet forest machines, the context of soft real-time systems is illuminated. Because of the nature of such a context, the verification process is based on empirical corroboration of requirements fulfillment rather than being a formal proving process.

After analysis of the literature with respect to the software testing field, two paradigms have been defined in order to highlight important concepts for soft real-time systems. The paradigms are based on an abstract stimuli/response model, which conceptualizes a system with inputs and outputs. Since the system is perceived as a black box, its internal details are hidden and thus focus is placed on a more abstract level.

The first paradigm, the "input data paradigm", is concerned with what data to input to the system. The second paradigm, the "input data mechanism paradigm", is concerned with how the data is sent, i.e. the actual input mechanism is in focus. By specifying different dimensions associated with each paradigm, it is possible to define their unique characteristics. The advantage of this kind of theoretical construction is that each paradigm creates a unique sub-field with its own problems and techniques.

The problems defined for this thesis are primarily focused on the input data mechanism paradigm, where the devised dimensions are applied. New verification techniques are deduced and analyzed based on general software testing principles. Based on the constructed theory, a test system architecture for the control system is developed. Finally, an implementation is constructed based on the architecture and a practical scenario, and its automation capability is then assessed.

The practical context for the thesis is a new simulator under development. It is based upon LabVIEW and PXI technology and handles over 200 I/O. Real machine components are connected to the environment, together with artificial components that simulate the engine, hydraulic systems and a forest. Additionally, physical control sticks and buttons are connected to the simulator to enable user testing of the machine being simulated.

The results associated with the thesis are, first of all, usable verification techniques. Generally speaking, some of these techniques are scalable and possible to apply to an entire system, while other techniques may be appropriate for selected subsets that need extra attention. Secondly, an architecture for an automated test system based on a selection of techniques has been constructed for the control system.

Last but not least, as a result of this, an implementation of a general test system has been possible and successful. The implemented test system is based on both C# and LabVIEW. What remains regarding the implementation is primarily to extend the system to include the full scope of features described in the architecture and to enable result analysis.
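
The distinction between the two paradigms can be made concrete: the same input data ("what") can be delivered through different input mechanisms ("how": order, rate, bursts). The following is a minimal, hypothetical sketch of that idea, not code from the thesis.

```python
# Hypothetical sketch of the two paradigms: identical input data ("what"),
# two different input mechanisms ("how": burst versus paced delivery).
import time

def stimulate(trace, message):
    """Placeholder: apply one stimulus to the black-box system under test."""
    trace.append(message)               # a real harness would drive real I/O

data = [("speed", 10), ("brake", 1), ("speed", 0)]   # input data paradigm

def mechanism_burst(trace, messages):
    for m in messages:                  # all at once, no pacing
        stimulate(trace, m)

def mechanism_paced(trace, messages, gap_s=0.05):
    for m in messages:                  # same data, different delivery timing
        stimulate(trace, m)
        time.sleep(gap_s)

for mechanism in (mechanism_burst, mechanism_paced):
    trace = []
    mechanism(trace, data)              # response checking would follow here
    print(mechanism.__name__, "->", trace)
```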


When verification techniques are investigated for Komatsu Forest's control system for Valmet forest machines, the soft real-time system context comes into focus. Such a context implies a process centred on empirical corroboration of requirements fulfillment rather than a formal proving process.

After a review and analysis of the software testing literature, two paradigms have been defined with the intention of highlighting important concepts for soft real-time systems. The paradigms are based on an abstract stimuli/response model that describes a system in terms of input and output data. Since this system is regarded as a black box, its internal details are hidden, which places the focus at a more abstract level.

The first paradigm, the "input data paradigm", addresses what data is sent into the system. The second paradigm, the "input data mechanism paradigm", deals with how the data is sent into the system, i.e. the focus is placed on the delivery mechanism itself. By defining different dimensions for the two paradigms, it is possible to describe their distinguishing features. The advantage of using this theoretical construction is that each paradigm forms its own field of theory with its own questions and techniques.

The problems defined for this work are mainly focused on the input data mechanism paradigm, where the derived dimensions are applied. New verification techniques are deduced and analysed based on general software testing principles. From the resulting theory, a test system architecture for the control system is created. A test system is then developed, based on the architecture and a practical scenario, with the aim of assessing the system's degree of automation.

The practical environment for this work revolves around a new simulator under development. It is based on LabVIEW and PXI technology and handles over 200 I/O. Real machine components are connected to this environment together with artificial components that simulate the engine, the hydraulics and a forest. In addition, control sticks and buttons are connected to enable user control of the simulated machine.

The results associated with this work are, first of all, usable verification techniques. In general, some of these techniques are scalable and thus applicable to an entire system; other techniques are not scalable but are suitable for a subset of the system that needs more thorough testing.

Secondly, an architecture for the control system has been constructed based on a selection of techniques. Last but not least, as a consequence of the above, a general test system has been successfully implemented, using both C# and LabVIEW. What remains regarding the implementation is to extend the system so that all functions described in the architecture are included, and to introduce result analysis.

12

Cameron, Alan, Tony Cirineo, and Karl Eggertsen. "The Family of Interoperable Range System Transceivers (First)." International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/611408.

Full text
Abstract:
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California
The objective of the FIRST project is to define a modern DoD standard datalink capability. This defined capability, or standard, is to provide a solution to a wide variety of test and training range digital data radio communications problems with a common set of components, flexible enough to fit a broad range of applications yet affordable in all of them. This capability is specially designed to meet the expanding range distances and data transmission rates needed to test modern weapon systems. Presently, the primary focus of the project is more on software, protocols, design techniques and standards than on hardware development. Existing capabilities, ongoing developments and emerging technologies are being investigated and will be utilized as appropriate. Modern processing-intensive communications technology can perform many complex range data communications tasks effectively, but a large-scale development effort is usually necessary to exploit it to its full potential. Yet range communications problems are generally of limited scope, so different from one another that a communication system applicable to all of them is not likely to solve any of them well. FIRST will resolve that dilemma by capitalizing on another feature of modern communications technology: its high degree of programmability. This can enable custom tailoring of datalink operation to particular applications, just as a PC can be tailored to perform a multitude of diverse tasks through appropriate selection of software and hardware components.
13

Cosgrove, S. J. "Expert system technology applied to the testing of complex digital electronic architectures : TEXAS; a synergistic test strategy planning and functional test pattern generation methodology applicable to the design, development and testing of complex digit." Thesis, Brunel University, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.234077.

Full text
14

Pellegrino, Gregory S. "Design of a Low-Cost Data Acquisition System for Rotordynamic Data Collection." DigitalCommons@CalPoly, 2019. https://digitalcommons.calpoly.edu/theses/1978.

Full text
Abstract:
A data acquisition system (DAQ) was designed based on an STM32 microcontroller. Its purpose is to provide a transparent and low-cost alternative to commercially available DAQs, giving educators a means to teach students about the process through which data are collected as well as the uses of the collected data. The DAQ was designed to collect data from rotating machinery spinning at speeds up to 10,000 RPM and to send the data to a computer over a USB 2.0 full-speed connection. Multitasking code was written for the DAQ to allow data to be simultaneously collected and transferred over USB. Additionally, a console application was created to control the DAQ and read data, and MATLAB code was written to analyze the data. The DAQ was compared against a custom-assembled National Instruments CompactDAQ system: using a Bentley-Nevada RK 4 Rotor Kit, data was simultaneously collected with both DAQs. Analysis of this data shows the capabilities and limitations of the low-cost DAQ compared to the custom CompactDAQ.
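
The "simultaneously collected and transferred" behaviour mentioned above is commonly realised with a double (ping-pong) buffer: one buffer fills with samples while the other drains to USB. Since the thesis's firmware is STM32 C code we do not reproduce it here; the sketch below only illustrates the pattern with threads and made-up samples.

```python
# Hypothetical ping-pong buffering: acquisition fills one buffer while the
# transfer side drains the other, so sampling never stalls on the USB link.
import queue, threading

free, full = queue.Queue(), queue.Queue()
for _ in range(2):                      # two buffers: the "ping" and the "pong"
    free.put([])

def acquire():
    for block in range(4):
        buf = free.get()                # take an empty buffer
        buf[:] = [block] * 8            # pretend these are ADC samples
        full.put(buf)                   # hand it to the transfer side

def transfer():
    for _ in range(4):
        buf = full.get()                # take a filled buffer
        print("sent over USB:", buf)    # a real system would write to USB here
        free.put(buf)                   # recycle it for acquisition

t1, t2 = threading.Thread(target=acquire), threading.Thread(target=transfer)
t1.start(); t2.start(); t1.join(); t2.join()
```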
15

Garcia-Mardambek, Nouar. "Etude d'une stratégie de maintenance adaptative pour des systèmes logiques [Study of an adaptive maintenance strategy for logic systems]." Grenoble INPG, 1991. http://www.theses.fr/1991INPG0076.

Full text
Abstract:
The complexity of digital equipment makes its verification increasingly difficult throughout the equipment's life: design, production, industrialisation and maintenance. Each of these stages induces specific failure modes and requires suitable verification methods. While test generation methods and tools have been developed for the design and production stages, verification in the operational phase has remained largely empirical, relying heavily on the expertise of maintenance engineers. This is why we focused on the preventive/corrective maintenance of digital systems. The objective was to design a strategy that organises the execution of elementary test functions according to the type of verification (prevention, correction) and the constraints to be taken into account. This strategy provides a functional specification of the maintenance program. The problem encountered is equivalent to a Boolean matrix covering problem. We have proposed and validated: (1) classical resolution methods (direct multiplication, branching by row and by column, semantic tree); the problem to be solved is equivalent to converting a conjunctive normal form formula into an equivalent disjunctive normal form formula (CNF-DNF), which is an NP-complete problem; these resolution methods provide all minimal solutions while minimising the complexity of the applied algorithms and the required computation time; and (2) a heuristic method based on Artificial Intelligence techniques, which provides an optimal solution with respect to a given set of criteria. Since a number of AI-based systems have been developed in the various areas of testing (diagnosis, stimuli generation, maintenance and repair), we carried out a comparative analysis and a synthesis of the expert-system approaches. The production-rule-based system OPS5 was used to program these different methods and to validate them on real boards.
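
The Boolean matrix covering problem mentioned above has a compact formulation: choose the fewest columns (elementary test functions) whose union covers every row (condition to verify). The brute-force sketch below uses made-up data and, like the branching methods above, enumerates all minimal solutions; it is an illustration, not the thesis's OPS5 implementation.

```python
# Hypothetical minimal-cover search: rows are conditions to verify, columns
# are elementary test functions, and M[i][j] = 1 means test j covers row i.
from itertools import combinations

M = [[1, 0, 1, 0],
     [0, 1, 1, 0],
     [1, 1, 0, 0],
     [0, 0, 0, 1]]
tests = range(len(M[0]))

def covers(cols):
    return all(any(row[j] for j in cols) for row in M)

for k in range(1, len(M[0]) + 1):       # smallest covers first
    solutions = [c for c in combinations(tests, k) if covers(c)]
    if solutions:
        print("minimal covers of size", k, ":", solutions)
        break
```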
16

Yang, Xiaokun. "A High Performance Advanced Encryption Standard (AES) Encrypted On-Chip Bus Architecture for Internet-of-Things (IoT) System-on-Chips (SoC)." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2477.

Full text
Abstract:
With industry expectations of billions of Internet-connected things, commonly referred to as the IoT, we see a growing demand for high-performance on-chip bus architectures with the following attributes: small scale, low energy, high security, and highly configurable structures for integration, verification, and performance estimation. Our research thus mainly focuses on addressing these key problems and finding the balance among requirements that often work against each other. First, we proposed a low-cost and low-power System-on-Chip (SoC) bus architecture (IBUS) that can frame data transfers differently. The IBUS protocol provides two novel transfer modes, the block and state modes, and is also backward compatible with the conventional linear mode. In order to evaluate bus performance automatically and accurately, we also proposed an evaluation methodology based on the standard circuit design flow. Experimental results show that the IBUS based design uses the least hardware resources and reduces energy consumption to half that of an AMBA Advanced High-performance Bus (AHB) and Advanced eXtensible Interface (AXI). Additionally, the valid bandwidth of the IBUS based design is 2.3 and 1.6 times that of the AHB and AXI based implementations, respectively. As IoT advances, privacy and security issues become top-tier concerns in addition to the high performance requirements of embedded chips. To leverage the limited resources of tiny chips and the overhead cost of complex security mechanisms, we further proposed an advanced IBUS architecture providing structural support for the block-based AES algorithm. Our results show that the IBUS based AES-encrypted design costs less in terms of hardware resources and dynamic energy (60.2%), and achieves higher throughput (1.6x) compared with AXI. Effectively dealing with automation in design and verification for mixed-signal integrated circuits is a critical problem, particularly when the bus architecture is new. Therefore, we further proposed a configurable and synthesizable IBUS design methodology. The flexible structure, together with bus wrappers, direct memory access (DMA), an AES engine, a memory controller, several mixed-signal verification intellectual properties (VIPs), and bus performance models (BPMs), forms the basis for integrated circuit design, allowing engineers to integrate application-specific modules and other peripherals to create complex SoCs.
17

Tai, Yu-Feng (戴裕峰). "A Design of Digital Signature Based Application Verification System." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/77f278.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Graduate Institute of Information Management
Academic year 105
With the fast development of the Internet and the wide adoption of intelligent devices, APPs (applications) have been playing an increasingly important role in our lives. However, there may be hidden risks that threaten us when using APPs. Malicious applications may hide among third-party applications, especially those from anonymous developers. Sometimes malicious programs are developed using official application programming interfaces (APIs); they may also be produced by illegal means such as de-compilation. With information security in mind, some developers restrict their APIs to internal users only. However, there are still ways for malicious programs to be generated, for example decompiling or packet interception. In this thesis, a digital-signature-based application verification system is proposed to help servers identify legal, official applications. The client application uses RSA to digitally sign user-defined data, which is added to the HTTP header and sent to the server for verification. The server can therefore block illegal third-party access and achieve the intended verification.
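
In standard terminology the flow described above is signing with the client's private key and verifying with the server-held public key. The sketch below shows that round trip using the Python 'cryptography' package; the header name and payload are hypothetical, and the thesis's exact message format is not reproduced.

```python
# Hypothetical sketch of the sign-then-verify flow: the client signs a
# payload, the signature travels in an HTTP header, the server verifies it.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # the server holds this for verification

payload = b"app_id=official_app&ts=1700000000"   # made-up user-defined data

# Client side: sign the payload; the result would be sent as, for example,
# headers["X-App-Signature"] = signature.hex()
signature = private_key.sign(payload, padding.PKCS1v15(), hashes.SHA256())

# Server side: verify before serving the request; an exception means reject.
try:
    public_key.verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid: official application, request accepted")
except Exception:
    print("signature invalid: third-party access blocked")
```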
18

Peng, Chien-huan (彭健桓). "Design and Implementation of an FPGA-based Verification System for the Built-In Self-Test Circuits of Logic Arrays." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/yudbt8.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electronic Engineering
Academic year 102
This thesis concerns the design and implementation of an FPGA-based verification system for the BIST (built-in self-test) circuits of logic arrays. The related research work includes four parts. The first part explores the architecture of the verification system for the BIST circuits: after analyzing the BIST system, the circuit under test, and the fault injection methods, a verification system for the BIST circuits of logic arrays was developed. The second part is the design and implementation of the hardware for the BIST verification system of logic arrays. This work consists of designing circuits for adders, array multipliers, and fault injection and detection circuits, so that fault coverage can be evaluated. Finally, the hardware designed above is integrated onto a single-chip field-programmable gate array and implemented on an Altera FPGA development board. The third part is the hardware/software co-design and implementation of the verification system: Nios-II-related firmware is written, and the Nios II IDE (integrated development environment) is used to verify the function of the verification system. The fourth part simulates the faults in software and in hardware independently to verify the run-time performance of the BIST verification system. On the whole, the goal of this thesis is to research the design of a verification system for the BIST circuits of logic arrays, using array multipliers as examples implemented on FPGA development boards. After experimenting with multipliers of various bit widths, this thesis demonstrates that hardware emulation can be much more efficient than software simulation when verifying the fault coverage of BIST circuits.
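
For readers unfamiliar with the BIST structure being verified, it typically pairs a pseudo-random pattern generator with a response compactor: an LFSR feeds patterns to the circuit under test and a signature register compresses the responses, so an injected fault shows up as a signature mismatch. The following software-only model is our own hedged illustration of that idea, not the thesis's FPGA circuits.

```python
# Hypothetical software model of BIST: an LFSR generates test patterns, the
# responses are compacted into a signature, and an injected fault changes it.

def lfsr_patterns(seed=0b1011, taps=(3, 2), width=4, count=15):
    state = seed
    for _ in range(count):
        yield state
        fb = 0
        for t in taps:                  # feedback = XOR of the tap bits
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

def cut(x, fault=False):
    """Circuit under test: a toy 4-bit multiplier slice; 'fault' models an
    injected stuck-at-1 error on output bit 2."""
    y = (x * 3) & 0xF
    return y | 0b0100 if fault else y

def signature(fault=False):
    sig = 0
    for p in lfsr_patterns():
        sig = ((sig << 1) ^ cut(p, fault)) & 0xFFFF   # simple response compactor
    return sig

good, bad = signature(False), signature(True)
print(f"golden signature {good:#06x}, faulty {bad:#06x}, "
      f"fault {'detected' if good != bad else 'escaped'}")
```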
19

Lin, Chien-Hong (林建宏). "Autonomous Hovering Controller Design Using Sliding Mode Control Theory and Its Flight Test Verification for Small-scaled Unmanned Helicopter System." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/46054056125287266943.

Full text
Abstract:
Doctoral dissertation
National Cheng Kung University
Department of Aeronautics and Astronautics
Academic year 99
Unmanned helicopters are in demand for certain applications due to their unique flight capabilities: an unmanned helicopter can take off and land within a limited space and can hover and cruise at very low speed. Autonomous hovering is one of the most significant flight maneuvering conditions for an unmanned helicopter and opens up a wide variety of applications. Thus, an autonomous hovering controller design based on sliding mode control (SMC) theory, and its flight test verification for a small-scaled unmanned helicopter system, are presented in this study. Owing to its unique properties, SMC theory has attracted wide attention in the robust control field; these features are based on the existence of the so-called ideal sliding mode, which is achieved with the aid of discontinuous control. However, due to physical limitations, the infinitely fast switching is difficult to realize and may lead to undesirable control results. Thus, the twin sliding mode controller (TSMC) is designed with two separate proportional-integral-derivative boundary surfaces in order to reduce the chattering and improve the controller's response. Due to the simplicity of the TSMC structure, the proposed TSMC causes no difficulty for users to realize it in practice. In order to show how the TSMC may improve system performance, this study develops an experimental unmanned helicopter test-bed to assess the performance of the proposed controller. The simulation results of this work validate that the tracking error of the TSMC is not only smaller but also converges more quickly than that of the conventional SMC. Unlike the conventional SMC method, the proposed TSMC is capable of achieving the desired control qualities and tracking performance. As shown in the flight test results, the 2-distance-root-mean-squared (2DRMS) position error is less than 5 m. The flight test results presented in the dissertation are found to be consistent with the simulation results.
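
For readers unfamiliar with the chattering issue mentioned above, the standard first-order sliding mode law and its boundary-layer smoothing can be written as follows. This is the textbook form only; the thesis's TSMC replaces the single boundary layer with two separate PID-type boundary surfaces, whose exact form we do not reproduce here.

```latex
% Textbook first-order SMC with boundary-layer smoothing (not the thesis's
% exact TSMC law, which uses two PID-type boundary surfaces).
\[
  s = \dot{e} + \lambda e, \qquad \lambda > 0, \qquad e = x - x_{d}
\]
\[
  u = u_{\mathrm{eq}} - K\,\operatorname{sign}(s)
  \quad\text{(ideal sliding mode; infinitely fast switching causes chattering)}
\]
\[
  u = u_{\mathrm{eq}} - K\,\operatorname{sat}\!\left(\tfrac{s}{\phi}\right),
  \qquad
  \operatorname{sat}(z) =
  \begin{cases}
    z, & |z| \le 1,\\
    \operatorname{sign}(z), & |z| > 1,
  \end{cases}
\]
```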
20

Surendran, Sudhakar. "A Systematic Approach To Synthesis Of Verification Test-Suites For Modular SoC Designs." Thesis, 2006. http://hdl.handle.net/2005/397.

Full text
Abstract:
SoCs (systems on chips) are complex designs with heterogeneous modules (CPU, memory, etc.) integrated in them. Verification is one of the important stages in designing an SoC: the process of checking whether the transformation from architectural specification to design implementation is correct. Verification involves creating the following components: (i) a testplan that identifies the conditions to be verified, (ii) a testcase that generates the stimuli to verify the conditions identified, and (iii) a test-bench that applies the stimuli and monitors the output from the design. Verification consumes up to 70% of the total design time, largely due to the complex and manual nature of the verification task. To reduce the time spent in verifying the design, the components used for verification can be generated automatically or created at an abstract level (to reduce complexity) and reused. In this work we present a methodology to synthesize testcases from reusable code segments and abstract specifications. Our methodology consists of the following major steps: (i) identifying the structure of testcases, (ii) identifying code segments of testcases that can be reused from one SoC to another, (iii) identifying properties of an SoC and its modules that can be used to synthesize the SoC-specific code segments of the testcase, and (iv) proposing a synthesizer that uses the code segments, the properties and the abstract specification to synthesize testcases. We discuss two specific classes of testcases: testcases for verifying the memory modules and testcases for verifying the data transfer modules. These are considered since they form a significantly large subset of the device functionality. We implement a prototype testcase generator and present an example to illustrate the use of the methodology for each of these classes. The use of our methodology enables (i) the automatic creation of testcases that are correct by construction and (ii) reuse of the testcase code segments from one SoC to another. Some of the properties (of the modules and the SoC) presented in our work can easily be made part of the architectural specification and hence can further reduce the effort needed to create them.
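
The synthesis flow described above, reusable code segments specialised with SoC and module properties drawn from an abstract specification, can be pictured as template instantiation. The sketch below is a minimal rendering of that idea with invented property names; it is not the thesis's synthesizer.

```python
# Hypothetical testcase synthesis: a reusable code segment (template) is
# specialised with module properties taken from an abstract specification.
from string import Template

memory_test_segment = Template("""\
// Reusable segment: walking write/read check for module '$module'
for (addr = $base; addr < $base + $size; addr += $stride) {
    write(addr, 0x1);
    check(read(addr) == 0x1);
}""")

soc_properties = {                      # invented example properties
    "module": "sram0", "base": "0x40000000", "size": "0x1000", "stride": 4,
}

print(memory_test_segment.substitute(soc_properties))
```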
21

Pannell, Zachary William. "Design of a Highly Constrained Test System for a 12-bit, 16-channel Wilkinson ADC." 2009. http://trace.tennessee.edu/utk_gradthes/549.

Full text
Abstract:
Outer space is a very harsh environment that can cause electronics not to operate as originally intended. Aside from the extreme amount of radiation found in space, temperatures can also change very dramatically in a relatively small time frame. In order to test electronics that will be used in this environment, they first need to be tested on Earth under replicated conditions. Vanderbilt University designed a dewar that allows devices to be tested at these extreme temperatures while being irradiated. For this thesis, a test setup that met all of the dewar's constraints was designed to allow a 12-bit, 16-channel analog-to-digital converter to be tested while inside.
22

Lata, Kusum. "Formal Verification Of Analog And Mixed Signal Designs Using Simulation Traces." Thesis, 2010. http://etd.iisc.ernet.in/handle/2005/1271.

Full text
Abstract:
The conventional approach to validating analog and mixed signal designs utilizes extensive SPICE-level simulations. The main challenge in this approach is knowing when all important corner cases have been simulated. An alternative approach is to use formal verification techniques. Formal verification techniques have gained widespread popularity in the digital design domain, but for analog and mixed signal designs a large number of test scenarios need to be designed to generate sufficient simulation traces to test all the specified system behaviours. Analog and mixed signal designs can be formally modeled as hybrid systems, and therefore techniques used for formal analysis and verification of hybrid systems can be applied to them. Generally, formal verification tools for hybrid systems work at an abstract level, where we model the systems in terms of differential or algebraic equations. However, analog and mixed signal system designers are most comfortable designing circuits at the transistor level. To bridge the gap between abstraction-level verification and design validation at the transistor level, the key question we need to address is: can we formally verify the circuits at the transistor level itself? To this end we have proposed a framework for the formal verification of analog and mixed signal designs that uses SPICE simulation traces in a hybrid systems formal verification tool (Checkmate from CMU). An extension to a formal verification approach for hybrid systems is proposed to verify analog and mixed signal (AMS) designs. AMS designs can be formally modeled as hybrid systems and therefore lend themselves to the formal analysis and verification techniques applied to hybrid systems. The proposed approach employs simulation traces obtained from an actual design implementation of AMS circuit blocks (for example, in the form of SPICE netlists) to carry out formal analysis and verification. This enables the same platform used for formally validating an abstract model of an AMS design to also be used for validating its different refinements and design implementation, thereby providing a simple route to formal verification at different levels of implementation. Our approach is illustrated through case studies using simulation traces from different frameworks, i.e. the Simulink/Stateflow framework and SPICE. We demonstrate the feasibility of our approach around Checkmate, with case studies covering both hybrid systems and analog and mixed signal designs.
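
As a concrete picture of "modeling AMS designs as hybrid systems", consider a relaxation oscillator: linear continuous dynamics in each discrete mode, with mode switches at comparator thresholds. This small model is our own illustrative example of the kind of abstraction such tools consume, not a circuit from the thesis.

```latex
% Hypothetical hybrid-automaton model of an RC relaxation oscillator:
% two discrete modes (charge/discharge) with linear continuous dynamics.
\[
  \text{mode } q_{1}\ (\text{charge}):\quad
  \dot{v} = \frac{V_{dd} - v}{RC},
  \qquad \text{guard } v \ge V_{\mathrm{high}} \Rightarrow q_{1} \to q_{2}
\]
\[
  \text{mode } q_{2}\ (\text{discharge}):\quad
  \dot{v} = \frac{-v}{RC},
  \qquad \text{guard } v \le V_{\mathrm{low}} \Rightarrow q_{2} \to q_{1}
\]
\[
  \text{property to verify: }\;
  \Box\,\bigl(V_{\mathrm{low}} - \varepsilon \le v \le V_{\mathrm{high}} + \varepsilon\bigr)
\]
```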