To view the other types of publications on this topic, follow the link: Functional verification of digital systems.

Dissertations on the topic "Functional verification of digital systems"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Check out the top 36 dissertations for your research on the topic "Functional verification of digital systems".

Next to every entry in the list of references, the option "Add to bibliography" is available. Use it, and a bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the academic publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Malkoc, Veysi. "Sequential alignment and position verification system for functional proton radiosurgery". CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2535.

Abstract:
The purpose of this project is to improve the existing version of the Sequential Alignment and Position Verification System (SAPVS) for functional proton radiosurgery and to evaluate its performance after improvement.
2

Prado, Bruno Otávio Piedade. "IVM: uma metodologia de verificação funcional interoperável, iterativa e incremental". Repositório Institucional da UFS, 2009. https://ri.ufs.br/handle/riufs/1672.

Abstract:
The growing demand for electronic devices and ever-higher integration capabilities have created extremely complex systems on a single chip, known as System-on-Chip or SoC. Running counter to this tendency, the time-to-market in which these systems must be built is continually being reduced, forcing far more functionality to be implemented in ever shorter periods of time. The need for greater quality control of the final product demands the functional verification activity, which consists of using a set of techniques to stimulate the system in search of bugs. This activity is extremely expensive and necessary, consuming up to about 80% of the final product cost. It is in this context that this work proposes a functional verification methodology called IVM (an interoperable, iterative and incremental methodology) that provides the means to guarantee the delivery of high-quality systems while still meeting the strict time constraints imposed by the market. Based on well-established and trusted methodologies such as OVM and VeriSC, IVM defines an architectural organization and an activity flow that incorporate the main characteristics of both previously disjoint approaches. This integration of techniques and concepts results in a more efficient verification flow, allowing systems to meet the expected cost, schedule and quality.
3

Vavro, Tomáš. "Periferie procesoru RISC-V". Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445553.

Abstract:
The RISC-V platform is one of the leaders in the computer and embedded systems industry. With the increasing use of these systems, the demand for peripherals for implementations of this platform is growing. This thesis deals with the FU540-C000 processor from SiFive, one of the implementations of the RISC-V architecture, and its basic peripherals. Based on an analysis, a UART circuit for asynchronous serial communication was selected from among the processor's peripherals. The aim of this master's thesis is to design and implement the peripheral in a hardware description language, and then create a verification environment through which the functionality of the implementation is verified.
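In such a verification environment, a scoreboard typically checks the DUT's serialized output against a golden reference model of the framing. A minimal sketch of such a reference model, assuming standard 8N1 framing (our illustration, not code from the thesis):

```python
# Hypothetical golden model of 8N1 UART framing (1 start bit, 8 data bits
# LSB-first, 1 stop bit), usable as a scoreboard reference when comparing
# an RTL UART's serial output against the expected bit sequence.
def uart_frame(byte: int) -> list[int]:
    assert 0 <= byte <= 0xFF
    bits = [0]                                   # start bit (line driven low)
    bits += [(byte >> i) & 1 for i in range(8)]  # data bits, LSB first
    bits.append(1)                               # stop bit (line idles high)
    return bits

# 0x55 = 0b01010101 alternates on the wire after the start bit
assert uart_frame(0x55) == [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```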
4

Wang, Xuan. "Verification of digital controller implementations". Diss., Brigham Young University, 2005. http://contentdm.lib.byu.edu/ETD/image/etd1073.pdf.

5

Sobel, Ann E. Kelley. "Modular verification of concurrent systems". The Ohio State University, 1986. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487267546983528.

6

Antti, William. "Virtualized Functional Verification of Cross-Platform Software Applications". Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74599.

Abstract:
With so many developers writing code, and more choosing to become developers every day, tools that aid the work process are needed. With testing being done for multiple different devices and sources, there is a need to make it better and more efficient. This thesis explores connecting a variety of different tools, such as version control, project management, issue tracking and test systems, as a possible solution. A solution was implemented and then analyzed through a questionnaire answered by developers. For example, 75% of respondents answered 5 when asked whether they liked the connection between the issue-tracking system and the test results, and 75% also gave a 5 when asked whether they liked the way the test results were presented. The answers about the implementation made it possible to conclude that a solution solving some of the presented problems is achievable: a better way to connect various tools to present and analyze test results coming from multiple different sources.
7

Ahmad, Manzoor. "Modeling and verification of functional and non functional requirements of ambient, self adaptative systems". PhD thesis, Université Toulouse le Mirail - Toulouse II, 2013. http://tel.archives-ouvertes.fr/tel-00965934.

Abstract:
The overall contribution of this thesis is an integrated approach for modeling and verifying the requirements of Self Adaptive Systems using Model Driven Engineering techniques. Model Driven Engineering is primarily concerned with reducing the gap between the problem and software implementation domains through the use of technologies that support systematic transformation of problem-level abstractions to software implementations. Using these techniques, we bridge this gap through models that describe complex systems at multiple levels of abstraction and through automated support for transforming and analyzing these models. We take requirements as input and divide them into Functional and Non Functional Requirements. We then use a process to identify those requirements that are adaptable and those that cannot be changed. We then introduce the concepts of Goal Oriented Requirements Engineering for modeling the requirements of Self Adaptive Systems, where Non Functional Requirements are expressed in the form of goals, which is much richer and more complete in defining relations between requirements. We have identified some problems in the conventional methods of requirements modeling and property verification using existing techniques, which do not take into account the adaptability features associated with Self Adaptive Systems. Our proposed approach takes these adaptable requirements into account, and we provide various tools and processes that we developed for the requirements modeling and verification of Self Adaptive Systems. We validate the proposed approach by applying it to two different case studies in the domain of Self Adaptive Systems.
8

Karimibiuki, Mehdi. "Post-silicon code coverage for functional verification of systems-on-chip". Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42967.

Abstract:
Post-silicon validation requires effective techniques to better evaluate the functional correctness of modern systems-on-chip. Coverage is the standard measure for validation effectiveness and is extensively used pre-silicon. However, there is little data evaluating the coverage of post-silicon validation efforts on industrial-scale designs. This thesis addresses this knowledge gap. We employ code coverage, one of the most frequently used coverage techniques in simulation, and apply it post-silicon. To show our coverage methodology in practice, we use an industrial-size open-source SoC that is based on the SPARC architecture and is synthesizable to FPGA. We instrument code coverage in a number of IP cores and boot Linux as our experiment to evaluate coverage; booting an OS is a typical industrial post-silicon test. We also compare coverage between pre-silicon directed tests and the post-silicon Linux boot. Our results show that in some blocks the pre-silicon and post-silicon tests can achieve markedly different coverage figures: in one block we measured over 50 percentage points of coverage difference between the pre- and post-silicon results, which signifies the importance of post-silicon coverage. Moreover, we calculate the area overhead imposed by the additional coverage circuitry on-chip. We apply state-of-the-art software analysis techniques to reduce the excessively large overhead yet preserve data accuracy. The results in this thesis provide valuable guidance for future research in post-silicon coverage.
9

Kriouile, Abderahman. "Formal methods for functional verification of cache-coherent systems-on-chip". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM041/document.

Abstract:
State-of-the-art System-on-Chip (SoC) architectures integrate many different components, such as processors, accelerators, memories, and I/O blocks; some of those components, but not all, may have caches. Because the effort of validation with simulation-based techniques, currently used in industry, grows exponentially with the complexity of the SoC, this thesis investigates the use of formal verification techniques in this context. More precisely, we use the CADP toolbox to develop and validate a generic formal model of a heterogeneous cache-coherent SoC compliant with the recent AMBA 4 ACE specification proposed by ARM. We use a constraint-oriented specification style to model the general requirements of the specification. We verify system properties on both the constrained and unconstrained models to detect cache coherency corner cases. We take advantage of the parametrization of the proposed model to produce a comprehensive set of counterexamples of non-satisfied properties in the unconstrained model. The results of formal verification are then used to improve industrial simulation-based verification techniques in two respects. On the one hand, we suggest using the formal model to assess the sanity of an interface verification unit. On the other hand, in order to generate clever semi-directed test cases from temporal logic properties, we propose a two-step approach. One step consists in generating system-level abstract test cases using the model-based testing tools of the CADP toolbox. The other step consists in refining those tests into interface-level concrete test cases that can be executed at RTL level with a commercial Coverage-Directed Test Generation tool. We found that our approach helps in the transition between interface-level and system-level verification, facilitates the validation of system-level properties, and enables early detection of bugs in both the SoC and the commercial test bench.
10

Li, Lun. "Integrated techniques for the formal verification and validation of digital systems". Ann Arbor, Mich.: ProQuest, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3214772.

Abstract:
Thesis (Ph.D. in Computer Engineering)--S.M.U.
Title from PDF title page (viewed July 10, 2007). Source: Dissertation Abstracts International, Volume: 67-04, Section: B, page: 2151. Adviser: Mitchell A. Thornton. Includes bibliographical references.
11

Håkansson, Johannes. "Plant Model Generator from Digital Twin for Purpose of Formal Verification". Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-83360.

Abstract:
This master's thesis covers a way to automatically generate a formal model for plant verification from plant traces. The solution is developed from trace data stemming from a digital twin of a physical plant. The final goal is to automatically generate a formal model of the plant that can be used for model checking, verifying the safety and functional properties of the actual plant. The solution for this specific setup is then generalized, and a general approach for other systems is discussed. Furthermore, state machine generation is introduced: state machine data is generated from traces and is planned to be used in the future as an intermediate step between the trace data and model generation. The digital twin used in this project was implemented by Aalto University across several programs: the visual part, which also records the twin's behavior, is implemented in Visual Components, and the controller is built in nxtSTUDIO. The symbolic model checker NuSMV is used to verify the functional properties of the plant.
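The trace-to-state-machine step can be pictured with a small sketch (our illustration, not the thesis's implementation; it assumes traces are plain sequences of observed plant states):

```python
from collections import defaultdict

# Sketch: derive a transition relation from recorded plant traces.
# Every consecutive pair of observed states contributes one transition;
# the result could then be emitted, e.g., as a NuSMV model for checking
# safety and functional properties.
def transitions_from_traces(traces):
    relation = defaultdict(set)
    for trace in traces:
        for current, nxt in zip(trace, trace[1:]):
            relation[current].add(nxt)
    return relation

traces = [["idle", "extending", "extended", "retracting", "idle"]]
print(dict(transitions_from_traces(traces)))
# {'idle': {'extending'}, 'extending': {'extended'}, ...}
```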
12

Carlsson, Daniel. "Development of an ISO 26262 ASIL D compliant verification system". Thesis, Linköpings universitet, Programvara och system, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-90109.

Abstract:
In 2011 a new functional safety standard for electronic and electrical systems in vehicles was published, called ISO 26262. This standard concerns the whole lifecycle of the safety-critical elements used in cars, including the development process of such elements. As the correctness of the tools used when developing such an element is critical to the safety of the element, the standard includes requirements concerning the software tools used in the development, including verification tools. These requirements mainly specify that a developer of a safety-critical element should provide proof of their confidence in the software tools they are using. One recommended way to gain this confidence is to use tools developed in accordance with a “relevant subset of [ISO 26262]”. This project aims to develop a verification system in accordance with ISO 26262, exploring how and which specifications should be included in this “relevant subset” of ISO 26262 and to what extent these can be included in their current form. The work concludes with the development of a single safety element of the verification system, to give a demonstration of the viability of such a system.
13

Ocean, Michael James. "The Sensor Network Workbench: Towards Functional Specification, Verification and Deployment of Constrained Distributed Systems". Boston University Computer Science Department, 2009. https://hdl.handle.net/2144/1713.

Abstract:
As the commoditization of sensing, actuation and communication hardware increases, so does the potential for dynamically tasked sense-and-respond networked systems (i.e., Sensor Networks or SNs) to replace existing disjoint and inflexible special-purpose deployments (closed-circuit security video, anti-theft sensors, etc.). While various solutions have emerged to many individual SN-centric challenges (e.g., power management, communication protocols, role assignment), perhaps the largest remaining obstacle to widespread SN deployment is that those who wish to deploy, utilize, and maintain a programmable Sensor Network lack the programming and systems expertise to do so. The contributions of this thesis center on the design, development and deployment of the SN Workbench (snBench). snBench embodies an accessible, modular programming platform coupled with a flexible and extensible run-time system that, together, support the entire life-cycle of distributed sensory services. As it is impossible to find a one-size-fits-all programming interface, this work advocates the use of tiered layers of abstraction that enable a variety of high-level, domain-specific languages to be compiled to a common (thin-waist) tasking language; this common tasking language is statically verified and can be subsequently re-translated, if needed, for execution on a wide variety of hardware platforms. snBench provides: (1) a common sensory tasking language (Instruction Set Architecture) powerful enough to express complex SN services, yet simple enough to be executed by highly constrained resources with soft, real-time constraints, (2) a prototype high-level language (and corresponding compiler) to illustrate the utility of the common tasking language and the tiered programming approach in this domain, (3) an execution environment and a run-time support infrastructure that abstract a collection of heterogeneous resources into a single virtual Sensor Network, tasked via this common tasking language, and (4) novel formal methods (i.e., static analysis techniques) that verify safety properties and infer implicit resource constraints to facilitate resource allocation for new services. This thesis presents these components in detail, as well as two specific case studies: the use of snBench to integrate physical and wireless network security, and the use of snBench as the foundation for semester-long student projects in a graduate-level Software Engineering course.
14

Gupta, Anil K. "Functional fault modeling and test vector development for VLSI systems". Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/90932.

Abstract:
The attempts at classification of functional faults in VLSI chips have not been very successful in the past. The problem is blown out of proportion because methods used for testing have not evolved at the same pace as the technology. The fault models proposed for LSI systems are no longer capable of testing VLSI devices efficiently; the stuck-at and short/open fault models are outdated. Despite this fact, these old models are used in industry with some modifications. Also, these gate-level fault models are very time-consuming and costly to run on mainframe computers. In this thesis, a new method is developed for fault modeling at the functional level. This new method, called 'Model Perturbation', is shown to be very simple and viable for automation. Some general sets of rules are established for fault selection and insertion. Based on the functional fault model introduced, a method of test vector development is formulated. Finally, the results obtained from functional fault simulation are related to gate-level coverage. The validity and simplicity of using these models for combinational and sequential VLSI circuits is discussed. As an example, the modeling of IBM's AMAC chip, the work on which was done under contract YD 190121, is described.
M.S.
15

Ku, Hyunchul. "Behavioral modeling of nonlinear RF power amplifiers for digital wireless communication systems with implications for predistortion linearization systems". Diss., Georgia Institute of Technology, 2003. http://etd.gatech.edu/theses/available/etd-04052004-180035/unrestricted/ku%5Fhyunchul%5F200312%5Fphd.pdf.

16

Hirose, Takayuki. "Envisioning Emergent Behaviors of Socio-Technical Systems Based on Functional Resonance Analysis Method". Kyoto University, 2020. http://hdl.handle.net/2433/259040.

Abstract:
Degree program note: Collaborative Graduate Program in Design (デザイン学大学院連携プログラム)
Kyoto University (京都大学)
Doctor of Engineering (博士(工学)), new-system course doctorate
Degree number: 甲第22772号 (工博第4771号)
Department of Mechanical Engineering and Science, Graduate School of Engineering, Kyoto University (京都大学大学院工学研究科機械理工学専攻)
Examiners: Prof. 椹木 哲夫, Prof. 松原 厚, Prof. 小森 雅晴
Conferred under Article 4, Paragraph 1 of the Degree Regulations
17

Gerber, Matthew. "FORMALIZATION OF INPUT AND OUTPUT IN MODERN OPERATING SYSTEMS: THE HADLEY MODEL". Doctoral diss., University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3133.

Abstract:
We present the Hadley model, a formal descriptive model of input and output for modern computer operating systems. Our model is intentionally inspired by the Open Systems Interconnection model of networking; I/O as a process is defined as a set of translations between a set of computer-sensible forms, or layers, of information. To illustrate an initial application domain, we discuss the utility of the Hadley model and a potential associated I/O system as a tool for digital forensic investigators. To illustrate practical uses of the Hadley model we present the Hadley Specification Language, an essentially functional language designed to allow the translations that comprise I/O to be written in a concise format allowing for relatively easy verifiability. To further illustrate the utility of the language we present a read/write Microsoft DOS FAT12 and read-only Linux ext2 file system specification written in the new format. We prove the correctness of the read-only side of these descriptions. We present test results from operation of our HSL-driven system both in user mode on stored disk images and as part of a Linux kernel module allowing file systems to be read. We conclude by discussing future directions for the research.
Ph.D.
School of Computer Science
Engineering and Computer Science
Computer Science
18

von Wenckstern, Michael. "Verification of Structural and Extra-Functional Properties in Component and Connector Models for Embedded and Cyber-Physical Systems". Düren: Shaker, 2020. http://d-nb.info/1208599623/34.

19

Flodmark, Erik, and Carl Sävendahl. "Managing a digital transformation: A case study of digitizing functional operations in a sociotechnical system". Thesis, KTH, Skolan för industriell teknik och management (ITM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300385.

Abstract:
Sweden has the ambition to be the world-leading country in leveraging the opportunities of digitalization in the healthcare sector. In parallel, the Swedish Research Council highlights that conducting more clinical studies is essential to improving healthcare. Hence, considering the need for increased operational efficiency as an enabler of increased clinical activity, a digital transformation of the industry was identified as a potential catalyst. The study uses a cognitive work analysis framework to investigate the potential benefits and risks of digitizing the functional operations of a contract management department for clinical studies at a Swedish university hospital. The aim is then to determine the properties necessary to consider when managing a digital transformation. The analysis identified three key benefits of digitization: 1) transparent data sharing, 2) standardized contract management and 3) efficient operations. These three aspects are currently insufficient at the department, hindering the objective of increasing clinical activity. The study found that a digital transformation would be suitable to mitigate these insufficiencies, to enable the organization to scale, and to improve synchronization between work processes, consequently facilitating the achievement of the objectives. The key properties to consider when managing a digital transformation were found to be interoperability, quality, adaptability and usability. In addition, safety was found to be critical to consider in the transformation, as the contract management department operates under rigid laws and regulations on ethics and patient security with which digitized processes must comply. The results contribute to the field of cognitive systems engineering. However, the study has limitations regarding the reliability and generalizability of the results. The findings are based on a single case study, which may not be representative of the industry in general nor of university hospitals in particular. In addition, since no actual digitalization effort was performed at the organization during the study, the proposed properties are speculative by design. Consequently, future research should study an actual implementation process and whether the proposed considerations are sufficient to realize the suggested benefits of such a digitalization.
20

OLIVEIRA, Herder Fernando de Araújo. "BVM: Reformulação da metodologia de verificação funcional VeriSC". Universidade Federal de Campina Grande, 2010. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/1559.

Abstract:
The development process of a complex digital circuit consists of several stages. One of them is functional verification. This stage can be considered one of the most important, because it aims to demonstrate that the functionality of the circuit to be produced conforms to its specification. However, besides consuming a large share of resources, the complexity of functional verification grows with the complexity of the hardware to be verified. The use of an efficient functional verification methodology, and of tools that assist the verification engineer, is therefore of great value. In this context, this work reformulates the functional verification methodology VeriSC, resulting in a new methodology called BVM (Brazil-IP Verification Methodology). VeriSC is implemented in SystemC and uses the SCV (SystemC Verification Library) and BVE (Brazil-IP Verification Extensions) libraries, while BVM is implemented in SystemVerilog and based on the concepts and library of OVM (Open Verification Methodology). Furthermore, this work adapts the functional verification support tool eTBc (Easy Testbench Creator) to support BVM. Case studies conducted within the Brazil-IP project show that BVM increases the productivity of the verification engineer during functional verification compared to VeriSC.
21

Cong, Kai. "Post-silicon Functional Validation with Virtual Prototypes". Thesis, Portland State University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3712209.

Abstract:

Post-silicon validation has become a critical stage in the system-on-chip (SoC) development cycle, driven by increasing design complexity, higher levels of integration and decreasing time-to-market. According to recent reports, post-silicon validation effort comprises more than 50% of the overall development effort of a 65nm SoC. Though post-silicon validation covers many aspects, ranging from electronic properties of hardware to performance and power consumption of whole systems, a central task remains validating the functional correctness of both hardware and its integration with software. There are several key challenges to achieving accelerated and low-cost post-silicon functional validation. First, there is only limited silicon observability and controllability; second, there is no good test coverage estimation over a silicon device; third, it is difficult to generate good post-silicon tests before a silicon device is available; fourth, there are no effective software robustness testing approaches to ensure the quality of hardware/software integration.

We propose a systematic approach to accelerating post-silicon functional validation with virtual prototypes. Post-silicon test coverage is estimated in the pre-silicon stage by evaluating the test cases on the virtual prototypes. Such analysis is first conducted on the initial test suite assembled by the user and subsequently on the expanded test suite, which includes test cases that are automatically generated. Based on the coverage statistics of the initial test suite on the virtual prototypes, test cases are automatically generated to improve the test coverage. In the post-silicon stage, our approach supports coverage evaluation of test cases on silicon devices to ensure the fidelity of early coverage evaluation. The generated test cases are issued to silicon devices to detect inconsistencies between virtual prototypes and silicon devices using conformance checking. We further extend the test case generation framework to generate and inject fault scenarios with virtual prototypes for driver robustness testing. Besides virtual prototype-based fault injection, an automatic driver fault injection approach is developed to support runtime fault generation and injection for driver robustness testing. Since virtual prototypes enable early driver development, our automatic driver fault injection approach can be applied to driver testing in both pre-silicon and post-silicon stages.

For preliminary evaluation, we have applied our coverage evaluation and test generation to several network adapters and their virtual prototypes. We have conducted coverage analysis for a suite of common tests on both the virtual prototypes and silicon devices. The results show that our approach can estimate the test coverage with high fidelity. Based on the coverage estimation, we have employed our automatic test generation approach to generate additional tests. When the generated test cases were issued to both virtual prototypes and silicon devices, we observed significant coverage improvement, and we detected 20 inconsistencies between virtual prototypes and silicon devices, each of which reveals a virtual prototype or silicon device defect. After applying the virtual prototype-based fault injection approach to the virtual prototypes of three widely used network adapters, we generated and injected thousands of fault scenarios and found 2 driver bugs. For automatic driver fault injection, we have applied our approach to 12 widely used drivers with either virtual prototypes or silicon devices. After testing all these drivers, we found 28 distinct bugs.

22

Huynh, Nguyen. "Digital control and monitoring methods for nonlinear processes". PhD diss., Worcester Polytechnic Institute, 2006. http://www.wpi.edu/Pubs/ETD/Available/etd-100906-083012/.

Abstract:
Dissertation (Ph.D.)--Worcester Polytechnic Institute.
Keywords: Parametric optimization; nonlinear dynamics; functional equations; chemical reaction system dynamics; time scale multiplicity; robust control; nonlinear observers; invariant manifold; process monitoring; Lyapunov stability. Includes bibliographical references (leaves 92-98).
23

Guatto, Adrien. "A synchronous functional language with integer clocks". Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEE020/document.

Abstract:
This thesis addresses the design and implementation of a programming language for real-time streaming applications, such as video decoding. The model of Kahn process networks is a natural fit for this area and has been used extensively. In this model, a program consists of a set of parallel processes communicating via single-reader, single-writer queues. The strength of the model lies in its determinism. Synchronous functional languages such as Lustre are dedicated to critical embedded systems. A Lustre program defines a synchronous Kahn process network, that is, one which can be executed using finite queues and without deadlocks. This is enforced by a dedicated type system, the clock calculus, which establishes a global time scale throughout a program. The global time scale is used to define clocks: per-queue boolean sequences indicating, for each time step, whether a process produces or consumes a token in the queue. This information is used both for enforcing synchrony and for generating finite-state software or hardware. We propose and study integer clocks, a generalization of boolean clocks featuring arbitrarily big natural numbers. Integer clocks model the production or consumption of several values from the same queue in the course of a time step. We then rely on integer clocks to define the local time scale construction, which may hide time steps performed by a sub-program from the surrounding context. These principles are integrated into a clock calculus for a higher-order functional language. We study its properties, proving among other results that well-typed programs do not deadlock. We adjust the clock-directed code generation scheme of Lustre to generate finite-state digital synchronous circuits from typed programs. The typing information controls certain trade-offs between time and space in the generated circuits.
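To picture the notion, here is a small illustrative sketch (ours, not from the thesis) of an integer clock prescribing how many tokens appear on a queue at each time step:

```python
# An integer clock is a sequence of naturals giving, per time step, how many
# tokens a process produces on (or consumes from) a queue; a boolean clock
# is the special case where every entry is 0 or 1.
def produce(values, clock):
    it = iter(values)
    return [[next(it) for _ in range(n)] for n in clock]

# Three time steps emitting 2, 0 and 1 tokens respectively:
print(produce([10, 20, 30], [2, 0, 1]))  # [[10, 20], [], [30]]
```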
24

SILVEIRA, George Sobral. "Uma abordagem para suporte à verificação funcional no nível de sistema aplicada a circuitos digitais que empregam a Técnica Power Gating". Universidade Federal de Campina Grande, 2012. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/2146.

Abstract:
The semiconductor industry has invested heavily in the development of complex systems on a single chip, known as Systems-on-Chip (SoC), which are extensively used in portable devices. With the many features added to SoCs, the complexity of the development flow has increased, especially in the verification process, along with SoC power consumption. In recent years, concern about the energy consumed by electronic devices has grown. Among the various techniques used to reduce power consumption, Power Gating stands out for its efficiency. The verification process for this technique has typically been executed at the Register Transfer Level (RTL) of abstraction, based on the Common Power Format (CPF) and the Unified Power Format (UPF). Simulators that support CPF and UPF limit the verification to the RTL level or below; at this level, the Power Gating technique considerably increases the complexity of the verification process of current SoCs. Given this scenario, this work proposes a methodological approach for the functional verification, at the Electronic System Level (ESL) and at RTL, of digital circuits that employ the Power Gating technique, using a modified version of the OSCI (Open SystemC Initiative) simulator. Four case studies were performed, and the results demonstrated the effectiveness of the proposed solution.
25

BURKHARDT, ELLEN. "Optimization and investment decisions of electrical motors’ production line using discrete event simulation". Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280294.

Abstract:
More dynamic markets, shorter product life cycles and comprehensive variant management are challenges that dominate today's market. These maxims apply to the automotive sector, which is currently highly exposed to trade wars, changing mobility patterns and the emergence of new technologies and competitors. To meet these challenges, this thesis presents the creation of a digital twin of an existing production line for electric motors using discrete event simulation. Based on detailed literature research, a step-by-step construction of the simulation model of the production line using the software Plant Simulation is presented and argued. Finally, different experiments are carried out with the created model to show how a production line can be examined and optimized by means of simulation using different parameters. Within the scope of the experiments regarding the number of workpiece carriers, the number of operators and buffer sizes, the line was examined with respect to increasing its output. Furthermore, the simulation model was used to support decisions on future investments in additional XXX machines. Four different scenarios were examined and optimized. By examining the different parameters, optimization potentials of XXX% in the first scenario and up to XXX% in the fourth scenario were achieved. Finally, it was shown that the developed simulation model can be used as a tool for optimizing an existing production line and can generate useful investment information. Beyond that, the simulation model can be employed to investigate further business questions at hand for the specific production line in question.
26

Silva, Junior José Cláudio Vieira e. "Verificação de Projetos de Sistemas Embarcados através de Cossimulação Hardware/Software". Universidade Federal da Paraíba, 2015. http://tede.biblioteca.ufpb.br:8080/handle/tede/7856.

Abstract:
This work proposes an environment for the verification of heterogeneous embedded systems through distributed co-simulation. The verification occurs synchronously between the system software and the embedded hardware platform, using the High Level Architecture (HLA) as middleware. The novelty of this approach is not only to provide support for simulations, but also to allow synchronized integration with physical hardware devices. In this work, the Ptolemy framework is used as the simulation platform. The integration of HLA with Ptolemy and with hardware models opens up a vast set of applications, such as testing several devices at the same time, running the same or different applications or modules, using Ptolemy for real-time control of embedded systems, and distributing the execution of multiple embedded devices for performance improvement. Furthermore, the HLA-based approach allows any type of robot, as well as simulators other than Ptolemy, to be connected to the environment. Case studies are presented to prove the concept, showing the successful integration between Ptolemy and the HLA and the verification of systems using hardware-in-the-loop and robot-in-the-loop.
27

Yang, Xiaokun. "A High Performance Advanced Encryption Standard (AES) Encrypted On-Chip Bus Architecture for Internet-of-Things (IoT) System-on-Chips (SoC)". FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2477.

Abstract:
With industry expectations of billions of Internet-connected things, commonly referred to as the IoT, we see a growing demand for high-performance on-chip bus architectures with the following attributes: small scale, low energy, high security, and highly configurable structures for integration, verification, and performance estimation. Our research thus mainly focuses on addressing these key problems and finding the balance among all these requirements, which often work against each other. First of all, we proposed a low-cost and low-power System-on-Chip (SoC) bus architecture (IBUS) that can frame data transfers differently. The IBUS protocol provides two novel transfer modes, the block and state modes, and is also backward compatible with the conventional linear mode. In order to evaluate bus performance automatically and accurately, we also proposed an evaluation methodology based on the standard circuit design flow. Experimental results show that the IBUS-based design uses the least hardware resources and reduces energy consumption to half that of an AMBA Advanced High-Performance Bus (AHB) and Advanced eXtensible Interface (AXI). Additionally, the valid bandwidth of the IBUS-based design is 2.3 and 1.6 times that of the AHB- and AXI-based implementations, respectively. As IoT advances, privacy and security issues become top-tier concerns in addition to the high performance requirement of embedded chips. To leverage limited resources for tiny-scale chips and the overhead cost of complex security mechanisms, we further proposed an advanced IBUS architecture to provide structural support for the block-based AES algorithm. Our results show that the IBUS-based AES-encrypted design costs less in terms of hardware resources and dynamic energy (60.2%), and achieves higher throughput (1.6x) compared with AXI. Effectively dealing with automation in design and verification for mixed-signal integrated circuits is a critical problem, particularly when the bus architecture is new. Therefore, we further proposed a configurable and synthesizable IBUS design methodology. The flexible structure, together with bus wrappers, direct memory access (DMA), an AES engine, a memory controller, several mixed-signal verification intellectual properties (VIPs), and bus performance models (BPMs), forms the basis for integrated circuit design, allowing engineers to integrate application-specific modules and other peripherals to create complex SoCs.
APA, Harvard, Vancouver, ISO and other citation styles
28

Liu, Duo. „Function Verification of Combinational Arithmetic Circuits“. 2015. https://scholarworks.umass.edu/masters_theses_2/235.

Full text of the source
Annotation:
Hardware design verification is the most challenging part of the overall hardware design process, because design size and complexity are growing very fast while performance requirements keep rising. Conventional simulation-based verification methods cannot keep up with the rapid increase in design size, since it is impossible to exhaustively test all input vectors of a complex design. An important part of hardware verification is combinational arithmetic circuit verification. It draws a lot of attention because flattening the design into bit level, known as the bit-blasting problem, hinders the efficiency of many current formal techniques. The goal of this thesis is to introduce a robust and efficient formal verification method for combinational integer arithmetic circuits, based on an in-depth analysis of recent advances in computer algebra. The method proposed here solves the verification problem at bit level while avoiding the bit-blasting problem. It also avoids the expensive Groebner basis computation typically employed by symbolic computer algebra methods. The proposed method verifies the gate-level implementation of the design by representing the design components (logic gates and arithmetic modules) as polynomials in Z_{2^n}. It then transforms the polynomial representing the output bits (the “output signature”) into a unique polynomial in the input signals (the “input signature”) using gate-level information of the design. The computed input signature is then compared with the reference input signature (golden model) to determine whether the circuit behaves as anticipated. If the reference input signature is not given, our method can be used to compute (or extract) the arithmetic function of the design by computing its input signature. Additional tools, based on canonical word-level design representations (such as TED or BMD), can be used to determine the function that the computed input signature represents. We demonstrate the applicability of the proposed method to arithmetic circuit verification on a large number of designs.
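The signature-rewriting idea can be seen on a toy case. The sketch below, using sympy, models XOR and AND gates as integer polynomials and rewrites the weighted output signature 2c + s of a half adder into a polynomial in the primary inputs; the thesis works in Z_{2^n}, but plain integer arithmetic suffices for this 1-bit example:

```python
# Toy instance of signature rewriting: each gate is a pseudo-Boolean
# polynomial, and the output signature is expanded backwards into a
# polynomial in the primary inputs, then compared to the golden model.

import sympy as sp

a, b = sp.symbols("a b")
s = a + b - 2 * a * b     # XOR gate as a polynomial
c = a * b                 # AND gate as a polynomial

output_signature = 2 * c + s          # weighted sum of output bits
input_signature = sp.expand(output_signature)

assert input_signature == a + b       # golden model of a half adder
print(input_signature)                # -> a + b
```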
APA, Harvard, Vancouver, ISO and other citation styles
29

Beckett, Jason. „Forensic computing : a deterministic model for validation and verification through an ontological examination of forensic functions and processes“. 2010. http://arrow.unisa.edu.au:8081/1959.8/93190.

Full text of the source
Annotation:
This dissertation contextualises the forensic computing domain in terms of the validation of tools and processes. It explores the current state of forensic computing, comparing it to the traditional forensic sciences. The research then develops a classification system for the discipline's functions to establish an extensible base upon which a validation system is developed.
Thesis (PhD)--University of South Australia, 2010
APA, Harvard, Vancouver, ISO and other citation styles
30

Liu, Chien-Nan, and 劉建男. „On Computer-Aided Techniques for Functional Verification of Complex Digital Designs“. Thesis, 2001. http://ndltd.ncl.edu.tw/handle/85693563282339271518.

Full text of the source
Annotation:
PhD
National Chiao Tung University
Department of Electronics Engineering
89
Due to increasing design complexity, verification is now the major bottleneck of the entire design process. In order to verify the functionality of initial register-transfer-level (RTL) designs written in a hardware description language (HDL), two primary approaches, simulation and formal verification, are often used. However, both techniques encounter difficulties in dealing with increasing circuit complexity. The major problem of simulation-based approaches is the lack of metrics to gauge the quality of the test, which often results in a huge number of test benches for complex designs. Although formal verification techniques can address the quality concern of simulation-based approaches, they are often limited by computation resources when dealing with large circuits. Therefore, the coverage-driven approach, which combines the ideas of simulation and formal verification, has been proposed and is rapidly gaining popularity. Well-defined functional coverage metrics are used in this approach to perform a quantitative analysis of simulation completeness. With the coverage reports, verification engineers can focus their efforts on the untested areas and generate more patterns, with the help of formal techniques or the designers' knowledge, to achieve better functional coverage. Although 100% coverage still cannot guarantee a 100% error-free design, it provides a more systematic way to measure the completeness of the verification process, so that productivity can be greatly improved.

In this dissertation, we study the entire flow of the coverage-driven approach to the functional verification problem and propose several improvements for handling complex designs. Among the various functional coverage metrics, we choose the most general and complete one, finite state machine (FSM) coverage, as the target metric for the entire flow. Because the state transition graphs (STGs) of modern designs are often too large to be traversed completely, we propose an improved FSM coverage metric, semantic finite state machine (SFSM) coverage, which reduces the tested STGs to acceptable sizes by using the content of the HDL code. One possible way to deal with the huge state space of modern designs is to use abstraction techniques. In the early design stage, most design errors are related to the control part of the design. If we can separate the datapaths from the controllers and verify the control part only, we can effectively reduce the problem size. For this purpose, we propose an automatic controller extractor that extracts FSMs from the HDL descriptions and then selects suitable FSMs as the verified control part. Because we use the relationship between the current states and the next states of an FSM instead of predefined language constructs for the extraction, there are almost no restrictions on the writing style of the HDL code; no hints or comments in the source code are needed, either. After the FSMs are extracted from the HDL descriptions, we can easily analyze their FSM coverage and SFSM coverage during simulation. For coverage analysis, we propose a novel approach to functional coverage measurement based on the value change dump (VCD) files produced by the simulators. Because we analyze functional coverage by post-processing, the usage flow of our approach is much easier and smoother than that of existing instrumentation-based coverage tools, and the flexibility in choosing different coverage metrics and measured code regions can be easily increased with competitive performance.

For the uncovered state transitions reported in the coverage analysis step, we may have to generate more test benches to traverse those transitions. Therefore, we propose an automatic test bench generation engine that overcomes the memory issues of symbolic techniques. Using the results of the proposed FSM extraction, we can reasonably partition the HDL designs into small FSMs; by applying a “divide and conquer” strategy to those small FSMs, the peak memory requirement can be significantly reduced to handle large cases. Besides the techniques mentioned above, we also propose an auxiliary technique to help users reduce verification time. In manufacturing test, a well-known technique, “design-for-testability” (DFT), is often used to reduce testing time. We apply a similar idea to functional verification and propose an efficient “design-for-verification” (DFV) technique. With its help, we can greatly reduce the number of required functional patterns without any loss of verification quality.
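The post-processing style of coverage measurement can be sketched in a few lines: given a state trace recovered from a VCD dump and the design's state transition graph, transition coverage reduces to a set intersection. The trace and STG below are invented for illustration:

```python
# Minimal sketch of FSM transition coverage computed by post-processing
# a simulated state trace (trace and STG are invented examples).

def transition_coverage(trace, stg):
    """trace: sequence of state names observed during simulation;
    stg: set of (src, dst) transitions in the state transition graph."""
    seen = {(s, t) for s, t in zip(trace, trace[1:]) if (s, t) in stg}
    return seen, len(seen) / len(stg)

stg = {("IDLE", "RUN"), ("RUN", "DONE"), ("RUN", "ERR"), ("ERR", "IDLE")}
trace = ["IDLE", "RUN", "DONE"]       # e.g. recovered from a VCD dump
covered, ratio = transition_coverage(trace, stg)
print(f"covered {ratio:.0%}: {sorted(covered)}")  # error paths untested
```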
APA, Harvard, Vancouver, ISO and other citation styles
31

Miller, Adam Robert. „Development and verification of parameterized digital signal processing macros for microelectronic systems“. 2003. http://etd.utk.edu/2003/MillerAdam.pdf.

Full text of the source
Annotation:
Thesis (M.S.)--University of Tennessee, Knoxville, 2003.
Title from title page screen (viewed Oct. 14, 2003). Thesis advisor: D.W. Bouldin. Document formatted into pages (v, 106 p. : ill. (some col.)). Vita. Includes bibliographical references (p. 49-50).
APA, Harvard, Vancouver, ISO and other citation styles
32

Costa, Fernando. „Verification of a computer simulator for digital transmission over twisted pairs“. Thesis, 1990. https://hdl.handle.net/10539/24286.

Full text of the source
Annotation:
A dissertation submitted to the Faculty of Engineering, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Engineering
This dissertation verifies a computer simulation package for modelling pulse transmission over digital subscriber loops. Multigauge sections on subscriber cables can be studied. The model used for each section incorporates skin, proximity and eddy current effects, and allows important quantities such as near-end echo and the overall transmission distortion of pulses to be predicted. An experimental facility has been established in the laboratory for the purpose of validating the results produced by the simulator against results obtained over real cables. The experimental facility has as far as possible been automated by making use of computer-controlled equipment for direct setup of the experiment, data transfer, and analysis. The results obtained from the pulse propagation program and those obtained from measurements are in close agreement, rendering the computer simulation package useful for analysing the performance of multigauge digital subscriber loops.
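The sqrt(f) growth of conductor resistance that the skin-effect model must capture can be sketched as follows; the corner frequency and DC resistance are illustrative values, not figures from the dissertation:

```python
# Rough sketch of skin-effect behaviour: above a corner frequency the
# conductor's AC resistance grows roughly with sqrt(f). Values are
# illustrative only.

import math

def r_ac(f_hz, r_dc=170.0, f_corner=100e3):
    """Per-km AC resistance of a twisted pair, sqrt(f) asymptote."""
    if f_hz <= f_corner:
        return r_dc
    return r_dc * math.sqrt(f_hz / f_corner)

for f in (10e3, 100e3, 1e6):
    print(f"{f/1e3:7.0f} kHz -> {r_ac(f):7.1f} ohm/km")
```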
APA, Harvard, Vancouver, ISO and other citation styles
33

Lin, Feng-Li, and 林峰立. „SIP Developments and SOC Implementations for Multi-Functional Digital Protection Relays in Power Systems“. Thesis, 2004. http://ndltd.ncl.edu.tw/handle/59827655341204645461.

Full text of the source
Annotation:
Master's
Chang Gung University
Graduate Institute of Electrical Engineering
92
A method of designing a multi-functional digital protective relay chip using SIP cores is proposed. The protective relay includes over-current/under-current relays, over-voltage/under-voltage relays, frequency detection, and an RS232 interface. Compared with existing microprocessor designs, SoC designs in general have well-known advantages such as lower cost, lower design complexity, higher reliability, higher operating speed, and better integration. The aim of our work is to define and implement each required SIP core. To adapt to various power protection equipment, the corresponding protective relay chip can be designed efficiently by simply updating SIP cores and modifying some variable factors. To further reduce design complexity, the chip has been implemented using FPGAs.
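A behavioural sketch of the over-current function such a relay chip implements is given below; the thesis does not specify its time-current characteristic, so the widely used IEC 60255 standard-inverse curve is assumed here:

```python
# Behavioural model of an over-current relay (IEC 60255
# standard-inverse curve assumed; not from the thesis).

def trip_time(i_measured, i_pickup, tms=0.1):
    """IEC standard-inverse characteristic:
    t = TMS * 0.14 / ((I/Is)^0.02 - 1); no trip at or below pickup."""
    ratio = i_measured / i_pickup
    if ratio <= 1.0:
        return None                   # below pickup: never trips
    return tms * 0.14 / (ratio ** 0.02 - 1.0)

print(trip_time(1000.0, 400.0))       # ~0.76 s at 2.5x pickup current
```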
APA, Harvard, Vancouver, ISO and other citation styles
34

Trenfield, S. J., A. Goyanes, Richard Telford, D. Wilsdon, M. Rowland, S. Gaisford und A. W. Basit. „3D printed drug products: Non-destructive dose verification using a rapid point-and-shoot approach“. 2018. http://hdl.handle.net/10454/16553.

Full text of the source
Annotation:
Three-dimensional printing (3DP) has the potential to cause a paradigm shift in the manufacture of pharmaceuticals, enabling personalised medicines to be produced on demand. To facilitate integration into healthcare, non-destructive characterisation techniques are required to ensure final product quality. Here, the use of process analytical technologies (PAT), including near infrared spectroscopy (NIR) and Raman confocal microscopy, was evaluated on paracetamol-loaded 3D printed cylindrical tablets composed of an acrylic polymer (Eudragit L100-55). Using a portable NIR spectrometer, a calibration model was developed which successfully predicted drug concentration across the range of 4–40% w/w. The model demonstrated excellent linearity (R2 = 0.996) and accuracy (RMSEP = 0.63%), and the results were confirmed with conventional HPLC analysis. The model maintained high accuracy for tablets of a different geometry (torus shapes), a different formulation type (oral films), and when the polymer was changed from acrylic to cellulosic (hypromellose, HPMC). Raman confocal microscopy showed a homogeneous drug distribution, with paracetamol predominantly present in the amorphous form as a solid dispersion. Overall, this article is the first to report the use of a rapid ‘point-and-shoot’ approach as a non-destructive quality control method, supporting the integration of 3DP for medicine production into clinical practice.
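A hedged sketch of how such an NIR calibration is typically built and scored follows; PLS regression is the usual chemometric choice but is an assumption here, and the spectra are synthetic rather than the paper's data:

```python
# Sketch of an NIR calibration workflow: fit a regression model on
# spectra, then report R2 and RMSEP on held-out samples. Data are
# synthetic; PLS is assumed, not confirmed by the paper.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
conc = rng.uniform(4, 40, size=60)               # % w/w drug loading
spectra = np.outer(conc, rng.normal(1, 0.05, 200)) \
          + rng.normal(0, 0.5, (60, 200))        # fake NIR absorbances

train, test = slice(0, 45), slice(45, 60)
model = PLSRegression(n_components=3).fit(spectra[train], conc[train])
pred = model.predict(spectra[test]).ravel()

print("R2    =", r2_score(conc[test], pred))
print("RMSEP =", mean_squared_error(conc[test], pred) ** 0.5)
```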
Open Access funded by the Engineering and Physical Sciences Research Council (EPSRC), UK (grant EP/L01646X).
APA, Harvard, Vancouver, ISO and other citation styles
35

Krahn, Konrad. „Looking under the hood: unraveling the content, structure, and context of functional requirements for electronic recordkeeping systems“. 2012. http://hdl.handle.net/1993/8105.

Full text of the source
Annotation:
Functional requirements for electronic recordkeeping systems have emerged as a principal tool for archival and records management professionals to communicate electronic recordkeeping standards to both records creators and computer systems designers. Various functional or model requirements have been developed by governments and international organizations around the world to serve as tools for the design, evaluation, and implementation of recordkeeping systems that will satisfy these recordkeeping requirements. Through their evolution, functional requirements have become complex guiding documents covering an array of recordkeeping systems and preservation interests. Often misunderstood or simply ignored, the recordkeeping requirements at the heart of these specifications are crucial for ensuring the creation, maintenance, and preservation of electronic or digital records over time, for operational, accountability, archival, and historical purposes. This thesis examines the origins and evolution of these functional requirements, particularly through the contributions of the Pittsburgh project's study of electronic records as evidence and the University of British Columbia project's study of the preservation of trustworthy electronic records, which together articulated key foundational assumptions about electronic or digital recordkeeping and the structure of many of the functional requirements circulating today. By looking at their conception, development, and evolution, this thesis sheds light on the content, structure, and context of the most widely used functional requirements available. It evaluates the merits of their often competing assumptions and deliverables, and suggests that none represents a “silver bullet” that resolves the issues associated with electronic records, as each has limitations resting both on the ability of users to implement the requirements and on the rapidly changing landscape of electronic communication.
APA, Harvard, Vancouver, ISO and other citation styles
36

Lata, Kusum. „Formal Verification Of Analog And Mixed Signal Designs Using Simulation Traces“. Thesis, 2010. http://etd.iisc.ernet.in/handle/2005/1271.

Full text of the source
Annotation:
The conventional approach to validating analog and mixed signal designs relies on extensive SPICE-level simulations. The main challenge in this approach is knowing when all important corner cases have been simulated. An alternative is to use formal verification techniques. Formal verification has gained widespread popularity in the digital design domain, but for analog and mixed signal designs a large number of test scenarios must be designed to generate sufficient simulation traces to exercise all the specified system behaviours. Analog and mixed signal designs can be formally modeled as hybrid systems, so techniques for the formal analysis and verification of hybrid systems can be applied to them. Generally, formal verification tools for hybrid systems work at an abstract level, where the systems are modeled in terms of differential or algebraic equations. However, analog and mixed signal designers are most comfortable designing circuits at the transistor level. To bridge the gap between abstraction-level verification and design validation at the transistor level, the key question is: can we formally verify the circuits at the transistor level itself? To this end, we propose a framework for the formal verification of analog and mixed signal designs using SPICE simulation traces in a hybrid systems formal verification tool (CheckMate from CMU). An extension of a hybrid systems formal verification approach is proposed to verify analog and mixed signal (AMS) designs. The proposed approach employs simulation traces obtained from an actual design implementation of AMS circuit blocks (for example, in the form of SPICE netlists) to carry out formal analysis and verification. This enables the same platform used for formally validating an abstract model of an AMS design to also be used for validating its refinements and design implementation, thereby providing a simple route to formal verification at different levels of implementation. Our approach is illustrated through case studies using simulation traces from different frameworks, namely the Simulink/Stateflow framework and SPICE. We demonstrate the feasibility of our approach around CheckMate, with case studies on hybrid systems and analog and mixed signal designs.
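The trace-checking idea can be sketched simply: SPICE-style samples are tested against a state-space envelope that an abstract hybrid model (such as one analysed in CheckMate) predicts. The trace and bounds below are invented for illustration:

```python
# Minimal sketch of checking simulation traces against an envelope
# predicted by an abstract hybrid model (trace and bounds invented).

import math

def trace_within_envelope(trace, lo, hi):
    """trace: iterable of (time, value) simulation samples;
    returns the first sample violating the envelope, or None."""
    for t, v in trace:
        if not (lo(t) <= v <= hi(t)):
            return (t, v)
    return None

# e.g. an RC charging node must stay within +/-10% of its ideal curve
ideal = lambda t: 1.0 - math.exp(-t / 1e-3)
trace = [(k * 1e-4, ideal(k * 1e-4) * 1.02) for k in range(50)]
print(trace_within_envelope(trace,
                            lo=lambda t: 0.9 * ideal(t),
                            hi=lambda t: 1.1 * ideal(t)))  # -> None
```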
APA, Harvard, Vancouver, ISO and other citation styles
