
Dissertations on the topic "Simulation – Hardware – Software"



Consult the top 50 dissertations for your research on the topic "Simulation – Hardware – Software".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its online annotation, whenever the corresponding parameters are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Lu, Lipin. „Simulation Software and Hardware for Teaching Ultrasound“. Scholarly Repository, 2008. http://scholarlyrepository.miami.edu/oa_theses/143.

Annotation:
Over the years, medical imaging modalities have evolved drastically. Accordingly, the need for conveying basic imaging knowledge to future specialists and other trainees becomes even more crucial for devoted educators. Understanding the concepts behind each imaging modality requires a plethora of advanced physics, mathematics, mechanics and medical background. Absorbing all of this background information is a daunting task for any beginner. This thesis focuses on developing an ultrasound imaging education tutorial with the goal of easing the process of learning the principles of ultrasound. The tutorial utilizes three diverse approaches, including software and hardware applications. By presenting these methodologies from different perspectives, not only will the efficiency of the training be enhanced, but the trainee's understanding of crucial concepts will also be reinforced through repeated demonstration. The first goal of this thesis was developing an online medical imaging simulation system and deploying it on the website of the University of Miami. In order to construct an easy, understandable, and interactive environment without sacrificing the important aspects of ultrasound principles, interactive Flash animations (developed with Macromedia Director MX) were used to present concepts via graphics-oriented simulations. The second goal was developing a stand-alone MATLAB program intended to manipulate the intensity of the pixels in an image in order to simulate how ultrasound images are derived. Additionally, a GUI (graphical user interface) was employed to maximize the accessibility of the program and provide easily adjustable parameters. The GUI window enables trainees to see the changes in outcomes by altering different parameters of the simulation. The third goal of this thesis was to incorporate an actual ultrasound demonstration into the tutorial. This was achieved by using a real ultrasound transducer with a pulser/receiver so that trainees could observe actual ultrasound phenomena and view the results on an oscilloscope. By manually adjusting the panels on the pulser/receiver console, basic A-mode ultrasound experiments can be performed with ease. By combining software and hardware simulations, the ultrasound education package presented in this thesis will help trainees more efficiently absorb the various concepts behind ultrasound.
2

Brankovic, Aleksandar. „Performance simulation methodologies for hardware/software co-designed processors“. Doctoral thesis, Universitat Politècnica de Catalunya, 2015. http://hdl.handle.net/10803/287978.

Annotation:
Recently the community started looking into Hardware/Software (HW/SW) co-designed processors as potential solutions for moving towards less power-consuming and less complex designs. Unlike other solutions, they reduce power and complexity by performing so-called dynamic binary translation and optimization from a guest ISA to an internal custom host ISA. This thesis addresses the question of how to simulate this kind of architecture. For any processor architecture, simulation is the common practice, because it is impossible to build several versions of the hardware in order to try all alternatives. Simulating HW/SW co-designed processors raises a major issue compared with the simulation of traditional HW-only architectures. First of all, no open-source tools exist. Researchers therefore often assume that the overhead of the software layer, which is in charge of dynamic binary translation and optimization, is constant, or they ignore it altogether. In this thesis we show that such an assumption is not valid and can lead to very inaccurate results; including the software layer in the simulation is therefore a must. On the other hand, simulation is very slow in comparison to native execution, so the community has spent considerable effort on delivering accurate results in a reasonable amount of time. It is thus common practice for HW-only processors to simulate only parts of the application stream, called samples. Samples usually correspond to different phases in the application stream and are usually no longer than a few million instructions. In order to achieve an accurate starting state for each sample, microarchitectural structures are warmed up for a few million instructions prior to the sample's instructions. Unfortunately, such a methodology cannot be directly applied to HW/SW co-designed processors: their warm-up needs to be 3-4 orders of magnitude longer than the warm-up needed for a traditional HW-only processor, because the warm-up of the software layer needs to be much longer than the warm-up of the hardware structures. To overcome this problem, in this thesis we propose a novel warm-up technique specialized for HW/SW co-designed processors. Our solution reduces the simulation time by at least 65X with an average error of just 0.75%. This trend holds for different software and hardware configurations. The process used to determine simulation samples cannot be applied to HW/SW co-designed processors either, because, due to the software layer, samples show more dissimilarities than in the case of HW-only processors. We therefore propose a novel algorithm that needs 3X fewer samples to achieve an error similar to state-of-the-art algorithms. Again, this trend holds for different software and hardware configurations.
3

Blumer, Aric David. „Register Transfer Level Simulation Acceleration via Hardware/Software Process Migration“. Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/29380.

Annotation:
The run-time reconfiguration of Field Programmable Gate Arrays (FPGAs) opens new avenues to hardware reuse. Through the use of process migration between hardware and software, an FPGA provides a parallel execution cache. Busy processes can be migrated into hardware-based, parallel processors, and idle processes can be migrated out, increasing the utilization of the hardware. The application of hardware/software process migration to the acceleration of Register Transfer Level (RTL) circuit simulation is developed and analyzed. RTL code can exhibit a form of locality of reference such that executing processes tend to be executed again. This property is termed executive temporal locality, and it can be exploited by migration systems to accelerate RTL simulation. In this dissertation, process migration is first formally modeled using Finite State Machines (FSMs). Upon FSMs are built programs, processes, migration realms, and the migration of process state within a realm. From this model, a taxonomy of migration realms is developed. Second, process migration is applied to the RTL simulation of digital circuits. The canonical form of an RTL process is defined, and transformations of HDL code are justified and demonstrated. These transformations allow a simulator to identify the basic active units within the simulation and combine them to balance the load across a set of processors. Through the use of input monitors, executive locality of reference is identified and demonstrated on a set of six RTL designs. Finally, the implementation of a migration system is described which utilizes Virtual Machines (VMs) and Real Machines (RMs) in existing FPGAs. Empirical and algorithmic models are developed from the data collected from the implementation to evaluate the effect of optimizations and migration algorithms.
Ph.D.
4

Yildirim, Gokce. „Smoke Simulation On Programmable Graphics Hardware“. Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606545/index.pdf.

Annotation:
Fluids such as smoke, water and fire are simulated both for Computer Graphics applications and for engineering fields such as Mechanical Engineering. Generally, Fluid Dynamics is used to achieve realistic-looking fluid simulations. However, the complexity of these calculations makes it difficult to achieve high performance. With the advances in graphics hardware, it has become possible to provide programmability both at the vertex and at the fragment level, which allows for faster simulations of complex fluids and other effects. In this thesis, one gaseous fluid, smoke, is simulated in three dimensions by solving the Navier-Stokes Equations (NSEs) using a semi-Lagrangian unconditionally stable method. The simulation is performed both on the Central Processing Unit (CPU) and on the Graphics Processing Unit (GPU). For programmability at the vertex and fragment level, C for Graphics (Cg), a platform-independent and architecture-neutral shading language, is used. Owing to the advantages of programmability and parallelism of the GPU, the smoke simulation on graphics hardware runs significantly faster than the corresponding CPU implementation. The test results confirm the higher performance of the GPU over the CPU for running three-dimensional fluid simulations.
5

Freitas, Arthur. „Hardware/Software Co-Verification Using the SystemVerilog DPI“. Universitätsbibliothek Chemnitz, 2007. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200700941.

Annotation:
During the design and verification of the Hyperstone S5 flash memory controller, we developed a highly effective way to use the SystemVerilog direct programming interface (DPI) to integrate an instruction set simulator (ISS) and a software debugger in logic simulation. The processor simulation was performed by the ISS, while all other hardware components were simulated in the logic simulator. The ISS integration allowed us to filter many of the bus accesses out of the logic simulation, accelerating runtime drastically. The software debugger integration freed both hardware and software engineers to work in their chosen development environments. Other benefits of this approach include testing and integrating code earlier in the design cycle and more easily reproducing, in simulation, problems found in FPGA prototypes.
6

Wells, George James. „Hardware emulation and real-time simulation strategies for the concurrent development of microsatellite hardware and software“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ62899.pdf.

7

Liu, Tsun-Ho. „Future hardware realization of self-organizing learning array and its software simulation“. Ohio : Ohio University, 2002. http://www.ohiolink.edu/etd/view.cgi?ohiou1174680878.

8

Herfs, Werner Josef. „Modellbasierte Software in the Loop Simulation von Werkzeugmaschinen". Aachen: Apprimus-Verl., 2010. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=018939251&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

9

Bergström, Christoffer. „Simulation Framework of embedded systems in armored vehicle design“. Thesis, Umeå universitet, Institutionen för fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185123.

Annotation:
Embedded systems are a mixture of electrical and mechanical hardware along with the software that controls them. BAE Systems Hägglunds, which designs and builds armored vehicles, is interested in knowing how to simulate these systems for logic validation and for testing different design variations. The goal of this thesis was to create a framework for carrying out these simulations. This was done by analyzing hardware and software design at BAE and identifying the necessary conditions for creating a model which can be simulated. MATLAB Simulink is suggested as the tool for these simulations. The framework suggests dividing the model into smaller modules that reflect the design principles at BAE. These modules are made up of sub-modules containing hardware and software in layers. The hardware foundation is made up of pre-designed components created in Simulink's physical simulation library. The software is imported into specialized sub-modules and integrated with the hardware using proposed bridge functions, which convert information between the two systems. The framework is designed to provide a comprehensive rather than a deep solution, one that can be adapted to changing circumstances. Tests have been made on small-scale systems, but the framework still needs to be tested on a large-scale system, which was not possible during this thesis. In conclusion, this is a stable foundation that needs to be built upon.
10

Tang, Yi. „SUNSHINE: Integrate TOSSIM and P-Sim“. Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/40721.

Annotation:
Simulators are important tools for wireless sensor network (sensornet) design and evaluation. However, existing simulators only support evaluations of protocols and software aspects of sensornet design. Thus they cannot accurately capture the significant impacts of various hardware designs on sensornet performance. To fill in the gap, we proposed SUNSHINE, a scalable hardware-software cross-domain simulator for sensornet applications. SUNSHINE is the first sensornet simulator that effectively supports joint evaluation and design of sensor hardware and software performance in a networked context. SUNSHINE captures the performance of network protocols, software and hardware through the integration of two modules: the network simulator TOSSIM [1] and the hardware-software simulator P-Sim, composed of the instruction-set simulator SimulAVR [2] and the hardware simulator GEZEL [3]. This thesis focuses on the integration of TOSSIM and P-Sim. It discusses the integration design considerations and explains how to address several integration challenges: time conversion, data conversion, and time synchronization. Some experiments are also given to demonstrate SUNSHINE's cross-domain simulation capability, showing SUNSHINE's strength in integrating simulators from different domains.
Master of Science
11

Underwood, Ryan C. „An open framework for highly concurrent hardware-in-the-loop simulation“. Diss., Rolla, Mo. : University of Missouri-Rolla, 2007. http://scholarsmine.mst.edu/thesis/pdf/Underwood_09007dcc8042c7c7.pdf.

Annotation:
Thesis (M.S.)--University of Missouri--Rolla, 2007.
Vita. The entire thesis text is included in the file. Title from title screen of thesis/dissertation PDF file (viewed February 14, 2008). Includes bibliographical references (p. 37-40).
12

Zhang, Jingyao. „SUNSHINE: A Multi-Domain Sensor Network Simulator“. Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/45146.

Annotation:
Simulators are important tools for analyzing and evaluating different design options for wireless sensor networks (sensornets) and hence have been intensively studied in the past decades. However, existing simulators only support evaluations of protocols and software aspects of sensornet design. They cannot accurately capture the significant impacts of various hardware designs on sensornet performance. As a result, the performance/energy benefits of customized hardware designs are difficult to evaluate in sensornet research. To fill in this technical void, in this thesis we describe the design and implementation of SUNSHINE, a scalable hardware-software cross-domain simulator for sensornet applications. SUNSHINE is the first sensornet simulator that effectively supports joint evaluation and design of sensor hardware and software performance in a networked context. SUNSHINE captures the performance of network protocols, software and hardware up to cycle-level accuracy through its seamless integration of three existing sensornet simulators: the network simulator TOSSIM, the instruction-set simulator SimulAVR and the hardware simulator GEZEL. SUNSHINE solves challenging design problems, including data exchange and time synchronization across different simulation domains and simulation accuracy levels. SUNSHINE also provides a hardware specification scheme for simulating flexible and customized hardware designs. Several experiments are given to illustrate SUNSHINE's cross-domain simulation capability, demonstrating that SUNSHINE is an efficient tool for software-hardware co-design in sensornet research.
Master of Science
13

Rafeeq, Akhil Ahmed. „A Development Platform to Evaluate UAV Runtime Verification Through Hardware-in-the-loop Simulation“. Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/99041.

Annotation:
The popularity and demand for safe autonomous vehicles are on the rise. Advances in semiconductor technology have led to the integration of a wide range of sensors with high-performance computers, all onboard the autonomous vehicles. The complexity of the software controlling the vehicles has also seen steady growth in recent years. Verifying the control software using traditional verification techniques is difficult and thus increases their safety concerns. Runtime verification is an efficient technique to ensure the autonomous vehicle's actions are limited to a set of acceptable behaviors that are deemed safe. The acceptable behaviors are formally described in linear temporal logic (LTL) specifications. The sensor data is actively monitored to verify its adherence to the LTL specifications using monitors. Corrective action is taken if a violation of a specification is found. An unmanned aerial vehicle (UAV) development platform is proposed for the validation of monitors on configurable hardware. A high-fidelity simulator is used to emulate the UAV and the virtual environment, thereby eliminating the need for a real UAV. The platform interfaces the emulated UAV with monitors implemented on configurable hardware and autopilot software running on a flight controller. The proposed platform allows the implementation of monitors in an isolated and scalable manner. Scenarios violating the LTL specifications can be generated in the simulator to validate the functioning of the monitors.
Master of Science
Safety is one of the most crucial factors considered when designing an autonomous vehicle. Modern vehicles that use a machine learning-based control algorithm can have unpredictable behavior in real-world scenarios that were not anticipated while training the algorithm. Verifying the underlying software code with all possible scenarios is a difficult task. Runtime verification is an efficient solution where a relatively simple set of monitors validate the decisions made by the sophisticated control software against a set of predefined rules. If the monitors detect an erroneous behavior, they initiate a predetermined corrective action. Unmanned aerial vehicles (UAVs), like drones, are a class of autonomous vehicles that use complex software to control their flight. This thesis proposes a platform that allows the development and validation of monitors for UAVs using configurable hardware. The UAV is emulated on a high-fidelity simulator, thereby eliminating the time-consuming process of flying and validating monitors on a real UAV. The platform supports the implementation of multiple monitors that can execute in parallel. Scenarios to violate rules and cause the monitors to trigger corrective actions can easily be generated on the simulator.
14

Silva, Junior José Cláudio Vieira e. „Verificação de Projetos de Sistemas Embarcados através de Cossimulação Hardware/Software“. Universidade Federal da Paraíba, 2015. http://tede.biblioteca.ufpb.br:8080/handle/tede/7856.

Annotation:
This work proposes an environment for the verification of heterogeneous embedded systems through distributed co-simulation. The verification occurs in real time, co-simulating the system software and the hardware platform using the High Level Architecture (HLA) as middleware. The novelty of this approach is not only providing support for simulations, but also allowing synchronous integration with any physical hardware device. In this work we use the Ptolemy framework as the simulation platform. The integration of HLA with Ptolemy and the hardware models opens up a vast set of applications, such as testing many devices at the same time (running the same or different applications or modules), using Ptolemy for real-time control of embedded systems, and the distributed execution of different embedded devices for performance improvement. Furthermore, the HLA-based approach allows any type of robot, as well as simulators other than Ptolemy, to be connected to the environment. Case studies are presented to prove the concept, showing the successful integration between Ptolemy and the HLA and the verification of systems using hardware-in-the-loop and robot-in-the-loop.
15

Dočekal, Martin. „HIL simulace manipulátorů nebo stroje“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-444291.

Annotation:
The diploma thesis deals with HIL (hardware-in-the-loop) simulation. The thesis presents a manipulator created in the V-REP virtual simulation software. The connection between the real inputs and the virtual outputs of the machine is realized by an Arduino UNO microcontroller. The first task deals with controlling the manipulator using a PS2 joystick. The second task is standalone control of the robot using the Arduino UNO microcontroller. The resulting setup can be further modified and its interface adapted. The work will be used for educational purposes.
16

Peters, Eduardo. „Coprocessador para aceleração de aplicações desenvolvidas utilizando paradigma orientado a notificações“. Universidade Tecnológica Federal do Paraná, 2012. http://repositorio.utfpr.edu.br/jspui/handle/1/325.

Annotation:
This work presents a new hardware coprocessor to accelerate applications developed using the Notification-Oriented Paradigm (NOP). A NOP application has the advantages of both event-based programming and declarative programming, enabling higher level software development, improving code reuse, and reducing the number of unnecessary computations. Because a NOP application is composed of a network of small computational entities communicating only when needed, it is a good candidate for a direct hardware implementation. In order to investigate this assumption, a coprocessor that is able to run existing NOP applications was created. The coprocessor was developed in VHDL and tested in FPGAs, providing a decrease of 96% in the number of clock cycles compared to a purely software implementation.
17

González, Cortés Carlos Eduardo. „Diseño e implementación del software de vuelo para un nano-satélite tipo Cubesat“. Tesis, Universidad de Chile, 2013. http://www.repositorio.uchile.cl/handle/2250/115307.

Annotation:
Ingeniero Civil Eléctrico
The Cubesat nanosatellite standard was conceived to facilitate the development of small space projects for scientific and educational purposes, at low cost and within short time frames. Along these lines, the Faculty of Physical and Mathematical Sciences of the Universidad de Chile launched the SUCHAI project, which consists of implementing, putting into orbit, and operating the first satellite developed by a university in the country. The spacecraft's on-board computer, an embedded system with limited computing capacity, scarce memory, and low power consumption, must run the flight software that will control its operations once in orbit. The objective of this work is the design and implementation of this software for the SUCHAI satellite, as a reliable, flexible, and extensible solution to serve as the basis for future aerospace missions. The software design consists of a three-layer structure that divides the problem conveniently. The lowest layer comprises the hardware drivers, the intermediate layer hosts the operating system, and the top layer contains the details of the application required specifically for this system. For the architecture of the application layer, the concept of design patterns is studied and applied; specifically, an adaptation of the command pattern is implemented. In this way, the satellite is conceived as an executor of generic commands, and a maintainable, modifiable, and extensible solution is obtained by programming the concrete commands that are required. The implementation is carried out on a PIC24F and includes drivers for the I2C, RS232, and SPI peripherals, as well as for the radio communications and power subsystems. The FreeRTOS operating system was chosen as the intermediate layer, providing concurrent task processing along with timing and synchronization tools. Special emphasis was placed on the implementation of the proposed application-layer architecture, yielding software capable of executing a series of commands programmed to fulfill the operational requirements of the project, which represents the main method for extending its functionality and adapting to future missions. To test and verify the developed system, the technique known as hardware-in-the-loop simulation was used. Operating data were obtained, under hypothetical operating conditions, through the log generated by the serial console. With this, the operational requirements of the mission were verified, with successful results, yielding the base functional system of the satellite. As future work, this software will be used to integrate the remaining systems of the SUCHAI satellite, demonstrating its capacity for adaptation and extension, as a step prior to the final test: operating properly in outer space.
18

Deicke, Markus. „Virtuelle Absicherung von Steuergeräte-Software mit hardwareabhängigen Komponenten“. Doctoral thesis, Universitätsbibliothek Chemnitz, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-230123.

Annotation:
The constantly increasing amount of functions in modern automobiles and the growing degree of cross-linking between electronic control units (ECUs) require new methods to master the complexity of the validation and verification process. Virtual validation and verification enables the integration of the software on a PC system, independent of the target hardware, to guarantee the required software quality in the early development stages. Furthermore, the reuse of the software in future microcontrollers can be verified. All this is enabled by the AUTOSAR standard, which provides consistent interface descriptions to allow the abstraction of hardware and software. However, the standard contains hardware-dependent components, called complex device drivers (CDDs). Those CDDs cannot be directly integrated into a platform for virtual verification, because they require specific hardware which is not generally available on such a platform. Regardless, CDDs are an essential part of the ECU software and therefore need to be considered in a holistic approach to validation and verification. This thesis describes seven different concepts for including CDDs in the virtual verification process. Based on an evaluation of each concept's suitability for daily use, a method is developed for choosing the optimal solution for every use case of CDDs in ECU software. As a result of this method, the two concepts suited to the most frequent use cases are detailed and developed as prototypes in this thesis. The first concept enables the full simulation of a CDD. This is necessary to allow the integration of the functional software itself without the driver, so that all interfaces can be tested even if the CDD is not yet available. The complete automation of the generation of the simulation makes the process very efficient. With the second concept, a CDD can be entirely integrated into a platform for virtual verification, using a hardware abstraction layer to connect the driver's hardware interfaces to the available hardware of the platform. This way, the driver is able to control real hardware components and can be tested completely. A flexible configuration of the abstraction layer allows the application of the concept to a wide variety of CDDs. In this thesis both concepts are tested and evaluated using genuine projects from series development.
19

Cemin, Paulo Roberto. „Plataforma de medição de consumo para comparação entre software e hardware em projetos energeticamente eficientes“. Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1310.

Annotation:
The large number of mobile devices increased the interest in low-power designs. Tools that allow the evaluation of alternative implementations give the designer actionable information to create energy-efficient designs. This paper presents a new power measurement platform able to compare the energy consumption of different algorithms implemented in software and in hardware. The proposed platform is able to measure the energy consumption of a specific process running in a general-purpose CPU with a standard operating system, and to compare the results with equivalent algorithms running in an FPGA. This allows the designer to choose the most energy-efficient software vs. hardware partitioning for a given application. Compared with the current state-of-the-art, the presented platform has four distinguishing features: (i) support for both software and hardware power measurements, (ii) measurement of individual code sections in the CPU, (iii) support for dynamic clock frequencies, and (iv) improvement of measurement precision. We also demonstrate how the developed platform has been used to analyze the energy consumption of network intrusion detection algorithms aimed at detecting probing attacks.
20

Rudraiah, Dakshinamurthy Amruth. „A Compiler-based Framework for Automatic Extraction of Program Skeletons for Exascale Hardware/Software Co-design“. Master's thesis, University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5695.

Annotation:
The design of high-performance computing architectures requires performance analysis of large-scale parallel applications to derive various parameters concerning hardware design and software development. The process of performance analysis and benchmarking an application can be done in several ways with varying degrees of fidelity. One of the most cost-effective ways is to do a coarse-grained study of large-scale parallel applications through the use of program skeletons. The "program skeleton" discussed in this work is an abstracted program derived from a larger program by removing source code that is determined to be irrelevant for the purposes of the skeleton. Extracting such a program skeleton from a large-scale parallel program by hand requires a substantial amount of manual effort and often introduces human errors. We therefore develop a semi-automatic approach for extracting program skeletons based on compiler program analysis, which reduces cost and eliminates errors inherent in manual approaches. We demonstrate the correctness of our skeleton extraction process by comparing details from communication traces, and show the performance speedup of using skeletons by running simulations in the SST/macro simulator. Our skeleton generation approach is based on the use of the extensible and open-source ROSE compiler infrastructure, which allows us to perform flow and dependency analysis on large programs in order to determine what code can be removed to generate a skeleton.
M.S.
21

Ashby, Ryan Michael. „Hardware in the Loop Simulation of a Heavy Truck Braking System and Vehicle Control System Design“. The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1366046155.

22

Deicke, Markus. „Virtuelle Absicherung von Steuergeräte-Software mit hardwareabhängigen Komponenten“. Universitätsverlag Chemnitz, 2016. https://monarch.qucosa.de/id/qucosa%3A20810.

Annotation:
The constantly increasing amount of functions in modern automobiles and the growing degree of cross-linking between electronic control units (ECUs) require new methods to master the complexity of the validation and verification process. Virtual validation and verification enables the integration of the software on a PC system, independent of the target hardware, to guarantee the required software quality in the early development stages. Furthermore, the reuse of the software in future microcontrollers can be verified. All this is enabled by the AUTOSAR standard, which provides consistent interface descriptions to allow the abstraction of hardware and software. However, the standard contains hardware-dependent components, called complex device drivers (CDDs). Those CDDs cannot be directly integrated into a platform for virtual verification, because they require specific hardware which is not generally available on such a platform. Regardless, CDDs are an essential part of the ECU software and therefore need to be considered in a holistic approach to validation and verification. This thesis describes seven different concepts for including CDDs in the virtual verification process. Based on an evaluation of each concept's suitability for daily use, a method is developed for choosing the optimal solution for every use case of CDDs in ECU software. As a result of this method, the two concepts suited to the most frequent use cases are detailed and developed as prototypes in this thesis. The first concept enables the full simulation of a CDD. This is necessary to allow the integration of the functional software itself without the driver, so that all interfaces can be tested even if the CDD is not yet available. The complete automation of the generation of the simulation makes the process very efficient. With the second concept, a CDD can be entirely integrated into a platform for virtual verification, using a hardware abstraction layer to connect the driver's hardware interfaces to the available hardware of the platform. This way, the driver is able to control real hardware components and can be tested completely. A flexible configuration of the abstraction layer allows the application of the concept to a wide variety of CDDs. In this thesis both concepts are tested and evaluated using genuine projects from series development.
23

Tuncali, Cumhur Erkan. „Implementation And Simulation Of Mc68hc11 Microcontroller Unit Using Systemc For Co-design Studies“. Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12609177/index.pdf.

Annotation:
In this thesis, the co-design and co-verification of microcontroller hardware and software using SystemC is studied. For this purpose, an MC68HC11 microcontroller unit and a test bench containing input and output modules for the verification of the microcontroller unit are implemented in the SystemC programming language, and a visual simulation program is developed in the C# programming language on the Microsoft .NET platform. SystemC is a C++ class library that is used for co-designing the hardware and software of a system. One of the advantages of using SystemC in system design is the ability to design each module of the system at a different abstraction level. In this thesis, the test bench modules are designed at a high abstraction level and the microcontroller hardware modules are designed at a lower abstraction level. Finally, a simulation platform for the co-simulation and co-verification of the hardware and software modules of the overall system is developed by combining the microcontroller implementation, the test bench modules, the test software and the visual simulation program. Simulations at different levels are performed on the system in the developed simulation platform. The simulation results helped in observing errors in the designed modules easily and in making corrections until all results verified the designed hardware modules. This showed that co-designing and co-verifying the hardware and software of a system helps find and correct errors in the early stages of the system design cycle, thus reducing the design time of the system.
24

Pieper, Tobias. „Distributed co-simulation framework for hardware- and software-in-the-loop testing of networked embedded real-time systems". Siegen: Universitätsbibliothek der Universität Siegen, 2020. http://d-nb.info/1220506214/34.

25

Oselame, Gleidson Brandão. „Desenvolvimento de software e hardware para diagnóstico e acompanhamento de lesões dermatológicas suspeitas para câncer de pele“. Universidade Tecnológica Federal do Paraná, 2014. http://repositorio.utfpr.edu.br/jspui/handle/1/973.

Annotation:
Cancer is responsible for about 7 million deaths annually worldwide. It is estimated that 25% of all cancers are skin cancers, and in Brazil skin cancer is the most frequent type in all geographic regions. Among them is melanoma, accounting for 4% of skin cancers, whose incidence has doubled worldwide in the past decade. Among the diagnostic methods employed is the ABCD rule, which considers the asymmetry (A), borders (B), color (C) and diameter (D) of stains or nevi. Digital image processing has shown good potential to aid in the early diagnosis of melanoma. In this sense, the objective of this study was to develop software on the MATLAB® platform, associated with hardware to standardize image acquisition, aiming to perform the diagnosis and monitoring of skin lesions suspected of malignancy (melanoma). The ABCD rule was used to guide the development of the computational analysis methods. MATLAB was used as the programming environment for developing the digital image processing software. The images used were acquired from two freely accessible image banks. Images of melanomas (n = 15) and of nevi (non-cancer) (n = 15) were included. The images, in the RGB color channels, were converted to grayscale, followed by the application of an 8x8 median filter and a 3x3 neighborhood approximation technique. The images were then binarized, with black and white inverted, for subsequent extraction of the features of the lesion's contour. For standardized image acquisition, a hardware prototype was developed; it was not employed in this study (which used image-bank images with confirmed diagnoses), but it was validated for the evaluation of lesion diameter (D). Descriptive statistics were used, and the groups were compared using the non-parametric Mann-Whitney U test for two independent samples; to evaluate the sensitivity (SE) and specificity (SP) of each variable, the ROC curve was employed. The classifier used was a radial basis function artificial neural network, achieving a diagnostic accuracy of 100% for melanoma images and 90.9% for non-cancer images. Thus, the overall accuracy of the diagnostic prediction was 95.5%. Regarding the SE and SP of the proposed method, an area under the ROC curve of 0.967 was obtained, which suggests an excellent diagnostic prediction capability, particularly given its low cost of use, since the software can run on the vast majority of operating systems in use today.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Zvonček, Radovan. „Knihovna procesorů pro návrh vestavěných systémů“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-412853.

Der volle Inhalt der Quelle
Annotation:
This work deals with designing a library of processor models used in embedded systems. Processor architectures are described using the ISAC language, one of several outcomes of the Lissom project at the Faculty of Information Technology, BUT, Brno. The work begins with an introduction to processor architectures used in today's embedded systems. The remaining sections present exemplary processor architectures and describe their implementation. The work closes by summarizing the experience gathered, with emphasis on the suitability of the ISAC language for architecture description and the efficiency of its simulation.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Kronbauer, Fernando André. „Memorias transacionais : prototipagem e simulação de implementações em hardware e uma caracterização para o problema de gerenciamento de contenção em software“. [s.n.], 2008. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276161.

Der volle Inhalt der Quelle
Annotation:
Advisor: Sandro Rigo
Dissertation (Master's) - Universidade Estadual de Campinas, Instituto de Computação
Abstract: As parallel architectures become prevalent in the computing industry, more and more programmers are required to write parallel programs and are thus exposed to the problems related to the use of traditional mechanisms for concurrency control. Transactional memory has been devised as a means of easing the burden of writing parallel programs: the programmer only has to mark the sections of code that are to be executed in an atomic and isolated way - in the form of transactions - and the system takes care of the synchronization details. In this work we explore different proposals for transactional memories based on specific hardware support (HTM), developing a flexible platform for the prototyping, simulation and characterization of these systems. We also explore a transactional memory system based solely on software support (STM), devising a novel approach to managing the contention among transactions. This approach takes into account the access patterns to the different data of an application when choosing the contention-management strategy to be used for accesses to these data. We modified the STM platform to enable this association between data and contention management, and using the new implementation we characterized the STM system based on the access patterns to a program's data, running it on different computing systems. Our results show the viability of using transactional memories in an academic research environment, and point to directions for future work aimed at making their use viable for industry as well.
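As a rough illustration of the per-data contention-management idea (the class names and policies below are invented, not the thesis's implementation), each shared datum can be bound to its own manager according to its access pattern:

```python
import random
import time

class BackoffManager:
    """Randomized exponential backoff: suits data with rare, short conflicts."""
    def on_conflict(self, attempt, my_start, other_start):
        time.sleep(random.uniform(0, 1e-4 * 2 ** min(attempt, 10)))
        return "retry"

class TimestampManager:
    """Older transaction wins: suits hot data where progress must be guaranteed."""
    def on_conflict(self, attempt, my_start, other_start):
        return "retry" if my_start <= other_start else "abort"

# Per-datum policy table mirroring the association between access patterns
# and contention managers described above (datum names are hypothetical).
POLICY = {"stats_counter": BackoffManager(), "work_queue": TimestampManager()}

def resolve_conflict(datum, attempt, my_start, other_start):
    return POLICY[datum].on_conflict(attempt, my_start, other_start)
```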
Master's degree
Master in Computer Science
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Silva, Hilgad Montelo da. „Simulação com hardware in the loop aplicada a veículos submarinos semi-autônomos“. Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-09022009-164239/.

Der volle Inhalt der Quelle
Annotation:
Unmanned Underwater Vehicles (UUVs) have many commercial, military and scientific applications because of their potential capabilities and significant cost-performance improvements over traditional means of obtaining valuable underwater information. The development of a reliable sampling and testing platform for these vehicles requires a thorough system design and many costly at-sea trials during which system specifications can be validated. Modeling and simulation provide a cost-effective way to carry out preliminary component, system (hardware and software) and mission testing and verification, thereby reducing the number of potential failures in at-sea trials. An accurate simulation environment can help engineers find hidden errors in the UUV embedded software and gain insight into the vehicle's operation and dynamics. This work describes the implementation of a UUV's control algorithm in MATLAB/SIMULINK, its automatic conversion to compilable C++ code and the verification of its performance directly on the embedded computer by means of simulation. It details the procedure needed to convert the models from MATLAB to C++ code, the integration of the control software with the real-time operating system used on the embedded computer (VxWORKS) and the Hardware In the Loop (HIL) simulation strategy developed. The main contribution of this work is a rational framework that eases the final implementation of the control software on the embedded computer, starting from a model developed in an environment friendly to the control engineer, such as SIMULINK.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

de, Graaf Niels. „Simulation of Attitude and Orbit Control for APEX CubeSat“. Thesis, Luleå tekniska universitet, Rymdteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-80736.

Der volle Inhalt der Quelle
Annotation:
CubeSats are becoming a game changer in the space industry. Appearing first in university missions, their popularity is increasing for commercial use and for deep-space missions such as the HERA mission, which will orbit an asteroid from 2026 as part of a planetary defence mission. Standardisation and industrial collaboration are key to fast development, assuring product quality and lowering development expenditures. This study focuses on elaborating a low-cost demonstrator platform for developing and testing onboard software on physical hardware: a hardware-software testing facility. The purpose of such a platform is to create an interactive and accessible environment for developing onboard software. The application chosen for this platform is a module of the attitude and orbit control subsystem of a satellite orbiting an asteroid. To create the platform, the CubeSat's asteroid environment has been simulated using open-source software libraries, and the performance of these open-source libraries has been compared with commercial alternatives. In developing the simulation, different orbit perturbations have been studied by modelling the asteroid as a cube or spheroid, together with the effects of a third perturbing body and radiation pressure. As part of this project, two microcontrollers have been set up to communicate over a communication bus, using communication protocols employed in space applications, to simulate how attitude and orbit control is commanded inside the CubeSat.
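A minimal sketch of the kind of perturbed two-body propagation this abstract mentions, with the asteroid as a point mass and the Sun as a third perturbing body (all constants and the geometry are assumed for illustration):

```python
import numpy as np
from scipy.integrate import solve_ivp

MU_AST = 4.0e-9    # km^3/s^2, assumed gravitational parameter of a small asteroid
MU_SUN = 1.327e11  # km^3/s^2, Sun

def dynamics(t, y, r_sun):
    """Point-mass asteroid gravity plus third-body (solar) perturbation."""
    r, v = y[:3], y[3:]
    a = -MU_AST * r / np.linalg.norm(r) ** 3              # central attraction
    d = r_sun - r                                         # spacecraft -> Sun
    a = a + MU_SUN * (d / np.linalg.norm(d) ** 3
                      - r_sun / np.linalg.norm(r_sun) ** 3)
    return np.concatenate([v, a])

r_sun = np.array([1.5e8, 0.0, 0.0])                       # assumed heliocentric geometry
y0 = np.array([2.0, 0.0, 0.0, 0.0, 4.5e-5, 0.0])          # ~circular orbit at 2 km
sol = solve_ivp(dynamics, (0.0, 86400.0), y0, args=(r_sun,), rtol=1e-9)
```

Radiation-pressure and non-spherical (cube or spheroid) gravity terms would be added to the acceleration in the same way.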
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Ryd, Jonatan, und Jeffrey Persson. „Development of a pipeline to allow continuous development of software onto hardware : Implementation on a Raspberry Pi to simulate a physical pedal using the Hardware In the Loop method“. Thesis, KTH, Hälsoinformatik och logistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296952.

Der volle Inhalt der Quelle
Annotation:
Saab wants to examine the Hardware In the Loop (HIL) method as a concept, and what an infrastructure for Hardware In the Loop would look like. Hardware In the Loop is based on continuously testing hardware, which is simulated. The software Saab wants to use for the HIL method is Jenkins, a Continuous Integration and Continuous Delivery tool. To simulate the hardware, they want to examine the use of an Application Programming Interface between a Raspberry Pi and the Robot Framework programming language. Saab believes this method can improve the rate and quality of testing, and thereby the quality of their products. The theory behind Hardware In the Loop, Continuous Integration and Continuous Delivery is explained in this thesis. The HIL method was implemented on top of Jenkins, and an Application Programming Interface between the General Purpose Input/Output (GPIO) pins of a Raspberry Pi and Robot Framework was developed. With these implementations done, the HIL method was successfully integrated, with a Raspberry Pi used to simulate the hardware.
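Robot Framework keyword libraries are plain Python classes, so the interface between Robot Framework and the Raspberry Pi GPIO can be sketched as below; the pin number and the pedal semantics are assumptions for illustration:

```python
import RPi.GPIO as GPIO  # available on the Raspberry Pi

class PedalSimLibrary:
    """Keywords for driving a simulated pedal line from Robot Framework tests."""
    ROBOT_LIBRARY_SCOPE = "SUITE"
    PEDAL_PIN = 18  # hypothetical BCM output pin

    def __init__(self):
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(self.PEDAL_PIN, GPIO.OUT)

    def press_pedal(self):
        """Exposed to test suites as the keyword 'Press Pedal'."""
        GPIO.output(self.PEDAL_PIN, GPIO.HIGH)

    def release_pedal(self):
        GPIO.output(self.PEDAL_PIN, GPIO.LOW)
```

A Jenkins job can then run the `robot` command against a suite that imports this library, giving the continuous hardware-in-the-loop test flow described above.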
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Alluri, Veerendra Bhargav. „MULTIPLE CHANNEL COHERENT AMPLITUDE MODULATED (AM) TIME DIVISION MULTIPLEXING (TDM) SOFTWARE DEFINED RADIO (SDR) RECEIVER“. UKnowledge, 2008. http://uknowledge.uky.edu/gradschool_theses/499.

Der volle Inhalt der Quelle
Annotation:
Communication and navigation systems are often required to receive signals from multiple stations simultaneously. A common way to do this is to use multiple hardware resources: a different set of resources for each station. In this thesis, a coherent Amplitude Modulated (AM) receiver system was developed based on Software Defined Radio (SDR) technology, enabling reception of multiple signals using only the hardware resources needed for one station. The receiver architecture employs Time Division Multiplexing (TDM) to share the single hardware resource among multiple streams of data, and is designed so that it can be minimally modified to support any number of stations. The Verilog Hardware Description Language (HDL) was used to capture the receiver architecture and design, which were initially validated using HDL post-synthesis and post-implementation simulation. In addition, the receiver was implemented on a Xilinx Field Programmable Gate Array (FPGA) prototyping board for experimental testing and final validation.
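A software model of the idea: one coherent AM demodulator is time-shared across stations instead of duplicating the receive chain per station. The carrier and low-pass details below are simplified assumptions, not the thesis's Verilog design:

```python
import numpy as np

def coherent_am_demod(samples, fc, fs):
    """Mix with a phase-synchronized carrier, then low-pass to recover the message."""
    t = np.arange(len(samples)) / fs
    mixed = samples * np.cos(2 * np.pi * fc * t)   # assumes carrier phase is recovered
    kernel = np.ones(64) / 64                      # crude moving-average low-pass
    return np.convolve(mixed, kernel, mode="same")

def tdm_receiver(channels, fs):
    """One demodulator shared among stations, one block per TDM time slot."""
    return {name: coherent_am_demod(samples, fc, fs)
            for name, (samples, fc) in channels.items()}
```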
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Haffar, Mohamad. „Développement d'une plateforme de co-simulation en vue de validation et d'évaluation de performances des systèmes de communication pour les installations de distribution électriques“. Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENT043.

Der volle Inhalt der Quelle
Annotation:
An electrical distribution system is the heart of any industrial site, whether it produces or consumes energy. Its safety must be guaranteed by units providing several protection functions against electrical faults, some of which rely on information exchanged between several protection units. Since 2004, the worldwide communication standard IEC 61850 has been introduced in the majority of substation automation systems, bringing new innovation prospects to the substation world. It allows the exchange of safety-critical real-time messages over the communication network, which are used as control information for Distributed Automation Applications (DAA). Given the non-deterministic nature of these signals, and since DAA have a direct effect on the dependability of a smart-grid architecture, the reliability of these real-time IEC 61850 messages must be thoroughly evaluated. For these reasons, this thesis develops a methodology, based on a co-simulation platform designed during our study, that permits the evaluation and validation of the reliability of these messages throughout the life cycle of an IEC 61850 communication system.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Brink, Michael Joseph. „Hardware-in-the-loop simulation of pressurized water reactor steam-generator water-level control, designed for use within physically distributed testing environments“. The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1357273230.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

França, André Luiz Pereira de. „Estudo, desenvolvimento e implementação de algoritmos de aprendizagem de máquina, em software e hardware, para detecção de intrusão de rede: uma análise de eficiência energética“. Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1166.

Der volle Inhalt der Quelle
Annotation:
CAPES; CNPq
Increasing network speeds, numbers of attacks, and the need for energy efficiency are pushing software-based network security to its limits. A common kind of threat is the probing attack, in which an attacker tries to find vulnerabilities by sending a series of probe packets to a target machine. This work presents the study, development and implementation of a network-packet feature-extraction algorithm in hardware and of three machine learning classifiers (Decision Tree, Naive Bayes and k-nearest neighbors), in software and hardware, for the detection of probing attacks. The work also presents detailed results on classification accuracy, throughput and energy consumption for each implementation.
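A sketch of the software side of such a comparison using scikit-learn, with placeholder features and labels standing in for the packet dataset:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))      # placeholder per-packet feature vectors
y = rng.integers(0, 2, size=1000)   # placeholder probe / benign labels

classifiers = {
    "decision tree": DecisionTreeClassifier(max_depth=8),
    "naive bayes": GaussianNB(),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()   # mean classification accuracy
    print(f"{name}: {acc:.3f}")
```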
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Palm, Johan. „High Performance FPGA-Based Computation and Simulation for MIMO Measurement and Control Systems“. Thesis, Mälardalen University, School of Innovation, Design and Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-7477.

Der volle Inhalt der Quelle
Annotation:

The Stressometer system is a measurement and control system used in cold rolling to improve the flatness of a metal strip. To achieve this goal the system employs a multiple input multiple output (MIMO) control system with a considerable number of sensors and actuators. As a consequence, the computational load on the Stressometer control system becomes very high if overly advanced functions are used. At the same time, advances in rolling-mill mechanical design make it necessary to implement more complex functions for the Stressometer system to stay competitive. Most industrial players in this market consider improved computational power, for measurement, control and modeling applications, to be a key competitive factor. Accordingly, there is a need to improve the computational power of the Stressometer system. Several approaches toward this objective have been identified, e.g. exploiting hardware parallelism in modern general-purpose and graphics processors.

Another approach is to implement applications in FPGA-based hardware, either tailored to a specific problem or as part of a hardware/software co-design. Through a hardware/software co-design approach the efficiency of the Stressometer system can be increased, lowering the overall demand for processing power since the available resources can be exploited more fully. Hardware-accelerated platforms can be used to increase the computational power of the Stressometer control system without major changes to the existing hardware. Hardware upgrades can thus be as simple as connecting a cable to an accelerator platform, while hardware/software co-design is used to find a suitable hardware/software partition, moving applications between software and hardware.

To determine whether this hardware/software co-design approach is realistic, the feasibility of implementing simulator, computational and control applications in FPGA-based hardware needs to be established. This is accomplished by selecting two specific applications for closer study: a Stressometer measuring-roll simulator and a parallel Cholesky algorithm in FPGA-based hardware.

Based on these studies, this work has determined that FPGA technology is well suited for implementing both simulator and computational applications. The Stressometer measuring-roll simulator was able to approximate the force and pulse signals of the measuring roll at relatively modest resource consumption, using only 1747 slices and eight DSP slices. The parallel FPGA-based Cholesky component provides performance in the range of GFLOP/s, exceeding the performance of the personal computer used for comparison in several simulations, although at a very high resource consumption. The results of this thesis, based on the two feasibility studies, indicate that it is possible to increase the processing power of the Stressometer control system using FPGA technology.
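For reference, the structure of a blocked Cholesky factorization, whose rank-nb trailing update is the data-parallel kernel an FPGA implementation would accelerate; this NumPy version is only a functional sketch of the algorithm, not the thesis's parallel design:

```python
import numpy as np

def blocked_cholesky(A, nb=64):
    """Right-looking blocked Cholesky returning the lower-triangular factor."""
    A = A.copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        A[k:e, k:e] = np.linalg.cholesky(A[k:e, k:e])          # factor diagonal block
        if e < n:
            Lkk = A[k:e, k:e]
            A[e:, k:e] = np.linalg.solve(Lkk, A[e:, k:e].T).T  # triangular panel solve
            A[e:, e:] -= A[e:, k:e] @ A[e:, k:e].T             # parallel trailing update
    return np.tril(A)

rng = np.random.default_rng(1)
M = rng.normal(size=(256, 256))
A = M @ M.T + 256 * np.eye(256)   # symmetric positive definite test matrix
L = blocked_cholesky(A)
assert np.allclose(L @ L.T, A)
```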

APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Vlach, Jan. „Algoritmy souběžného technického a programového návrhu“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2007. http://www.nusl.cz/ntk/nusl-412761.

Der volle Inhalt der Quelle
Annotation:
This master's thesis deals with the concurrent design of software and hardware for embedded systems. It covers a general description of the whole process and illustrates the design, simulation and implementation of an FIR filter. It also describes the Polis design environment and the Ptolemy simulation system. The conclusion of the project is devoted to the generation of simulation models in the VHDL language, including subsequent synthesis.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Njoyah, ntafam Perrin. „Méthodologie d'identification et d'évitement des cycles de gel du processeur pour l'optimisation de la performance du logiciel sur le matériel“. Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM021/document.

Der volle Inhalt der Quelle
Annotation:
One of the purposes of microelectronics is to design and manufacture small, low-cost SoCs targeting markets such as the Internet of Things. With fixed hardware offering no room for manoeuvre, one of the challenges for an embedded software developer is to write the program so that, at runtime, the software makes the best use of the SoC's capabilities. However, programs do not always use the available processing capabilities properly, so software performance estimation and optimization become crucial activities. At runtime, these programs frequently suffer processor stall cycles caused by data missing from the cache. There are several approaches to avoiding these stall cycles, for example using appropriate compilation options to generate the best possible executable code; compilers, however, have only an abstract knowledge (in the form of analytical formulas) of the hardware architecture on which the software will run. Another option is Out-Of-Order processors, but these are very expensive in terms of manufacturing cost, because the Out-Of-Order mechanism requires a large silicon area. In this thesis, we propose an iterative methodology based on cycle-accurate virtual platforms that precisely identifies which instructions of the program under optimization are responsible for the processor stall cycles caused by L1 data-cache misses. The goal is to give the developer clues about the locations in the high-level (typically C/C++) source code that are responsible for these stalls. For each such instruction, we report its contribution to the lengthening of the total program execution time. Finally, we estimate the maximum potential gain achievable if all identified stall cycles were avoided by manually inserting software-directed data-prefetch instructions into the source code of the program being optimized.
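The attribution step can be pictured as a simple aggregation from a cycle-accurate trace back to source lines; the trace and debug-info interfaces below are hypothetical stand-ins for what such a virtual platform provides:

```python
from collections import Counter

def attribute_stalls(trace, debug_info):
    """trace: iterable of (pc, stall_cycles) pairs from the simulation;
    debug_info: pc -> 'file.c:line' mapping (e.g. derived from DWARF)."""
    stalls = Counter()
    for pc, cycles in trace:
        stalls[debug_info.get(pc, "unknown")] += cycles
    total = sum(stalls.values()) or 1
    # Each source line's contribution to the execution-time lengthening,
    # which is also the upper bound on what prefetching there could recover.
    return [(loc, c, c / total) for loc, c in stalls.most_common()]

report = attribute_stalls([(0x400, 12), (0x404, 3), (0x400, 9)],
                          {0x400: "main.c:42", 0x404: "main.c:57"})
```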
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Rakotozafy, Andriamaharavo. „Simulation temps réel de dispositifs électrotechniques“. Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0385/document.

Der volle Inhalt der Quelle
Annotation:
Industrial controllers are constantly subjected to parameter changes, modifications and improvements. They must follow technological evolutions in both hardware and software (libraries, operating systems, control laws...). Despite these constraints, the controllers must still provide all functionality covering sequencing, protections, the human-machine interface and the stability of the controlled system, across a wide range of applications. Every modification (hardware or software), however minor, is risky: in the absence of a prior validation system, it is a potential source of system instability or damage. On-site debugging, analysis and reprogramming are extremely expensive and risky, cumulate expenditure and reduce productivity; this concerns all major industrial applications, and especially Oil & Gas and Marine sites, where working conditions are difficult and testing is reduced to the strict minimum. This thesis proposes two levels of controller validation on an experimental test platform: an algorithm-level validation called Software In the Loop (SIL), treated in the second chapter, and a physical hardware validation called Hardware In the Loop (HIL), treated in the third chapter. SIL validates only the control algorithm, the control law and the conformity of the computed references, without taking into account either the physical command signals or the feedback signals managed by the Input/Output boards. SIL validation of an industrial controller for a system in which an asynchronous machine is fed and regulated by a three-level voltage source converter is treated in the second chapter, with a modeling approach specifically adapted to such validation. The last chapter presents HIL validation on various hardware implementations (Field Programmable Gate Array (FPGA), processors); this validation covers both the control algorithm and the actual physical input/output signals generated by the dedicated boards. Several modeling approaches are presented, each chosen according to the hardware base on which the real-time simulator is implemented. This work now contributes to the controller validation process used by General Electric - Power Conversion © (GE-PC) for its Oil & Gas and Marine applications.
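The SIL level can be pictured as a pure software loop in which the control law meets only a plant model, never physical I/O; the toy controller and plant below are illustrative stand-ins under assumed parameters, not GE-PC's models:

```python
class FirstOrderPlant:
    """Toy stand-in for the converter/machine model (time constant assumed)."""
    def __init__(self, tau=0.05):
        self.tau, self.y = tau, 0.0
    def step(self, u, dt):
        self.y += dt * (u - self.y) / self.tau
        return self.y

class PIController:
    """Toy stand-in for the control law under validation."""
    def __init__(self, kp=2.0, ki=20.0):
        self.kp, self.ki, self.integral = kp, ki, 0.0
    def step(self, ref, y, dt):
        e = ref - y
        self.integral += e * dt
        return self.kp * e + self.ki * self.integral

def run_sil(controller, plant, ref, dt=1e-4, steps=5000):
    """SIL loop: algorithm, control law and references only; no physical signals."""
    y, log = 0.0, []
    for k in range(steps):
        u = controller.step(ref, y, dt)   # command computed by the controller
        y = plant.step(u, dt)             # response of the simulated plant
        log.append((k * dt, u, y))
    return log

trace = run_sil(PIController(), FirstOrderPlant(), ref=1.0)
```

HIL validation then replaces the plant model's direct function call with real command and feedback signals exchanged through I/O hardware in real time.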
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Kekely, Lukáš. „Hardwarová akcelerace aplikací pro monitorování a bezpečnost vysokorychlostních sítí“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236345.

Der volle Inhalt der Quelle
Annotation:
This master's thesis deals with the design of a software-controlled hardware acceleration system for high-speed networks. The main goal is to provide easy access to acceleration for various network security and monitoring applications. The proposed system is designed for 100 Gbps networks: it enables high-speed processing on an FPGA card together with flexible software control. The combination of hardware speed and software flexibility allows easy creation of complex high-performance network applications. The achievable performance improvement of three chosen monitoring and security applications is shown using a simulation model of the designed system.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Goyal, Sachin. „Power network in the loop : subsystem testing using a switching amplifier“. Queensland University of Technology, 2009. http://eprints.qut.edu.au/26521/.

Der volle Inhalt der Quelle
Annotation:
“Hardware in the Loop” (HIL) testing is widely used in the automotive industry: the sophisticated electronic control units used for vehicle control are usually tested and evaluated using HIL simulations. HIL increases the degree of realism in testing any system, and it helps in designing the structure and control of the system under test so that it works effectively in the situations it will encounter. Due to the size and complexity of interactions within a power network, most research is based on pure simulation, and to validate the performance of a physical generator or protection system, most testing is constrained to very simple power networks. This research, however, examines a method to test power-system hardware within a complex virtual environment using the HIL concept. HIL testing of electronic control units and power-system protection devices can easily be performed at signal level, but the performance of power-system equipment such as distributed generation systems cannot be evaluated at signal level. HIL testing for power-system equipment is termed here “Power Network in the Loop” (PNIL). PNIL testing can only be performed at power level and requires a power amplifier that can amplify the simulation signal to the power level. A power network is divided into two parts: one part represents the Power Network Under Test (PNUT), and the other part represents the rest of the complex network. The complex network is simulated in a real-time simulator (RTS) while the PNUT is connected to a Voltage Source Converter (VSC) based power amplifier. Two-way interaction between the simulator and the amplifier is performed using analog-to-digital (A/D) and digital-to-analog (D/A) converters. The power amplifier amplifies the current or voltage signal of the simulator to the power level, establishing the power-level interaction between the RTS and the PNUT. The first part of this thesis presents the design and control of a VSC-based power amplifier that can amplify a broadband voltage signal, together with a new Hybrid Discontinuous Control method proposed for the amplifier. This amplifier can be used for several power-system applications; its use in DSTATCOM and UPS applications is also presented. The later part of the thesis reports the solution of network-in-the-loop testing with the help of this amplifier. The experimental setup for PNIL testing was built in the laboratory of Queensland University of Technology, and the feasibility of PNIL testing has been evaluated in experimental studies. In the last section of this thesis, a universal load with power-regenerative capability is designed and used to test a DG system using PNIL concepts. This thesis is composed of published/submitted papers that form the chapters in this dissertation; each paper was published or submitted during the period of candidature. Chapter 1 integrates all the papers to provide a coherent view of the wide-bandwidth switching amplifier and its use in different power-system applications, especially for power-system testing using PNIL.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

King, Jonathan Charles. „Model-Based Design of a Plug-In Hybrid Electric Vehicle Control Strategy“. Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/34962.

Der volle Inhalt der Quelle
Annotation:
For years the trend in the automotive industry has been toward more complex electronic control systems. The number of electronic control units (ECUs) in vehicles is ever increasing, as is the complexity of the communication networks among them. Increasing fuel economy standards and the increasing cost of fuel are driving hybridization and electrification of the automobile, and achieving superior fuel economy with a hybrid powertrain requires an effective and optimized control system. At the same time, mathematical modeling and simulation tools have become extremely advanced and have turned simulation into a powerful design tool. The combination of increasing control-system complexity and simulation technology has led to an industry-wide trend toward model-based control design. Rather than using models to analyze and validate real-world testing data, simulation is now the primary tool used in the design process long before real-world testing is possible; modeling is used in every step from architecture selection to control-system validation before on-road testing begins. The Hybrid Electric Vehicle Team (HEVT) of Virginia Tech is participating in the 2011-2014 EcoCAR 2 competition, in which the team is tasked with re-engineering the powertrain of a GM-donated vehicle. The primary goals of the competition are to reduce well-to-wheels (WTW) petroleum energy use (PEU) and reduce WTW greenhouse gas (GHG) and criteria emissions while maintaining performance, safety, and consumer acceptability. This paper presents a systematic methodology for using model-based design techniques for architecture selection, control-system design, control-strategy optimization, and controller validation to meet the goals of the competition. Simple energy management and efficiency analysis form the primary basis of architecture selection, and using a novel method, a series-parallel powertrain architecture is selected. The control-system architecture and requirements are defined using a systematic approach based on the interactions between control units, and the vehicle communication networks are designed to facilitate efficient data flow. Software-in-the-loop (SIL) simulation with Mathworks Simulink is used to refine a control strategy to maximize fuel economy. Finally, hardware-in-the-loop (HIL) testing on a dSPACE HIL simulator is demonstrated for performance improvements as well as for safety-critical controller validation. The end product of this design study is a control system that has reached a high level of parameter optimization and validation, ready for on-road testing in a vehicle.
Master of Science
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Du, Wan. „Modélisation et simulation de réseaux de capteurs sans fil“. Phd thesis, Ecole Centrale de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00690466.

Der volle Inhalt der Quelle
Annotation:
This thesis deals with the modeling and simulation of wireless sensor networks in order to provide accurate estimates of energy consumption. A SystemC-based system-level design and simulation framework, named IDEA1, is proposed. It enables design-space exploration of sensor networks at an early stage. The simulation results include packet delivery rate, transmission latency and energy consumption. On a testbed of 9 nodes, the average difference between IDEA1 simulations and experimental measurements is 4.6%. The performance of IDEA1 is compared with another widely used simulator, NS-2. Through hardware/software co-simulation, IDEA1 can provide more detailed models of sensor nodes, and while delivering simulation results at the same abstraction level, it runs simulations twice as fast as NS-2. Finally, two case studies validate the IDEA1 design flow. The performance of IEEE 802.15.4 is comprehensively evaluated for various traffic loads and protocol parameter configurations, and an active vibration control application is studied, for which IDEA1 simulations identify the best choice of communication protocols.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Mekala, Priyanka. „Field Programmable Gate Array Based Target Detection and Gesture Recognition“. FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/723.

Der volle Inhalt der Quelle
Annotation:
The move from Standard Definition (SD) to High Definition (HD) represents a six-fold increase in the data that needs to be processed. With expanding resolutions and evolving compression, there is a need for high performance with flexible architectures to allow quick upgradability, as technology advances in image display resolution, compression techniques, and video intelligence. Software implementations of these systems can attain accuracy, with tradeoffs among processing performance (to achieve specified frame rates while working on large image data sets), power, and cost constraints. New architectures are therefore needed to keep pace with the fast innovations in video and imaging. This work contains dedicated hardware implementations of the pixel- and frame-rate processes on a Field Programmable Gate Array (FPGA) to achieve real-time performance. The contributions of the dissertation are as follows. (1) We develop a target detection system by applying a novel running average mean threshold (RAMT) approach to globalize the threshold required for background subtraction. This approach adapts the threshold automatically to different environments (indoor and outdoor) and different targets (humans and vehicles). For low power consumption and better performance, we design the complete system on FPGA. (2) We introduce a safe-distance factor and develop an algorithm for detecting occlusion occurrences during target tracking; a novel mean threshold is calculated by motion-position analysis. (3) A new strategy for gesture recognition is developed using Combinational Neural Networks (CNN) based on a tree structure, and the method is analyzed on American Sign Language (ASL) gestures. We introduce a novel points-of-interest approach to reduce the feature-vector size and a gradient-threshold approach for accurate classification. (4) We design a gesture recognition system using a hardware/software co-simulated neural network for the high speed and low memory-storage requirements provided by the FPGA. We develop an innovative maximum-distance algorithm which uses only 0.39% of the image as the feature vector to train and test the system design. The gestures involved in different applications may vary, so it is highly essential to keep the feature vector as small as possible while maintaining the same accuracy and performance.
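A compact software rendition of the RAMT idea in (1); the update rate and threshold gain are chosen arbitrarily for illustration, not taken from the dissertation:

```python
import numpy as np

class RunningAverageMeanThreshold:
    """Background subtraction with a running-average background and a global
    threshold derived from the mean frame difference (sketch of RAMT)."""
    def __init__(self, shape, alpha=0.05, k=2.5):
        self.bg = np.zeros(shape)       # background estimate
        self.alpha, self.k = alpha, k   # assumed update rate and threshold gain

    def detect(self, frame):
        diff = np.abs(frame - self.bg)
        threshold = self.k * diff.mean()     # scene-adaptive global threshold
        mask = diff > threshold              # candidate target pixels
        self.bg = (1 - self.alpha) * self.bg + self.alpha * frame
        return mask
```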
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Zeffer, Håkan. „Towards Low-Complexity Scalable Shared-Memory Architectures“. Doctoral thesis, Uppsala University, Department of Information Technology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7135.

Der volle Inhalt der Quelle
Annotation:

Plentiful research has addressed low-complexity software-based shared-memory systems since the idea was first introduced more than two decades ago. However, software-coherent systems have not been very successful in the commercial marketplace. We believe there are two main reasons for this: lack of performance and/or lack of binary compatibility.

This thesis studies multiple aspects of how to design future binary-compatible high-performance scalable shared-memory servers while keeping the hardware complexity at a minimum. It starts with a software-based distributed shared-memory system relying on no specific hardware support and gradually moves towards architectures with simple hardware support.

The evaluation is made in a modern chip-multiprocessor environment with both high-performance compute workloads and commercial applications. It shows that implementing the coherence-violation detection in hardware while solving the interchip coherence in software allows for high-performing binary-compatible systems with very low hardware complexity. Our second-generation hardware-software hybrid performs on par with, and often better than, traditional hardware-only designs.

Based on our results, we conclude that it is not only possible to design simple systems while maintaining performance and the binary-compatibility envelope, but often possible to get better performance than in traditional, more complex designs.

We also explore two new techniques for evaluating a new shared-memory design throughout this work: adjustable simulation fidelity and statistical multiprocessor cache modeling.

APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Kreku, J. (Jari). „Early-phase performance evaluation of computer systems using workload models and SystemC“. Doctoral thesis, Oulun yliopisto, 2012. http://urn.fi/urn:isbn:9789514299902.

Der volle Inhalt der Quelle
Annotation:
Abstract: Novel methods and tools are needed for the performance evaluation of future embedded systems due to increasing system complexity. Systems accommodate a large number of on-terminal and/or downloadable applications offering users numerous services related to telecommunication, audio and video, digital television, internet and navigation. More flexibility, scalability and modularity are expected from execution platforms to support these applications. Digital processing architectures will evolve from the current systems-on-chip to massively parallel computers consisting of heterogeneous subsystems connected by a network-on-chip. As a consequence, the overall complexity of system evaluation will increase by orders of magnitude. The ABSOLUT performance simulation approach presented in this thesis combats evaluation complexity by abstracting the functionality of the applications with workload models consisting of instruction-like primitives. Workload models can be created from application specifications, measurement results, execution traces, or source code. The complexity of execution platform models is also reduced, since the data paths of processing elements need not be modelled in detail, and data transfers and storage are simulated only from the performance point of view. The modelling approach enables early evaluation, since mature hardware or software is not required for the modelling or simulation of complete systems. ABSOLUT is applied to a number of case studies, including mobile phone usage, MP3 playback, MPEG4 encoding and decoding, 3D gaming, virtual network computing, and parallel software-defined radio applications. The platforms used in the studies represent both embedded systems and personal computers, and cover both currently existing platforms and future designs. The results obtained from simulations are compared to measurements from real platforms, revealing an average difference of 12%, which exceeds the accuracy requirements expected from virtual-system-based simulation approaches intended for early evaluation.
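A toy flavor of the workload-model idea: an application is reduced to counts of instruction-like primitives, each charged against an assumed platform cost model (all names and numbers below are invented for illustration):

```python
# Cycle cost per abstract primitive on a hypothetical platform model.
COST = {"process": 1, "read": 4, "write": 6, "send": 40}

def simulate(workload, cpu_freq_hz):
    """Predict execution time from primitive counts instead of real instructions."""
    cycles = sum(COST[op] * count for op, count in workload)
    return cycles / cpu_freq_hz

mp3_frame = [("read", 2613), ("process", 48000), ("write", 1152)]
print(f"{simulate(mp3_frame, 200e6) * 1e3:.3f} ms per frame")
```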
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Manning, Peter Christopher. „Development of a Series Parallel Energy Management Strategy for Charge Sustaining PHEV Operation“. Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/49436.

Der volle Inhalt der Quelle
Annotation:
The Hybrid Electric Vehicle Team of Virginia Tech (HEVT) is participating in the 2012-2014 EcoCAR 2: Plugging in to the Future Advanced Vehicle Technology Competition series organized by Argonne National Lab (ANL), and sponsored by General Motors Corporation (GM) and the U.S. Department of Energy (DOE). The goals of the competition are to reduce well-to-wheel (WTW) petroleum energy consumption (PEU), WTW greenhouse gas (GHG) and criteria emissions while maintaining vehicle performance, consumer acceptability and safety. Following the EcoCAR 2 Vehicle Development Process (VDP) of designing, building, and refining an advanced technology vehicle over the course of the three year competition using a 2013 Chevrolet Malibu donated by GM as a base vehicle, the selected powertrain is a Series-Parallel Plug-In Hybrid Electric Vehicle (PHEV) with P2 (between engine and transmission) and P4 (rear axle) motors, a lithium-ion battery pack, an internal combustion engine, and an automatic transmission. Development of a charge sustaining control strategy for this vehicle involves coordination of controls for each of the main powertrain components through a distributed control strategy. This distributed control strategy includes component controllers for each individual component and a single supervisory controller responsible for interpreting driver demand and determining component commands to meet the driver demand safely and efficiently. For example, the algorithm accounts for a variety of system operating points and will penalize or reward certain operating points for other conditions. These conditions include but are not limited to rewards for discharging the battery when the state of charge (SOC) is above the target value or penalties for operating points with excessive emissions. Development of diagnostics and remedial actions is an important part of controlling the powertrain safely. In order to validate the control strategy prior to in-vehicle operation, simulations are run against a plant model of the vehicle systems. This plant model can be run in both controller Software- and controller Hardware-In-the-Loop (SIL and HIL) simulations. This paper details the development of the controls for diagnostics, major selection algorithms, and execution of commands and its integration into the Series-Parallel PHEV through the supervisory controller. This paper also covers the plant model development and testing of the control algorithms using controller SIL and HIL methods. This paper details reasons for any changes to the control system, and describes improvements or tradeoffs that had to be made to the control system architecture for the vehicle to run reliably and meet its target specifications. Test results illustrate how changes to the plant model and control code properly affect operation of the control system in the actual vehicle. The VT Malibu is operational and projected to perform well at the final competition.
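The penalize/reward mechanism the abstract mentions can be pictured as a cost function over candidate operating points; the attribute names, weights and SOC term below are assumptions for illustration, not HEVT's calibrated strategy:

```python
W_EMISSIONS, W_SOC = 0.2, 0.5   # hypothetical tuning weights

def operating_point_cost(point, soc, soc_target):
    """Lower is better: efficiency-corrected fuel use, an emissions penalty,
    and a reward for discharging the battery when SOC is above target."""
    cost = point["fuel_power"] / max(point["efficiency"], 1e-6)
    cost += W_EMISSIONS * point["emissions_rate"]            # penalize dirty points
    cost -= W_SOC * (soc - soc_target) * point["battery_power"]
    return cost

def pick_operating_point(candidates, soc, soc_target):
    return min(candidates, key=lambda p: operating_point_cost(p, soc, soc_target))

candidates = [
    {"fuel_power": 30e3, "efficiency": 0.34, "emissions_rate": 1.0, "battery_power": -5e3},
    {"fuel_power": 22e3, "efficiency": 0.30, "emissions_rate": 0.6, "battery_power": 8e3},
]
best = pick_operating_point(candidates, soc=0.65, soc_target=0.50)
```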
Master of Science
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Lövgren, Simon. „Simulating Energy-Efficient Hardware The Software Out-of-order Processor“. Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-332801.

Der volle Inhalt der Quelle
Annotation:
The modern trends in technology scaling are not extremely bright. The cost of transistors has leveled off recently, effectively halting the ability to put additional transistors on a chip for the same price. In addition, Dennard scaling, which has allowed switching additional transistors while scaling to smaller nodes, is slowing significantly. This thesis, with focus on the hardware, proposes an enhanced stall-on-use in-order core hardware/software co-design which improves performance and energy efficiency by allowing out-of-program-order execution: the hardware and software communicate with one another, allowing the hardware to make dynamic decisions on how to direct execution flow to expose additional memory- and instruction-level parallelism. The results are very promising, showing an increase in both performance (up to 3.7x speedup) and energy efficiency (up to a 59% increase). While additional work is needed to evaluate the extent of the benefits across a wide range of applications, SWOOP looks to be a good option for improving energy efficiency without compromising performance for memory-bound applications with MLP.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Zhang, Jingyao. „Hardware-Software Co-Design for Sensor Nodes in Wireless Networks“. Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/50972.

Der volle Inhalt der Quelle
Annotation:
Simulators are important tools for analyzing and evaluating different design options for wireless sensor networks (sensornets) and hence have been intensively studied in the past decades. However, existing simulators only support evaluation of protocols and the software aspects of sensornet design. They cannot accurately capture the significant impact of various hardware designs on sensornet performance. As a result, the performance and energy benefits of customized hardware designs are difficult to evaluate in sensornet research. To fill this technical void, the first section describes the design and implementation of SUNSHINE, a scalable hardware-software emulator for sensornet applications. SUNSHINE is the first sensornet simulator that effectively supports joint evaluation and design of sensor hardware and software performance in a networked context. SUNSHINE captures the performance of network protocols, software and hardware up to cycle-level accuracy through its seamless integration of three existing sensornet simulators: the network simulator TOSSIM, the instruction-set simulator SimulAVR and the hardware simulator GEZEL. SUNSHINE solves several sensornet simulation challenges, including data exchanges and time synchronization across different simulation domains and simulation accuracy levels. SUNSHINE also provides a hardware specification scheme for simulating flexible and customized hardware designs. Several experiments illustrate SUNSHINE's simulation capability, and evaluation results demonstrate that SUNSHINE is an efficient tool for software-hardware co-design in sensornet research.
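Cross-domain time synchronization of the kind SUNSHINE performs can be illustrated with a conservative lockstep scheme between discrete-event domains; the sketch below (Python) is a generic co-simulation pattern, not SUNSHINE's code.

import heapq, itertools

_seq = itertools.count()  # tie-breaker so same-time events stay orderable

class Domain:
    def __init__(self, name):
        self.name, self.now, self.events = name, 0.0, []

    def schedule(self, time, action):
        heapq.heappush(self.events, (time, next(_seq), action))

    def next_time(self):
        return self.events[0][0] if self.events else float("inf")

    def advance_to(self, horizon):
        # Only events up to the agreed horizon are safe to execute.
        while self.events and self.events[0][0] <= horizon:
            self.now, _, action = heapq.heappop(self.events)
            action()

def co_simulate(domains, end_time):
    # Every domain advances only to the global minimum next-event time,
    # so no domain can receive a message stamped in its past.
    while (horizon := min(d.next_time() for d in domains)) <= end_time:
        for d in domains:
            d.advance_to(horizon)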

Even though SUNSHINE can simulate flexible sensor nodes (nodes that contain FPGA chips as coprocessors) in wireless networks, it does not estimate the power and energy consumption of sensor nodes, and so far no simulators have been developed to evaluate the performance of such flexible nodes in wireless networks. The second section presents PowerSUNSHINE, a power- and energy-estimation tool that fills this void. PowerSUNSHINE is the first scalable power/energy estimation tool for WSNs that provides accurate predictions for both fixed and flexible sensor nodes. The section first describes the requirements and challenges of building PowerSUNSHINE, then presents power/energy models for both fixed and flexible sensor nodes. Two testbeds, a MicaZ platform and a flexible node consisting of a microcontroller, a radio and an FPGA-based coprocessor, demonstrate the simulation fidelity of PowerSUNSHINE. Several evaluation results based on simulation and the testbeds show that PowerSUNSHINE is a scalable simulation tool that accurately estimates the power/energy consumption of both fixed and flexible sensor nodes.
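The core of such a tool is a per-component, per-state power model integrated over time. The sketch below (Python) shows the general idea with invented current draws; PowerSUNSHINE's actual models and calibration are more detailed.

# (component, state) -> current draw in milliamps (illustrative values only)
CURRENT_MA = {
    ("mcu", "active"): 8.0,   ("mcu", "sleep"): 0.02,
    ("radio", "tx"): 17.4,    ("radio", "rx"): 19.7,   ("radio", "off"): 0.0,
    ("fpga", "active"): 25.0, ("fpga", "off"): 0.0,
}

def energy_mj(trace, voltage=3.0):
    """trace: list of (component, state, seconds); returns millijoules."""
    return sum(CURRENT_MA[(component, state)] * voltage * seconds
               for component, state, seconds in trace)

# One second of active computation plus a 200 ms packet transmission:
print(energy_mj([("mcu", "active", 1.0), ("radio", "tx", 0.2)]))  # 34.44 mJ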

Since the main components of a sensor node are a microcontroller and a wireless transceiver (radio), real-time performance may become a bottleneck when executing computation-intensive tasks in sensor networks. A coprocessor can relieve the microcontroller of some of these tasks and hence decrease the probability of dropping packets from the wireless channel. Even though adding a coprocessor benefits sensor networks, designing applications for sensor nodes with coprocessors from scratch is challenging because design details must be considered in multiple domains: software, hardware, and network. To solve this problem, we propose a hardware-software co-design framework for network applications that contain multiprocessor sensor nodes. The framework includes a three-layered architecture for multiprocessor sensor nodes and application interfaces under the framework. The layered architecture makes the design of applications for multiprocessor nodes flexible and efficient, while the application interfaces support the deployment of reliable applications on multiprocessor sensor nodes. A resource-sharing technique lets the processor, coprocessor and radio work in coordination via a communication bus, as sketched below. Several testbeds containing multiprocessor sensor nodes are deployed to evaluate the effectiveness of the framework, and network experiments executed in the SUNSHINE emulator demonstrate the benefits of multiprocessor sensor nodes in many network scenarios.
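The resource-sharing idea can be sketched as a bus arbiter that serializes transfers among the three masters; the interface below (Python) is invented for illustration and does not reproduce the framework's actual APIs.

import threading

class SharedBus:
    """Serializes access so processor, coprocessor and radio transfers
    cannot interleave on the shared communication bus."""
    def __init__(self):
        self._lock = threading.Lock()

    def transfer(self, src, dst, payload):
        with self._lock:  # one bus master at a time
            print(f"{src} -> {dst}: {payload!r}")

bus = SharedBus()
bus.transfer("mcu", "fpga", b"\x01\x02")   # offload a task to the coprocessor
bus.transfer("fpga", "radio", b"result")   # forward the result for transmission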
Ph. D.
APA, Harvard, Vancouver, ISO and other citation styles
49

Tjerngren, Jon. „Modeling and Hardware-in-the-loop Simulations of Contactor Dynamics : Mechanics, Electromagnetics and Software“. Thesis, Linköpings universitet, Institutionen för systemteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-107744.

Full text of the source
Annotation:
The subject of this master's thesis is to model an ABB contactor's dynamics and to develop a hardware-in-the-loop simulation environment. The hardware-in-the-loop method uses computer models that are simulated in a real-time simulator connected to hardware components. A contactor is an electrically controlled mechanical switching device used in circuits where large currents can occur. In this thesis, the contactor is divided into three separate subsystems, and models are developed for each of them: the contactor's mechanics, its electromagnetics and its electronic components. The models are implemented in MATLAB and Simulink. The hardware part of the hardware-in-the-loop simulations consists of the electronic parts that are not modeled; to connect it to a real-time simulator from dSPACE, a hardware interface was constructed. This report focuses on the modeling of the mechanics and the electromagnetics as well as the software implementation. The thesis work was carried out in collaboration with another student, whose report focuses on the modeling of the electronics and the construction of the hardware interface. The hardware-in-the-loop simulations are validated using measurements collected from a real contactor. The conclusion is that the simulated behavior corresponds well with that of a real contactor.
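As a flavor of the mechanical submodel, the armature can be approximated as a spring-damper-mass system driven by the electromagnetic force and stepped with forward Euler. The sketch below (Python) uses invented parameters; the thesis's actual models were built in MATLAB and Simulink.

def simulate_armature(f_mag, m=0.05, k=400.0, c=2.0, dt=1e-5, t_end=0.02):
    """f_mag(t): net electromagnetic force [N] acting on the armature.
    Returns displacement samples of m*x'' = f_mag - k*x - c*x'."""
    x, v, t, xs = 0.0, 0.0, 0.0, []
    while t < t_end:
        a = (f_mag(t) - k * x - c * v) / m  # Newton's second law
        v += a * dt
        x += v * dt
        xs.append(x)
        t += dt
    return xs

# A step force when the coil is energized pulls the contacts together:
positions = simulate_armature(lambda t: 30.0 if t > 0.002 else 0.0)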
APA, Harvard, Vancouver, ISO and other citation styles
50

Cunningham, Larry E. „A Programmable PCM Data Simulator for Microcomputer Hosts“. International Foundation for Telemetering, 1990. http://hdl.handle.net/10150/613390.

Full text of the source
Annotation:
International Telemetering Conference Proceedings / October 29-November 02, 1990 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Modern microcomputers are proving to be viable hosts for telemetry functions, including data simulators. A specialized high-performance hardware architecture for generating and processing simulator data can be implemented on an add-in card for the microcomputer, while support software on the host provides a simple, high-quality human interface with a high degree of user programmability. Based on this strategy, the Physical Science Laboratory at New Mexico State University (PSL) is developing a Programmable PCM Data Simulator for microcomputer hosts. Specifications and hardware/software architectures for PSL's Programmable PCM Data Simulator are discussed, as well as its interactive user interface.
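At its simplest, such a simulator emits fixed-length minor frames that begin with a sync pattern and carry a subframe counter plus programmable data words. The sketch below (Python) shows a generic format, not PSL's specification.

SYNC = bytes([0xFA, 0xF3, 0x20])  # hypothetical 24-bit frame sync pattern

def minor_frame(counter, words_per_frame=16, fill=0xAA):
    frame = bytearray(SYNC)
    frame.append(counter & 0xFF)                     # subframe ID word
    frame.extend([fill] * (words_per_frame - len(frame)))
    return bytes(frame)

# Four consecutive minor frames of a simulated PCM stream:
stream = b"".join(minor_frame(i) for i in range(4))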
APA, Harvard, Vancouver, ISO and other citation styles