Dissertations on the topic "Simulation – Hardware – Software"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 dissertations for research on the topic "Simulation – Hardware – Software".
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic entry for the selected work is formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are present in the metadata.
Browse dissertations from a wide variety of disciplines and compile a correctly formatted bibliography.
Lu, Lipin. „Simulation Software and Hardware for Teaching Ultrasound“. Scholarly Repository, 2008. http://scholarlyrepository.miami.edu/oa_theses/143.
Brankovic, Aleksandar. „Performance simulation methodologies for hardware/software co-designed processors“. Doctoral thesis, Universitat Politècnica de Catalunya, 2015. http://hdl.handle.net/10803/287978.
Hardware/software co-designed processors have been proposed by academia and industry as a potential solution for building less complex processors that consume less energy. Unlike other alternatives, this kind of processor reduces complexity and energy consumption by applying dynamic binary translation and optimization from an external instruction set architecture to an internal, tailored one. This thesis addresses the challenges of simulating this kind of architecture. Simulation is a standard step in processor design and development, since it allows many alternatives to be explored without having to build the hardware for each of them. Simulating a hardware/software co-designed processor is more complex than simulating a traditional, hardware-only processor; for example, no simulation tools are available to the community. Researchers therefore tend to assume that the software layer responsible for translating and optimizing the applications carries no specific weight, i.e. that its computational cost is low or at best constant. This thesis demonstrates that these assumptions are incorrect and that results based on them tend to be very inaccurate; a first conclusion is therefore that simulating the software layer is absolutely necessary. Furthermore, because simulation is slow, techniques have been proposed that try to obtain accurate results in as little time as possible. A common practice in the design of conventional, hardware-only processors is to simulate only parts of the applications, called samples, which correspond to different application phases and usually span a few million instructions. To obtain an accurate microarchitectural state for each sample, the simulator's microarchitectural structures are exercised before measurements begin, a process known as warm-up. Unfortunately, this methodology cannot be applied to hardware/software co-designed processors: their warm-up is three to four orders of magnitude longer than in conventional processor simulation, because the structures and state of the software layer must be warmed as well. This thesis proposes warm-up-based simulation techniques that reduce simulation time by 65X with an average error of 0.75%, and these results extrapolate to different hardware and software-layer configurations. Finally, conventional techniques for selecting which application samples to simulate are also inapplicable to hardware/software co-designed processors, because the samples behave very differently once the software layer is taken into account. This thesis proposes a new algorithm that, for a similar error, reduces the number of samples to simulate by 3X compared with the traditional algorithms for conventional processors; these results, too, extrapolate to different hardware and software configurations.
In conclusion, this thesis answers the question of how to simulate hardware/software co-designed processors, an alternative to traditional processor design. It demonstrates that the software layer must be simulated, and it proposes new, efficient warm-up and sample-selection techniques and algorithms that are tolerant of different configurations.
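To make the warm-up problem concrete, here is an illustrative sketch (ours, not the thesis code) of sampled simulation on a toy direct-mapped cache: a sample measured from a cold microarchitectural state reports a different miss rate than the same sample measured after the preceding accesses have warmed the state.

```python
# Illustrative sketch: why warm-up matters in sampled simulation.
# The cache model and the synthetic trace are invented for this example.
import random

CACHE_LINES = 256          # direct-mapped, one address per line (toy model)
random.seed(0)
trace = [random.randrange(4096) for _ in range(200_000)]  # synthetic address trace

def miss_rate(trace, start, length, warmup):
    """Simulate cache misses for trace[start:start+length].

    If warmup > 0, the preceding `warmup` accesses are replayed first so the
    cache state is representative when measurement begins.
    """
    cache = [None] * CACHE_LINES
    for addr in trace[max(0, start - warmup):start]:   # warm, do not measure
        cache[addr % CACHE_LINES] = addr
    misses = 0
    for addr in trace[start:start + length]:           # measure
        line = addr % CACHE_LINES
        if cache[line] != addr:
            misses += 1
            cache[line] = addr
    return misses / length

print("cold sample :", miss_rate(trace, 100_000, 2_000, warmup=0))
print("warm sample :", miss_rate(trace, 100_000, 2_000, warmup=20_000))
```

In a co-designed processor the warm-up would additionally have to rebuild the translation/optimization state of the software layer, which is what makes it orders of magnitude longer than in this hardware-only toy.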
Blumer, Aric David. „Register Transfer Level Simulation Acceleration via Hardware/Software Process Migration“. Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/29380.
Yildirim, Gokce. „Smoke Simulation On Programmable Graphics Hardware“. Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606545/index.pdf.
Freitas, Arthur. „Hardware/Software Co-Verification Using the SystemVerilog DPI“. Universitätsbibliothek Chemnitz, 2007. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200700941.
Wells, George James. „Hardware emulation and real-time simulation strategies for the concurrent development of microsatellite hardware and software“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ62899.pdf.
Liu, Tsun-Ho. „Future hardware realization of self-organizing learning array and its software simulation“. Ohio : Ohio University, 2002. http://www.ohiolink.edu/etd/view.cgi?ohiou1174680878.
Herfs, Werner Josef. „Modellbasierte Software in the Loop Simulation von Werkzeugmaschinen“. Aachen : Apprimus-Verl, 2010. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=018939251&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.
Bergström, Christoffer. „Simulation Framework of embedded systems in armored vehicle design“. Thesis, Umeå universitet, Institutionen för fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185123.
Tang, Yi. „SUNSHINE: Integrate TOSSIM and P-Sim“. Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/40721.
Underwood, Ryan C. „An open framework for highly concurrent hardware-in-the-loop simulation“. Diss., Rolla, Mo. : University of Missouri-Rolla, 2007. http://scholarsmine.mst.edu/thesis/pdf/Underwood_09007dcc8042c7c7.pdf.
Zhang, Jingyao. „SUNSHINE: A Multi-Domain Sensor Network Simulator“. Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/45146.
Rafeeq, Akhil Ahmed. „A Development Platform to Evaluate UAV Runtime Verification Through Hardware-in-the-loop Simulation“. Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/99041.
Safety is one of the most crucial factors considered when designing an autonomous vehicle. Modern vehicles that use a machine learning-based control algorithm can have unpredictable behavior in real-world scenarios that were not anticipated while training the algorithm. Verifying the underlying software code with all possible scenarios is a difficult task. Runtime verification is an efficient solution where a relatively simple set of monitors validate the decisions made by the sophisticated control software against a set of predefined rules. If the monitors detect an erroneous behavior, they initiate a predetermined corrective action. Unmanned aerial vehicles (UAVs), like drones, are a class of autonomous vehicles that use complex software to control their flight. This thesis proposes a platform that allows the development and validation of monitors for UAVs using configurable hardware. The UAV is emulated on a high-fidelity simulator, thereby eliminating the time-consuming process of flying and validating monitors on a real UAV. The platform supports the implementation of multiple monitors that can execute in parallel. Scenarios to violate rules and cause the monitors to trigger corrective actions can easily be generated on the simulator.
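As a rough illustration of the monitoring idea described in this abstract (our sketch with invented rules and signal names, not the thesis platform), each monitor validates the controller's next command against a simple predefined rule and substitutes a predetermined safe action when the rule is violated:

```python
# Minimal runtime-verification sketch: simple monitors guard a complex
# (possibly learned) controller. Rules, limits, and field names are invented.
from dataclasses import dataclass

@dataclass
class State:
    altitude_m: float
    geofence_dist_m: float   # distance remaining to the geofence boundary

# Each monitor: (name, rule that must hold, corrective action when it does not)
MONITORS = [
    ("altitude_ceiling",
     lambda s, cmd: s.altitude_m + cmd.get("climb_m", 0.0) <= 120.0,
     {"climb_m": 0.0}),
    ("geofence",
     lambda s, cmd: s.geofence_dist_m > 10.0,
     {"hold": True}),
]

def checked_command(state, command):
    """Pass the controller's command through all monitors; the first violated
    rule replaces it with that monitor's predetermined safe action."""
    for name, rule, safe_action in MONITORS:
        if not rule(state, command):
            print(f"monitor '{name}' fired -> corrective action {safe_action}")
            return safe_action
    return command

# Example: the controller requests a climb that would exceed the 120 m
# ceiling; the altitude monitor overrides it.
print(checked_command(State(altitude_m=115.0, geofence_dist_m=500.0),
                      {"climb_m": 10.0}))
```

The appeal of the approach, as the abstract notes, is that the monitors stay simple enough to trust even when the control software they guard is not.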
Silva, Junior José Cláudio Vieira e. „Verificação de Projetos de Sistemas Embarcados através de Cossimulação Hardware/Software“. Universidade Federal da Paraíba, 2015. http://tede.biblioteca.ufpb.br:8080/handle/tede/7856.
This work proposes an environment for the verification of heterogeneous embedded systems through distributed co-simulation. Verification occurs in real time, co-simulating the system software and the hardware platform using the High Level Architecture (HLA) as middleware. The novelty of this approach is that it not only provides support for simulations but also allows synchronous integration with any physical hardware device. The Ptolemy framework is used as the simulation platform. The integration of HLA with Ptolemy and with hardware models opens up a wide range of applications, such as testing many devices at the same time running the same or different applications or modules, using Ptolemy for real-time control of embedded systems, and distributing the execution across several embedded devices to improve performance. Moreover, the HLA-based approach allows any type of robot, as well as simulators other than Ptolemy, to be connected to the environment. Case studies are presented as proof of concept, showing the successful integration of Ptolemy and the HLA and the verification of systems using hardware-in-the-loop and robot-in-the-loop.
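The synchronization discipline that a middleware such as an HLA runtime enforces can be illustrated with a toy coordinator (our simplification, not the HLA API): no simulator or device proxy advances to logical time t+1 until every participant's outputs for time t have been delivered.

```python
# Toy lock-step co-simulation coordinator. This is NOT the HLA API; it only
# illustrates the time-advance barrier an RTI provides between federates.
class Federate:
    def __init__(self, name):
        self.name = name
        self.inbox = {}

    def step(self, t):
        # Produce this federate's outputs for logical time t.
        return {f"{self.name}@{t}": t}

def run_cosimulation(federates, steps):
    for t in range(steps):
        outputs = {}
        for f in federates:            # everyone computes step t...
            outputs.update(f.step(t))
        for f in federates:            # ...then everyone receives step t
            f.inbox.update(outputs)    # before anyone moves on to t+1
    return federates[0].inbox

print(run_cosimulation([Federate("ptolemy_model"), Federate("hardware_proxy")], 3))
```

In the thesis setting, one such participant is a physical device (hardware- or robot-in-the-loop), which is why the barrier must be enforced in real time.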
Dočekal, Martin. „HIL simulace manipulátorů nebo stroje“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-444291.
Peters, Eduardo. „Coprocessador para aceleração de aplicações desenvolvidas utilizando paradigma orientado a notificações“. Universidade Tecnológica Federal do Paraná, 2012. http://repositorio.utfpr.edu.br/jspui/handle/1/325.
This work presents a new hardware coprocessor to accelerate applications developed using the Notification-Oriented Paradigm (NOP). A NOP application has the advantages of both event-based programming and declarative programming, enabling higher level software development, improving code reuse, and reducing the number of unnecessary computations. Because a NOP application is composed of a network of small computational entities communicating only when needed, it is a good candidate for a direct hardware implementation. In order to investigate this assumption, a coprocessor that is able to run existing NOP applications was created. The coprocessor was developed in VHDL and tested in FPGAs, providing a decrease of 96% in the number of clock cycles compared to a purely software implementation.
González, Cortés Carlos Eduardo. „Diseño e implementación del software de vuelo para un nano-satélite tipo Cubesat“. Tesis, Universidad de Chile, 2013. http://www.repositorio.uchile.cl/handle/2250/115307.
The Cubesat nanosatellite standard was conceived to make small space projects for scientific and educational purposes feasible at low cost and on short schedules. Along these lines, the Faculty of Physical and Mathematical Sciences of the Universidad de Chile launched the SUCHAI project, which consists of building, putting into orbit, and operating the first satellite developed by a university in the country. The on-board computer, an embedded system with limited computing capacity, scarce memory, and low power consumption, must run the flight software that will control the spacecraft's operations once in orbit. The objective of this work is the design and implementation of this software for the SUCHAI satellite, as a reliable, flexible, and extensible solution to serve as the basis for future aerospace missions. The software design is a three-layer structure that divides the problem conveniently: the lowest layer contains the hardware drivers, the middle layer hosts the operating system, and the top layer contains the details of the application required specifically for this system. For the application-layer architecture, the design-pattern concept is studied and applied; specifically, an adaptation of the command pattern is implemented. The satellite is thus conceived as an executor of generic commands, yielding a solution that is maintainable, modifiable, and extensible over time through the programming of whatever concrete commands are required. The implementation targets a PIC24F and includes drivers for the I2C, RS232, and SPI peripherals as well as for the radio-communication and power subsystems. FreeRTOS was chosen as the operating-system layer, providing concurrent task processing and timing and synchronization facilities. Special emphasis was placed on implementing the proposed application-layer architecture, resulting in software able to execute a series of commands programmed to meet the project's operational requirements; this is the principal mechanism for extending its functionality and adapting it to future missions. The system was tested and verified using the hardware-in-the-loop simulation technique, collecting operational data under hypothetical operating conditions through the log generated on the serial console. The mission's operational requirements were verified with successful results, yielding the satellite's functional baseline system. As future work, this software will be used to integrate the remaining SUCHAI subsystems, demonstrating its capacity for adaptation and extension, as a step prior to the final test: operating correctly in outer space.
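The command-pattern organisation described above can be sketched as follows (illustrative Python with invented command names; the actual flight software runs on FreeRTOS on a PIC24F):

```python
# Sketch of the "satellite as an executor of generic commands" idea.
# Command names and payloads are invented for illustration.
class Command:
    def execute(self):
        raise NotImplementedError

class SendBeacon(Command):
    def execute(self):
        return "beacon transmitted"

class ReadTelemetry(Command):
    def execute(self):
        return "telemetry: battery=8.1V temp=21C"

# Extending the system for a new mission = registering a new concrete command.
REGISTRY = {"send_beacon": SendBeacon(), "read_telemetry": ReadTelemetry()}

def dispatch(name):
    """Generic executor: look the command up and run it, knowing nothing
    about what any concrete command actually does."""
    return REGISTRY[name].execute()

for cmd in ("read_telemetry", "send_beacon"):
    print(cmd, "->", dispatch(cmd))
```

The design choice is what the abstract claims for it: the executor never changes, so the software stays maintainable while mission-specific behaviour accumulates as new command classes.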
Deicke, Markus. „Virtuelle Absicherung von Steuergeräte-Software mit hardwareabhängigen Komponenten“. Doctoral thesis, Universitätsbibliothek Chemnitz, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-230123.
The constantly increasing number of functions in modern automobiles and the growing degree of cross-linking between electronic control units (ECUs) require new methods to master the complexity of the validation and verification process. Virtual validation and verification enables the integration of the software on a PC system that is independent of the target hardware, to guarantee the required software quality in the early development stages; furthermore, software reuse on future microcontrollers can be verified. All this is enabled by the AUTOSAR standard, which provides consistent interface descriptions that allow the abstraction of hardware and software. However, the standard contains hardware-dependent components, called complex device drivers (CDDs). CDDs cannot be integrated directly into a platform for virtual verification, because they require specific hardware that is not generally available on such a platform. Nevertheless, CDDs are an essential part of the ECU software and therefore need to be considered in a holistic approach to validation and verification. This thesis describes seven different concepts for including CDDs in the virtual verification process. Based on an evaluation of each concept's suitability for everyday use, a method is developed for always choosing the optimal solution for any use case of CDDs in ECU software. Following this method, the two concepts suited to the most frequent use cases are detailed and developed as prototypes in this thesis. The first concept enables the full simulation of a CDD; this is necessary to allow the integration of the functional software itself without the driver, so that all interfaces can be tested even when the CDD is not available, and the fully automated generation of the simulation makes the process very efficient. With the second concept, a CDD can be integrated entirely into a platform for virtual verification, using a hardware abstraction layer to connect the driver's hardware interfaces to the hardware available on the platform; the driver can thus control real hardware components and be tested completely, and a flexible configuration of the abstraction layer allows the concept to be applied to a wide variety of CDDs. In this thesis, both concepts are tested and evaluated using genuine projects from series development.
Cemin, Paulo Roberto. „Plataforma de medição de consumo para comparação entre software e hardware em projetos energeticamente eficientes“. Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1310.
The large number of mobile devices increased the interest in low-power designs. Tools that allow the evaluation of alternative implementations give the designer actionable information to create energy-efficient designs. This paper presents a new power measurement platform able to compare the energy consumption of different algorithms implemented in software and in hardware. The proposed platform is able to measure the energy consumption of a specific process running in a general-purpose CPU with a standard operating system, and to compare the results with equivalent algorithms running in an FPGA. This allows the designer to choose the most energy-efficient software vs. hardware partitioning for a given application. Compared with the current state-of-the-art, the presented platform has four distinguishing features: (i) support for both software and hardware power measurements, (ii) measurement of individual code sections in the CPU, (iii) support for dynamic clock frequencies, and (iv) improvement of measurement precision. We also demonstrate how the developed platform has been used to analyze the energy consumption of network intrusion detection algorithms aimed at detecting probing attacks.
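The core comparison such a platform enables can be illustrated with made-up numbers: energy is the integral of voltage times current over the measured code section, and the software-vs-hardware choice is whichever implementation costs less per run.

```python
# Back-of-the-envelope energy comparison. All traces and numbers below are
# invented for illustration; the platform in the thesis measures them.
def energy_joules(current_samples_a, supply_v, sample_period_s):
    # E = sum(V * I * dt) over the measured code section
    return sum(i * supply_v * sample_period_s for i in current_samples_a)

# Hypothetical traces: the CPU draws more current but the FPGA runs longer.
cpu_trace  = [0.80] * 5000    # 0.80 A for 5000 samples
fpga_trace = [0.35] * 9000    # 0.35 A for 9000 samples

dt = 1e-6                      # 1 MHz sampling, i.e. 1 us per sample
for name, trace in (("software/CPU", cpu_trace), ("hardware/FPGA", fpga_trace)):
    print(f"{name}: {energy_joules(trace, 3.3, dt) * 1e3:.2f} mJ per run")
```

Note how the cheaper-per-second implementation is not automatically the cheaper-per-run one; that trade-off is exactly what per-section measurement exposes.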
Rudraiah, Dakshinamurthy Amruth. „A Compiler-based Framework for Automatic Extraction of Program Skeletons for Exascale Hardware/Software Co-design“. Master's thesis, University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5695.
Ashby, Ryan Michael. „Hardware in the Loop Simulation of a Heavy Truck Braking System and Vehicle Control System Design“. The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1366046155.
Deicke, Markus. „Virtuelle Absicherung von Steuergeräte-Software mit hardwareabhängigen Komponenten“. Universitätsverlag Chemnitz, 2016. https://monarch.qucosa.de/id/qucosa%3A20810.
Tuncali, Cumhur Erkan. „Implementation And Simulation Of Mc68hc11 Microcontroller Unit Using Systemc For Co-design Studies“. Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12609177/index.pdf.
Pieper, Tobias [author], and Roman Obermaisser [reviewer]. „Distributed co-simulation framework for hardware- and software-in-the-loop testing of networked embedded real-time systems“. Siegen : Universitätsbibliothek der Universität Siegen, 2020. http://d-nb.info/1220506214/34.
Oselame, Gleidson Brandão. „Desenvolvimento de software e hardware para diagnóstico e acompanhamento de lesões dermatológicas suspeitas para câncer de pele“. Universidade Tecnológica Federal do Paraná, 2014. http://repositorio.utfpr.edu.br/jspui/handle/1/973.
Cancer is responsible for about 7 million deaths annually worldwide. An estimated 25% of all cancers are skin cancers, which in Brazil are the most frequent type in every geographic region. Among them is melanoma, which accounts for 4% of skin cancers and whose worldwide incidence has doubled in the past decade. Among the diagnostic methods employed is the ABCD rule, which considers the asymmetry (A), borders (B), color (C), and diameter (D) of stains or nevi. Digital image processing has shown good potential to aid the early diagnosis of melanoma. The objective of this study was therefore to develop software on the MATLAB® platform, together with hardware to standardize image acquisition, for the diagnosis and monitoring of skin lesions suspected of malignancy (melanoma). The ABCD rule guided the development of the computational analysis methods, and MATLAB was used as the programming environment for the digital image processing software. The images were obtained from two free-access image banks: images of melanomas (n = 15) and of nevi (not cancer) (n = 15). The RGB images were converted to grayscale, an 8x8 median filter was applied, followed by a 3x3 neighborhood approximation technique; the images were then binarized, black and white were inverted, and the contours of the lesion were subsequently extracted as features. For standardized image acquisition a hardware prototype was developed; it was not used in this study (which used diagnostically confirmed images from the image banks) but has been validated for evaluating lesion diameter (D). Descriptive statistics were used, the groups were compared with the non-parametric Mann-Whitney U test for two independent samples, and the sensitivity (SE) and specificity (SP) of each variable were evaluated with the ROC curve. The classifier was an artificial neural network with radial basis functions, which achieved a diagnostic accuracy of 90.9% for melanoma images and 100% for non-cancer images, for an overall predictive accuracy of 95.5%. Regarding the SE and SP of the proposed method, the area under the ROC curve was 0.967, which suggests excellent predictive ability, especially at low cost, since the software can run on most operating systems in use today.
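The pre-processing chain described in this abstract can be sketched with generic NumPy/SciPy operations (our illustration; the thesis used MATLAB and an RBF neural network):

```python
# Sketch of an ABCD-style feature pipeline: grayscale conversion, median
# filtering, binarisation, then simple asymmetry/diameter features.
# The concrete features are our illustrative stand-ins.
import numpy as np
from scipy.ndimage import median_filter

def abcd_features(rgb):                      # rgb: HxWx3 float array in [0,1]
    gray = rgb @ [0.299, 0.587, 0.114]       # luminance grayscale
    smooth = median_filter(gray, size=8)     # 8x8 median filter, as in the text
    lesion = smooth < smooth.mean()          # binarise: lesion darker than skin
    # A-like asymmetry feature: fraction of lesion pixels that do NOT overlap
    # the lesion's own left-right mirror image (0 = perfectly symmetric).
    mirrored = lesion[:, ::-1]
    asym = np.logical_xor(lesion, mirrored).sum() / max(lesion.sum(), 1)
    # D-like diameter proxy: width of the lesion's bounding box in pixels.
    cols = np.where(lesion.any(axis=0))[0]
    diameter = (cols[-1] - cols[0] + 1) if cols.size else 0
    return {"asymmetry": float(asym), "diameter_px": int(diameter)}

print(abcd_features(np.random.rand(64, 64, 3)))  # random image, just to run
```

Features like these would then be fed to the classifier (the RBF network in the thesis) to separate melanoma from benign nevi.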
Zvonček, Radovan. „Knihovna procesorů pro návrh vestavěných systémů“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-412853.
Kronbauer, Fernando André. „Memorias transacionais : prototipagem e simulação de implementações em hardware e uma caracterização para o problema de gerenciamento de contenção em software“. [s.n.], 2008. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276161.
Der volle Inhalt der QuelleDissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação
Made available in DSpace on 2018-08-13T10:38:16Z (GMT). No. of bitstreams: 1 Kronbauer_FernandoAndre_M.pdf: 3637569 bytes, checksum: 4c5752e2ae7f853d3b5f4971d6d7cbab (MD5) Previous issue date: 2009
As parallel architectures become prevalent in the computer industry, more and more programmers are required to write parallel programs and are thus exposed to the problems related to the use of traditional mechanisms for concurrency control. Transactional memory has been devised as a means of easing the burden of writing parallel programs: the programmer has only to mark the sections of code that are to be executed in an atomic and isolated way, in the form of transactions, and the system takes care of the synchronization details. In this work we explore different proposals for transactional memories based on specific hardware support (HTM), developing a flexible platform for the prototyping, simulation, and characterization of these systems. We also explore a transactional memory system based solely on software support (STM), devising a novel approach for managing contention among transactions. This new approach takes into account the access patterns to the different data in an application when choosing the contention-management strategy to be used for accessing those data. We modified the STM system to enable this association between data and contention management, and using the new implementation we characterized the STM system based on the access patterns to a program's data, running it on different hardware. Our results show the viability of using transactional memories in an academic research environment, and they point to directions for future work aimed at making their use viable for industry as well.
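The key idea, choosing a contention manager per datum from that datum's observed access pattern, can be sketched in a few lines (our toy illustration with an invented threshold, not the thesis implementation):

```python
# Toy per-datum contention-manager selection. The policy names and the
# conflict threshold are invented; only the association idea is the point.
conflict_counts = {}   # datum -> observed transaction conflicts

def on_conflict(datum):
    conflict_counts[datum] = conflict_counts.get(datum, 0) + 1

def manager_for(datum):
    """Frequently contended data get a polite exponential-backoff manager;
    rarely contended data get an aggressive manager that never waits."""
    return "backoff" if conflict_counts.get(datum, 0) > 10 else "aggressive"

for _ in range(15):
    on_conflict("shared_queue")     # hot spot, conflicts often
on_conflict("config_table")        # touched once

print(manager_for("shared_queue"))  # -> backoff
print(manager_for("config_table"))  # -> aggressive
```

A single global policy would have to compromise between these two access patterns; the per-datum association lets each shared object get the policy its pattern favours.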
Silva, Hilgad Montelo da. „Simulação com hardware in the loop aplicada a veículos submarinos semi-autônomos“. Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-09022009-164239/.
Unmanned underwater vehicles (UUVs) have many commercial, military, and scientific applications because of their potential capabilities and significant cost-performance improvements over traditional means of obtaining valuable underwater information. The development of a reliable sampling and testing platform for these vehicles requires a thorough system design and many costly at-sea trials during which system specifications can be validated. Modeling and simulation provide a cost-effective way to carry out preliminary component, system (hardware and software), and mission testing and verification, thereby reducing the number of potential failures in at-sea trials. An accurate simulation environment can help engineers find hidden errors in the UUV embedded software and gain insight into the UUV's operation and dynamics. This work describes the implementation of a UUV control algorithm in MATLAB/SIMULINK, its automatic conversion to executable C++ code, and the verification of its performance directly on the embedded computer using simulations. It details the procedure for converting the models from MATLAB to C++ code, the integration of the control software with the real-time operating system used on the embedded computer (VxWorks), and the hardware-in-the-loop simulation (HILS) strategy that was developed. The main contribution of this work is a rational framework to support the final implementation of the control software on the embedded computer, starting from a model developed in an environment friendly to control engineers, such as SIMULINK.
de Graaf, Niels. „Simulation of Attitude and Orbit Control for APEX CubeSat“. Thesis, Luleå tekniska universitet, Rymdteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-80736.
Ryd, Jonatan, and Jeffrey Persson. „Development of a pipeline to allow continuous development of software onto hardware : Implementation on a Raspberry Pi to simulate a physical pedal using the Hardware In the Loop method“. Thesis, KTH, Hälsoinformatik och logistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296952.
Saab wants to investigate the hardware-in-the-loop method as a concept, and also what a hardware-in-the-loop infrastructure would look like. Hardware-in-the-loop is based on continuously testing hardware that is simulated. The software Saab wants to use for the hardware-in-the-loop method is Jenkins, a continuous integration and continuous delivery tool. To simulate the hardware, Saab wants to investigate the use of an application programming interface between a Raspberry Pi and the Robot Framework programming language. The reason Saab wants to investigate all of this is that they believe it can improve the frequency and quality of testing, which would lead to an improvement of their products. The theory behind hardware-in-the-loop, continuous integration, and continuous delivery is explained in this report. The hardware-in-the-loop method was implemented with the continuous integration and continuous delivery tool Jenkins, and an application programming interface between the general-purpose input/output pins of a Raspberry Pi and Robot Framework was developed. With these implementations in place, the hardware-in-the-loop method was finally integrated, with Raspberry Pis used to simulate the hardware.
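The Raspberry Pi side of such a setup can be sketched as a small Python library whose public methods Robot Framework exposes as test keywords (the pin number and the RPi.GPIO usage below are illustrative assumptions, not the thesis code):

```python
# Sketch of a Robot Framework keyword library driving a GPIO pin that stands
# in for the physical pedal. Runs only on a Raspberry Pi with RPi.GPIO.
import RPi.GPIO as GPIO

class PedalSimulator:
    """Robot Framework library: each public method becomes a test keyword."""

    def __init__(self, pin=18):          # BCM pin 18 is an assumption
        self.pin = pin
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(self.pin, GPIO.OUT)

    def press_pedal(self):
        GPIO.output(self.pin, GPIO.HIGH)   # emulate the pedal being pressed

    def release_pedal(self):
        GPIO.output(self.pin, GPIO.LOW)

    def cleanup(self):
        GPIO.cleanup()
```

A Robot Framework suite would import this with a `Library    PedalSimulator` setting, and a Jenkins pipeline stage would run the suite (for example `robot pedal_tests.robot`) on every commit, which is the continuous-testing loop the abstract describes.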
Alluri, Veerendra Bhargav. „MULTIPLE CHANNEL COHERENT AMPLITUDE MODULATED (AM) TIME DIVISION MULTIPLEXING (TDM) SOFTWARE DEFINED RADIO (SDR) RECEIVER“. UKnowledge, 2008. http://uknowledge.uky.edu/gradschool_theses/499.
Haffar, Mohamad. „Développement d'une plateforme de co-simulation en vue de validation et d'évaluation de performances des systèmes de communication pour les installations de distribution électriques“. Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENT043.
Since 2004, a new worldwide communication standard, IEC 61850, has been introduced in the majority of substation automation systems, bringing new prospects for innovation to the substation world. One of its features is that it allows the exchange of security-related real-time communication messages across the communication network. These messages are used as control information for distributed automation applications (DAAs). Considering that DAAs have a direct effect on the dependability of a smart-grid architecture, the reliability of these real-time IEC 61850 messages must be evaluated. For these reasons, our research deals with the development of a co-simulation platform that permits the evaluation and validation of an IEC 61850 communication network.
Brink, Michael Joseph. „Hardware-in-the-loop simulation of pressurized water reactor steam-generator water-level control, designed for use within physically distributed testing environments“. The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1357273230.
França, André Luiz Pereira de. „Estudo, desenvolvimento e implementação de algoritmos de aprendizagem de máquina, em software e hardware, para detecção de intrusão de rede: uma análise de eficiência energética“. Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1166.
The increasing network speeds, number of attacks, and need for energy efficiency are pushing software-based network security to its limits. A common kind of threat is the probing attack, in which an attacker tries to find vulnerabilities by sending a series of probe packets to a target machine. This work presents the study, development, and implementation of a network-packet feature-extraction algorithm in hardware, and of three machine-learning classifiers (decision tree, naive Bayes, and k-nearest neighbors) in both software and hardware, for the detection of probing attacks. The work also presents detailed results on classification accuracy, throughput, and energy consumption for each implementation.
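For the software side, a comparison of the same three classifier families can be sketched with scikit-learn on synthetic per-flow features (all numbers invented; the thesis evaluated its own software and hardware implementations on real traffic):

```python
# Sketch: train/compare the three classifier families named in the abstract
# on toy "flow" features. Feature meanings and distributions are invented.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Toy per-flow features, e.g. [packets/s, distinct ports probed, mean pkt size]
normal  = rng.normal([ 50,  3, 800], [20,  2, 150], size=(500, 3))
probing = rng.normal([400, 40, 120], [80, 10,  40], size=(500, 3))
X = np.vstack([normal, probing])
y = np.array([0] * 500 + [1] * 500)          # 1 = probing attack

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
for clf in (DecisionTreeClassifier(), GaussianNB(),
            KNeighborsClassifier(n_neighbors=5)):
    print(type(clf).__name__, "accuracy:", clf.fit(Xtr, ytr).score(Xte, yte))
```

The hardware question the thesis adds is then which of these decision procedures maps most cheaply onto an FPGA at line rate, and at what energy cost per packet.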
Palm, Johan. „High Performance FPGA-Based Computation and Simulation for MIMO Measurement and Control Systems“. Thesis, Mälardalen University, School of Innovation, Design and Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-7477.
The Stressometer system is a measurement and control system used in cold rolling to improve the flatness of a metal strip. To achieve this goal, the system employs a multiple-input multiple-output (MIMO) control system with a considerable number of sensors and actuators. As a consequence, the computational load on the Stressometer control system becomes very high if overly advanced functions are used. At the same time, advances in rolling-mill mechanical design make it necessary to implement more complex functions for the Stressometer system to stay competitive. Most industrial players in this market consider improved computational power, for measurement, control, and modeling applications, to be a key competitive factor. Accordingly, there is a need to improve the computational power of the Stressometer system. Several different approaches toward this objective have been identified, e.g. exploiting hardware parallelism in modern general-purpose and graphics processors.
Another approach is to implement different applications in FPGA-based hardware, either tailored to a specific problem or as part of a hardware/software co-design. Through a hardware/software co-design approach, the efficiency of the Stressometer system can be increased and the overall demand for processing power lowered, since the available resources can be exploited more fully. Hardware-accelerated platforms can be used to increase the computational power of the Stressometer control system without major changes to the existing hardware. A hardware upgrade can thus be as simple as connecting a cable to an accelerator platform, while hardware/software co-design is used to find a suitable hardware/software partition, moving applications between software and hardware.
To determine whether this hardware/software co-design approach is realistic, the feasibility of implementing simulator, computational, and control applications in FPGA-based hardware needs to be assessed. This is accomplished by selecting two specific applications for closer study: a Stressometer measuring-roll simulator and a parallel Cholesky algorithm, both implemented in FPGA-based hardware.
Based on these studies, this work determines that FPGA technology is well suited to implementing both simulator and computational applications. The Stressometer measuring-roll simulator approximates the force and pulse signals of the measuring roll at a relatively modest resource cost, consuming only 1747 slices and eight DSP slices, while the parallel FPGA-based Cholesky component provides performance in the range of GFLOP/s, exceeding the performance of the personal computer used for comparison in several simulations, although at a very high resource cost. The results of this thesis, based on the two feasibility studies, indicate that it is possible to increase the processing power of the Stressometer control system using FPGA technology.
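The computational kernel from the second feasibility study is the Cholesky factorisation A = L * L^T; a plain NumPy sketch of the algorithm being accelerated (ours, not the parallel FPGA implementation) is:

```python
# Column-by-column Cholesky factorisation of a symmetric positive-definite
# matrix: A = L @ L.T with L lower-triangular.
import numpy as np

def cholesky(A):
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        # Diagonal entry: remove the contribution of columns already factored.
        L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
        # Entries below the diagonal in column j.
        L[j+1:, j] = (A[j+1:, j] - L[j+1:, :j] @ L[j, :j]) / L[j, j]
    return L

M = np.random.rand(6, 6)
A = M @ M.T + 6 * np.eye(6)          # construct a well-conditioned SPD matrix
L = cholesky(A)
print(np.allclose(L @ L.T, A))       # -> True
```

The column updates within each iteration are independent of one another, which is precisely the parallelism an FPGA implementation can exploit.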
Vlach, Jan. „Algoritmy souběžného technického a programového návrhu“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2007. http://www.nusl.cz/ntk/nusl-412761.
Der volle Inhalt der QuelleNjoyah, ntafam Perrin. „Méthodologie d'identification et d'évitement des cycles de gel du processeur pour l'optimisation de la performance du logiciel sur le matériel“. Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM021/document.
One of the purposes of microelectronics is to design and manufacture small, low-cost SoCs targeting markets such as the Internet of Things. With fixed hardware on which no flexibility is possible, one of the challenges for an embedded software developer is to write the program so that, at runtime, the software makes the best use of the SoC's capabilities. However, these programs do not always use the available processing capabilities properly, so software performance estimation and optimization is a crucial activity. At runtime, these programs are very often victims of processor data-stall cycles. There are several approaches to avoiding these stalls, for example using the appropriate compilation options to generate the best executable code; however, compilers have only abstract knowledge (in the form of analytical formulas) of the hardware architecture on which the software will be executed. Another way of addressing this issue is to use out-of-order processors, but these are very expensive in terms of manufacturing cost because the out-of-order mechanism requires a large silicon area. In this thesis, we propose an iterative methodology based on cycle-accurate virtual platforms that precisely identifies the instructions of the program responsible for generating processor data-stall cycles. The goal is to provide the developer with clues to the lines in the high-level (typically C/C++) source code of the program that are responsible for these stalls, reporting for each instruction its contribution to the lengthening of the total program execution time. Finally, we estimate the maximum potential gain that can be achieved if all identified stall cycles are avoided by manually inserting software preloading (prefetch) instructions into the source code of the program being optimized.
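The bookkeeping this methodology implies can be sketched as follows (invented sample data): attribute the data-stall cycles observed on the cycle-accurate platform back to source lines, rank the lines, and bound the best-case gain if every identified stall were hidden by preloading.

```python
# Toy stall-cycle attribution. Source lines, cycle counts, and the total are
# all invented; a cycle-accurate virtual platform would supply them.
stall_samples = [                     # (source line, stall cycles observed there)
    ("filter.c:42", 1_200_000),
    ("filter.c:57",   300_000),
    ("main.c:10",      20_000),
]
total_cycles = 10_000_000             # total execution time in cycles

per_line = {}
for line, cycles in stall_samples:
    per_line[line] = per_line.get(line, 0) + cycles

for line, cycles in sorted(per_line.items(), key=lambda kv: -kv[1]):
    print(f"{line}: {cycles} stall cycles "
          f"({100 * cycles / total_cycles:.1f}% of runtime)")

max_gain = sum(per_line.values()) / total_cycles
print(f"upper bound if all stalls are hidden by preloading: "
      f"{100 * max_gain:.1f}% shorter runtime")
```

The ranked report is the "clue" the abstract mentions: it tells the developer exactly which source lines are worth annotating with preload instructions first.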
Rakotozafy, Andriamaharavo. „Simulation temps réel de dispositifs électrotechniques“. Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0385/document.
Industrial controllers are continually subjected to parameter changes, modifications, and permanent improvements. They have to follow off-the-shelf technologies, in hardware as well as software (libraries, operating systems, control regulations, ...). Beyond these primary necessities, additional aspects of system operation, including sequencing, protections, the human-machine interface, and system stability, have to be implemented and interfaced correctly, and these functions should be structured generically so that they can be used in common across a wide range of applications. Any modification (hardware or software), even a slight one, is risky: in the absence of a prior validation system, such modifications are a potential source of system instability or damage. On-site debugging and modification are not only extremely expensive but can be highly risky, accumulate expenditure, and reduce productivity. This concerns all major industrial applications, Oil & Gas installations, and marine applications, where working conditions are difficult and the amount of testing that can be done is strictly limited to the mandatory tests. This thesis proposes two levels of industrial-controller validation that can be carried out on an experimental test platform: an algorithm-validation level called software-in-the-loop (SIL), treated in the second chapter, and a physical hardware validation called hardware-in-the-loop (HIL), treated in the third chapter. SIL validates only the control algorithm, the control law, and the computed references, taking into account neither the actual physical commands nor the physical input feedback managed by the input/output boards. SIL validation of a system in which an industrial asynchronous motor is fed and regulated by a variable-speed drive with a three-level voltage-source converter is treated in the second chapter, with a particular modeling approach adapted to such validation. The last chapter presents HIL validation with various hardware implementations (field-programmable gate arrays (FPGAs), processors); such validation checks both the control algorithm and the actual physical input/output signals generated by the dedicated boards, with the modeling approach chosen each time according to the hardware implementation. This work has contributed to the system validation used by General Electric - Power Conversion © (GE-PC) as part of their validation phase, which is mandatory for Oil & Gas projects and marine applications.
Kekely, Lukáš. „Hardwarová akcelerace aplikací pro monitorování a bezpečnost vysokorychlostních sítí“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236345.
Goyal, Sachin. „Power network in the loop : subsystem testing using a switching amplifier“. Queensland University of Technology, 2009. http://eprints.qut.edu.au/26521/.
King, Jonathan Charles. „Model-Based Design of a Plug-In Hybrid Electric Vehicle Control Strategy“. Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/34962.
Du, Wan. „Modélisation et simulation de réseaux de capteurs sans fil“. Phd thesis, Ecole Centrale de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00690466.
Mekala, Priyanka. „Field Programmable Gate Array Based Target Detection and Gesture Recognition“. FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/723.
Zeffer, Håkan. „Towards Low-Complexity Scalable Shared-Memory Architectures“. Doctoral thesis, Uppsala University, Department of Information Technology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7135.
Plentiful research has addressed low-complexity software-based shared-memory systems since the idea was first introduced more than two decades ago. However, software-coherent systems have not been very successful in the commercial marketplace. We believe there are two main reasons for this: lack of performance and/or lack of binary compatibility.
This thesis studies multiple aspects of how to design future binary-compatible high-performance scalable shared-memory servers while keeping the hardware complexity at a minimum. It starts with a software-based distributed shared-memory system relying on no specific hardware support and gradually moves towards architectures with simple hardware support.
The evaluation is made in a modern chip-multiprocessor environment with both high-performance compute workloads and commercial applications. It shows that implementing the coherence-violation detection in hardware while solving the interchip coherence in software allows for high-performing binary-compatible systems with very low hardware complexity. Our second-generation hardware-software hybrid performs on par with, and often better than, traditional hardware-only designs.
Based on our results, we conclude that it is not only possible to design simple systems while maintaining performance and the binary-compatibility envelope, it is often possible to get better performance than in traditional and more complex designs.
We also explore two new techniques for evaluating a new shared-memory design throughout this work: adjustable simulation fidelity and statistical multiprocessor cache modeling.
Kreku, J. (Jari). „Early-phase performance evaluation of computer systems using workload models and SystemC“. Doctoral thesis, Oulun yliopisto, 2012. http://urn.fi/urn:isbn:9789514299902.
Performance evaluation of embedded computer systems is becoming ever more challenging because of their growing complexity. Systems contain a large number of applications providing the user with services related to, for example, telecommunications, audio and video playback, web browsing, and navigation. Consequently, execution platforms are required to offer ever more flexibility, scalability, and modularity, and platform architectures are evolving from today's System-on-Chip (SoC) solutions towards Network-on-Chip (NoC) parallel computers composed of heterogeneous subsystems. New methods and tools are needed to manage the complexity of evaluating the combined performance of applications and execution platforms. The ABSOLUT simulation approach presented in this thesis reduces the complexity of performance evaluation by abstracting application behaviour into workload models consisting of load primitives rather than processor instructions. Workload models can be created from application specifications, measurement results, execution traces, or application source code. Execution platforms are modelled with simple capacity models instead of functional models: processor architectures are modelled at a high level, and data transfer and storage are modelled only from the performance point of view. The approach enables early performance evaluation, since models can be created and simulated before the final application or execution platform exists. ABSOLUT has been applied in a number of case studies involving, for example, mobile-phone usage, audio and video playback and recording, 3D gaming, and digital data transfer. The examples used typical execution platforms from both the desktop and embedded worlds, and some were based on future or imaginary platforms. Some of the simulations were verified by comparing simulation results with measurements obtained from real systems: an average deviation of 12% was observed, which exceeds the accuracy required of early-phase performance evaluation methods.
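The modelling style is easy to illustrate (our sketch with invented numbers, not ABSOLUT itself): the application becomes a list of load primitives, the platform a set of capacity figures, and simulated time is load divided by capacity.

```python
# Workload-model / capacity-model sketch. All quantities are invented.
WORKLOAD = [                    # load primitives for one video frame
    ("compute_ops", 2_000_000), # abstract processing operations
    ("read_bytes",    500_000),
    ("write_bytes",   200_000),
]

PLATFORM = {                    # capacity model of one subsystem
    "compute_ops": 400e6,       # ops/s the processor model can sustain
    "read_bytes":  100e6,       # bytes/s from the memory model
    "write_bytes":  80e6,
}

# Simulated time = sum over primitives of (amount of load / capacity).
frame_time = sum(amount / PLATFORM[kind] for kind, amount in WORKLOAD)
print(f"estimated frame time: {frame_time * 1e3:.2f} ms "
      f"({1 / frame_time:.1f} frames/s)")
```

Because neither side of the model is functional, both can be drafted from a specification long before working software or silicon exists, which is the point of early-phase evaluation.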
Manning, Peter Christopher. „Development of a Series Parallel Energy Management Strategy for Charge Sustaining PHEV Operation“. Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/49436.
Lövgren, Simon. „Simulating Energy-Efficient Hardware: The Software Out-of-order Processor“. Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-332801.
Zhang, Jingyao. „Hardware-Software Co-Design for Sensor Nodes in Wireless Networks“. Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/50972.
SUNSHINE is the first sensornet simulator that effectively supports joint evaluation and design of sensor hardware and software performance in a networked context. SUNSHINE captures the performance of network protocols, software, and hardware up to cycle-level accuracy through its seamless integration of three existing sensornet simulators: the network simulator TOSSIM, the instruction-set simulator SimulAVR, and the hardware simulator GEZEL. SUNSHINE solves several sensornet simulation challenges, including data exchanges and time synchronization across different simulation domains and simulation accuracy levels. SUNSHINE also provides a hardware specification scheme for simulating flexible and customized hardware designs. Several experiments are given to illustrate SUNSHINE's simulation capability, and evaluation results demonstrate that SUNSHINE is an efficient tool for software-hardware co-design in sensornet research.
Even though SUNSHINE can simulate flexible sensor nodes (nodes containing FPGA chips as coprocessors) in wireless networks, it does not estimate the power/energy consumption of sensor nodes, and so far no simulators have been developed to evaluate the performance of such flexible nodes in wireless networks. The second section therefore presents PowerSUNSHINE, a power- and energy-estimation tool that fills this void. PowerSUNSHINE is the first scalable power/energy estimation tool for WSNs that provides an accurate prediction for both fixed and flexible sensor nodes. We first describe the requirements and challenges of building PowerSUNSHINE, and then present power/energy models for both fixed and flexible sensor nodes; a simple state-based accounting of this kind is sketched after this abstract. Two testbeds, a MicaZ platform and a flexible node consisting of a microcontroller, a radio, and an FPGA-based co-processor, demonstrate the simulation fidelity of PowerSUNSHINE. We also discuss several evaluation results based on simulation and the testbeds, showing that PowerSUNSHINE is a scalable simulation tool that provides accurate estimates of power/energy consumption for both fixed and flexible sensor nodes.
Since the main components of a sensor node are a microcontroller and a wireless transceiver (radio), their real-time performance may become a bottleneck when executing computation-intensive tasks in sensor networks. A coprocessor can relieve the microcontroller of some of its tasks and hence decrease the probability of dropping packets from the wireless channel. But even though adding a coprocessor benefits sensor networks, designing applications for sensor nodes with coprocessors from scratch is challenging, because design details in multiple domains, including software, hardware, and network, must be considered together. To solve this problem, we propose a hardware-software co-design framework for network applications that contain multiprocessor sensor nodes. The framework includes a three-layered architecture for multiprocessor sensor nodes and the corresponding application interfaces. The layered architecture makes the design of multiprocessor node applications flexible and efficient, while the application interfaces support the deployment of reliable applications. A resource-sharing technique lets the processor, coprocessor, and radio work in coordination over the communication bus. Several testbeds containing multiprocessor sensor nodes are deployed to evaluate the effectiveness of our framework, and network experiments are executed in the SUNSHINE emulator to demonstrate the benefits of using multiprocessor sensor nodes in many network scenarios.
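A state-based power model of the kind the PowerSUNSHINE description suggests can be sketched as follows (placeholder current draws, not the tool's calibrated values): each component of the node is billed for the time it spends in each power state.

```python
# State-based energy accounting for a sensor node. Per-state power figures
# and the duty-cycle schedule are invented placeholder values.
POWER_MW = {                   # component -> state -> power draw in mW
    "mcu":   {"active": 26.0, "sleep": 0.02},
    "radio": {"tx": 52.0, "rx": 59.0, "off": 0.0},
}

def energy_mj(schedule):
    """schedule: list of (component, state, seconds); mW * s = mJ."""
    return sum(POWER_MW[c][s] * t for c, s, t in schedule)

# One duty cycle: wake 50 ms, receive 10 ms, transmit 5 ms, sleep the rest.
cycle = [("mcu", "active", 0.050), ("mcu", "sleep", 0.950),
         ("radio", "rx", 0.010), ("radio", "tx", 0.005),
         ("radio", "off", 0.985)]
print(f"energy per 1 s duty cycle: {energy_mj(cycle):.3f} mJ")
```

Extending such a model to a flexible node means adding the FPGA coprocessor as another component with its own states, which is where calibrated per-state figures become essential.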
Tjerngren, Jon. „Modeling and Hardware-in-the-loop Simulations of Contactor Dynamics : Mechanics, Electromagnetics and Software“. Thesis, Linköpings universitet, Institutionen för systemteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-107744.
Cunningham, Larry E. „A Programmable PCM Data Simulator for Microcomputer Hosts“. International Foundation for Telemetering, 1990. http://hdl.handle.net/10150/613390.
Modern microcomputers are proving to be viable hosts for telemetry functions, including data simulators. A specialized high-performance hardware architecture for generating and processing simulator data can be implemented on an add-in card for the microcomputer, while support software implemented on the host provides a simple, high-quality human interface with a high degree of user programmability. Based on this strategy, the Physical Science Laboratory at New Mexico State University (PSL) is developing a Programmable PCM Data Simulator for microcomputer hosts. Specifications and hardware/software architectures for PSL's Programmable PCM Data Simulator are discussed, as well as its interactive user interface.