Dissertations / Theses on the topic 'Progettistica hardware e software'

To see the other types of publications on this topic, follow the link: Progettistica hardware e software.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Progettistica hardware e software.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Taylor, Ramsay G. "Verification of hardware dependent software." Thesis, University of Sheffield, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.575744.

Full text
Abstract:
Many good processes exist for ensuring the integrity of software systems. Some are analysis processes that seek to confirm that certain properties hold for the system, and these rely on the ability to infer a correct model of the behaviour of the software. To ensure that such inference is possible, many high-integrity systems are written in "safe" language subsets that restrict the program to constructs whose behaviour is sufficiently abstract and well defined that it can be determined independent of the execution environment. This necessarily prevents any assumptions about the system hardware, but consequently makes it impossible to use these techniques on software that must interact with the hardware, such as device drivers. This thesis addresses this shortcoming by taking the opposite approach: if the analyst accepts absolute hardware dependence (that the analysis will only be valid for a particular target system: the hardware that the driver is intended to control), then the specification of the system can be used to infer the behaviour of the software that interacts with it. An analysis process is developed that operates on disassembled executable files and formal system specifications to produce CSP-OZ formal models of the software's behaviour. This analysis process is implemented in a prototype called Spurinna, which is then used in conjunction with the verification tools Z2SAL, the SAL suite, and Isabelle/HOL to demonstrate the verification of properties of the software.
APA, Harvard, Vancouver, ISO, and other styles
2

Hilton, Adrian J. "High integrity hardware-software codesign." Thesis, Open University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.402249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Edmison, Joshua Nathaniel. "Hardware Architectures for Software Security." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/29244.

Full text
Abstract:
The need for hardware-based software protection stems primarily from the increasing value of software coupled with the inability to trust software that utilizes or manages shared resources. By correctly utilizing security functions in hardware, trust can be removed from software. Existing hardware-based software protection solutions generally suffer from utilization of trusted software, lack of implementation, and/or extreme measures such as processor redesign. In contrast, the research outlined in this document proposes that substantial, hardware-based software protection can be achieved, without trusting software or redesigning the processor, by augmenting existing processors with security management hardware placed outside of the processor boundary. Benefits of this approach include the ability to add security features to nearly any processor, update security features without redesigning the processor, and provide maximum transparency to the software development and distribution processes. The major contributions of this research include the augmentation methodology, design principles, and a graph-based method for analyzing hardware-based security systems.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
4

Blaha, Vít. "Hardware a software inteligentního spotřebiče." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-221136.

Full text
Abstract:
Nowadays, interest in smart appliances that enable consumption reduction or consumption shifting is growing. Such appliances can react to the actual situation in the distribution network. From the energy distributor's point of view, the activity of these appliances improves the stability of the distribution network, while for the end customer it offers the possibility of saving money. This thesis describes the transformation of a standard fridge into a smart fridge controlled by a Raspberry Pi microcomputer. The smart fridge can communicate with a supervisory system and change its behavior (temperature set point) according to its instructions. The appliance can be controlled manually by a group of buttons, while its state is visualized on an alphanumeric display. The appliance can also be controlled through a web interface. The thesis further describes the design of a printed circuit board (PCB) for connecting all the necessary sensors and actuators to the Raspberry Pi. The software is written in the C++ programming language.
APA, Harvard, Vancouver, ISO, and other styles
5

Figueiredo, Boneti Carlos Santieri de. "Exploring coordinated software and hardware support for hardware resource allocation." Doctoral thesis, Universitat Politècnica de Catalunya, 2009. http://hdl.handle.net/10803/6018.

Full text
Abstract:
Multithreaded processors are now common in the industry as they offer high performance at low cost. Traditionally, in such processors, the allocation of hardware resources among the multiple threads is done implicitly, by hardware policies. However, a new class of multithreaded hardware allows the allocation of resources to be explicitly controlled or biased by the software. Currently, there is little or no coordination between the allocation of resources done by the hardware and the prioritization of tasks done by the software.
This thesis aims to narrow the gap between the software and the hardware with respect to hardware resource allocation, by proposing a new explicit resource allocation hardware mechanism and novel schedulers that use the currently available hardware resource allocation mechanisms.
It approaches the problem in two different types of computing systems. In the high performance computing domain, we characterize the first processor to present a mechanism that allows the software to bias the allocation of hardware resources, the IBM POWER5. In addition, we propose the use of hardware resource allocation as a way to balance high performance computing applications. Finally, we propose two new scheduling mechanisms that are able to transparently and successfully balance applications in real systems using hardware resource allocation. In the soft real-time domain, we propose a hardware extension to the existing explicit resource allocation hardware and, in addition, two software schedulers that use the explicit allocation hardware to improve the schedulability of tasks in a soft real-time system.
In this thesis, we demonstrate that system performance improves by making the software aware of the mechanisms that control the amount of resources given to each running thread. In particular, for the high performance computing domain, we show that it is possible to decrease the execution time of MPI applications by biasing the hardware resource assignment between threads. In addition, we show that it is possible to decrease the number of missed deadlines when scheduling tasks in a soft real-time SMT system.
APA, Harvard, Vancouver, ISO, and other styles
6

Nilsson, Per. "Hardware / Software co-design for JPEG2000." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5796.

Full text
Abstract:

For demanding applications, such as image or video processing, there may be computations that are not well suited to digital signal processors. While a DSP processor is appropriate for some tasks, its instruction set could be extended to achieve higher performance on the tasks that such a processor is not originally designed for. The platform used in this project is flexible in the sense that new hardware can be designed to speed up certain computations.

This thesis analyzes the computationally complex parts of JPEG2000. In order to achieve sufficient performance for JPEG2000, hardware acceleration may be needed.

First, a JPEG2000 decoder was implemented for a DSP processor in assembly. Once the firmware had been written, the cycle consumption of its parts was measured and estimated. From this analysis, the bottlenecks of the system were identified. Furthermore, new processor instructions are proposed that could be implemented for this system. Finally, the performance improvements are estimated.

APA, Harvard, Vancouver, ISO, and other styles
7

Endresen, Vegard Haugen. "Hardware-software intercommunication in reconfigurable systems." Thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10762.

Full text
Abstract:

In this thesis, hardware-software intercommunication in a reconfigurable system has been investigated, based on a framework for run-time reconfiguration. The goal has been to develop a fast and flexible link between applications running on an embedded processor and reconfigurable accelerator hardware in the form of a Xilinx Virtex device. As a start, the link was broken down into hardware and software components based on constraints from earlier work and a general literature search. A register architecture for reconfigurable modules, a reconfigurable interface, and a backend bridge linking reconfigurable hardware with the system bus were identified as the main hardware components, whereas device drivers and a hardware operating system were identified as software components. These components were developed in a bottom-up approach, then deployed, tested, and evaluated. Synthesis and simulation results from this thesis suggest that a hybrid register architecture, a mix of shift-based and addressable register architectures, might be a good solution for a reconfigurable module. Such an architecture enables a reconfigurable interface with full-duplex capability and an initially small area overhead compared to a full-scale RAM implementation. Although the hybrid architecture might not be suitable for all types of reconfigurable modules, it can be a good compromise when attempting to achieve a uniform reconfigurable interface. Backend bridge solutions were developed assuming the above hybrid reconfigurable interface. Three main types were researched: a software register backend, a data cache backend, and an instruction and data cache backend. Performance evaluation shows that the instruction and data cache backend outperforms the other two, with an average acceleration ratio of roughly 5-10. Surprisingly, the data cache backend performs worst of all due to latency ratios and design choices.
Aside from the BRAM component required by the cache backends, resource consumption was shown to be only marginally larger than for a traditional software register solution. Caching using a controller in the backend bridge can thus provide good speedup at little cost, as long as BRAM resources are not scarce. A software-to-hardware interface has been created through a Linux character device driver and a hardware operating system (HWOS) daemon. While the device drivers provide a middleware layer for hardware access, the HWOS separates applications from system management through a message queue interface. Performance testing shows a large increase in delay when involving the Linux device drivers and the HWOS, as compared to calls made directly from the kernel. Although this is natural, the software components are very important for providing a high-performance platform. As additional work, specialized cell handling for reconfigurable modules has been addressed in the context of an MPEG-4 decoder. Some light has also been shed on the design of reconfigurable modules in Xilinx ISE, which can radically improve development time and decrease complexity compared to a Xilinx Platform Studio flow. In the process of demonstrating run-time reconfiguration, it was discovered that a clock signal resists being piped through bus macros. Broken functionality has also been shown when applying run-time reconfiguration to synchronous designs using the framework for self-reconfiguration.

APA, Harvard, Vancouver, ISO, and other styles
8

Lu, Yandong. "Hardware/Software Partitioning of Embedded Systems." Thesis, University of Manchester, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.520747.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

King, Myron Decker. "A methodology for hardware-software codesign." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/84891.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 150-156).
Special purpose hardware is vital to embedded systems as it can simultaneously improve performance while reducing power consumption. The integration of special purpose hardware into applications running in software is difficult for a number of reasons. Some of the difficulty is due to the difference between the models used to program hardware and software, but great effort is also required to coordinate the simultaneous execution of the application running on the microprocessor with the accelerated kernel(s) running in hardware. To further compound the problem, current design methodologies for embedded applications require an early determination of the design partitioning, which allows hardware and software to be developed simultaneously, each adhering to a rigid interface contract. This approach is problematic because often a good hardware-software decomposition is not known until deep into the design process. Fixed interfaces and the burden of reimplementation prevent the migration of functionality motivated by repartitioning. This thesis presents a two-part solution to the integration of special purpose hardware into applications running in software. The first part addresses the problem of generating infrastructure for hardware-accelerated applications. We present a methodology in which the application is represented as a dataflow graph and the computation at each node is specified for execution either in software or as specialized hardware using the programmer's language of choice. An interface compiler has been implemented which takes as input the FIFO edges of the graph and generates code to connect all the different parts of the program, including those which communicate across the hardware/software boundary. This methodology, which we demonstrate on an FPGA platform, enables programmers to effectively exploit hardware acceleration without ever leaving the application space.
The second part of this thesis presents an implementation of the Bluespec Codesign Language (BCL) to address the difficulty of experimenting with hardware/software partitioning alternatives. Based on guarded atomic actions, BCL can be used to specify both hardware and low-level software. BCL builds on Bluespec SystemVerilog (BSV), for which a hardware compiler from Bluespec Inc. is commercially available, and has been augmented with extensions to support more efficient software generation. In BCL, the programmer specifies the entire design, including the partitioning, allowing the compiler to synthesize efficient software and hardware, along with transactors for communication between the partitions. The benefit of using a single language to express the entire design is that a programmer can easily experiment with many different hardware/software decompositions without needing to re-write the application code. Used together, the BCL and interface compilers represent a comprehensive solution to the task of integrating specialized hardware into an application.
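The dataflow-graph structure the abstract describes can be illustrated with a minimal sketch (not the thesis's toolchain): nodes that communicate only over FIFO edges, so a node can later be re-assigned to hardware without its neighbours changing. All names here are hypothetical.

```python
# Minimal sketch of a two-stage dataflow pipeline whose stages talk
# only through FIFO edges. Because each stage sees nothing but its
# FIFOs, either stage could be swapped for a hardware kernel without
# touching the other -- the decoupling the interface compiler relies on.
from queue import Queue
from threading import Thread

def stage(fn, inq, outq):
    # Each node reads tokens from its input FIFO, applies its
    # computation, and writes to its output FIFO; None ends the stream.
    while (tok := inq.get()) is not None:
        outq.put(fn(tok))
    outq.put(None)

src, mid, sink = Queue(), Queue(), Queue()
Thread(target=stage, args=(lambda x: x * 2, src, mid)).start()   # candidate for a hardware kernel
Thread(target=stage, args=(lambda x: x + 1, mid, sink)).start()  # stays in software

for v in [1, 2, 3]:
    src.put(v)
src.put(None)  # end-of-stream marker

out = []
while (v := sink.get()) is not None:
    out.append(v)
print(out)  # [3, 5, 7]
```

In a real codesign flow the FIFO on a hardware/software edge would be realised by generated glue logic and drivers rather than an in-process queue; the application code on either side stays the same.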
by Myron King.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
10

Nagaonkar, Yajuvendra. "FPGA-based Experiment Platform for Hardware-Software Codesign and Hardware Emulation." Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1294.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Bales, Jason M. "Multi-channel hardware/software codesign on a software radio platform." Fairfax, VA : George Mason University, 2008. http://hdl.handle.net/1920/3400.

Full text
Abstract:
Thesis (M.S.)--George Mason University, 2008.
Vita: p. 89. Thesis director: David D. Hwang. Submitted in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering. Title from PDF t.p. (viewed Mar. 9, 2009). Includes bibliographical references (p. 85-88). Also issued in print.
APA, Harvard, Vancouver, ISO, and other styles
12

Lu, Lipin. "Simulation Software and Hardware for Teaching Ultrasound." Scholarly Repository, 2008. http://scholarlyrepository.miami.edu/oa_theses/143.

Full text
Abstract:
Over the years, medical imaging modalities have evolved drastically. Accordingly, the need for conveying basic imaging knowledge to future specialists and other trainees becomes ever more crucial for devoted educators. Understanding the concepts behind each imaging modality requires a plethora of advanced physics, mathematics, mechanics and medical background. Absorbing all of this background information is a daunting task for any beginner. This thesis focuses on developing an ultrasound imaging education tutorial with the goal of easing the process of learning the principles of ultrasound. This tutorial utilizes three diverse approaches, including software and hardware applications. By presenting these methodologies from different perspectives, not only will the efficiency of the training be enhanced, but the trainee's understanding of crucial concepts will also be reinforced through repeated demonstration. The first goal of this thesis was developing an online medical imaging simulation system and deploying it on the website of the University of Miami. In order to construct an easy, understandable, and interactive environment without diluting the important aspects of ultrasound principles, interactive flash animations (developed with Macromedia Director MX) were used to present concepts via graphic-oriented simulations. The second goal was developing a stand-alone MATLAB program, intended to manipulate the intensity of the pixels in the image in order to simulate how ultrasound images are derived. Additionally, a GUI (graphical user interface) was employed to maximize the accessibility of the program and provide easily adjustable parameters. The GUI window enables trainees to see the changes in outcomes by altering different parameters of the simulation. The third goal of this thesis was to incorporate an actual ultrasound demonstration into the tutorial.
This was achieved by using a real ultrasound transducer with a pulse/receiver so that trainees could observe actual ultrasound phenomena and view the results on an oscilloscope. By manually adjusting the panels on the pulse/receiver console, basic A-mode ultrasound experiments can be performed with ease. By combining software and hardware simulations, the ultrasound education package presented in this thesis will help trainees more efficiently absorb the various concepts behind ultrasound.
APA, Harvard, Vancouver, ISO, and other styles
13

Chakaravarthy, Ravikumar V. "IP routing lookup: hardware and software approach." Texas A&M University, 2003. http://hdl.handle.net/1969.1/2459.

Full text
Abstract:
The work presented in this thesis is motivated by the dual goal of developing a scalable and efficient approach for IP lookup using both hardware and software approaches. The work involved designing algorithms and techniques to increase the capacity and flexibility of the Internet. The Internet is comprised of routers that forward Internet packets toward their destination addresses and the physical links that transfer data from one router to another. Optical technologies have improved significantly over the years and hence data link capacities have increased. However, packet forwarding rates at the router have failed to keep up with the link capacities. Every router performs a packet-forwarding decision on the incoming packet to determine the packet's next-hop router. This is achieved by looking up the destination address of the incoming packet in the forwarding table. Besides increased inter-packet arrival rates, the increasing routing table sizes and complexity of forwarding algorithms have made routers a bottleneck in packet transmission across the Internet. A number of solutions have been proposed that address this problem. The solutions can be categorized into hardware and software solutions. Various lookup algorithms have been proposed to tackle this problem using software approaches. These approaches have proved more scalable and practicable. However, they do not seem to be able to catch up with the link rates. The first part of my thesis discusses one such software solution for routing lookup. The hardware approaches today have been able to match the link speeds. However, these solutions are unable to keep up with the increasing number of routing table entries and the power consumed. The second part of my thesis describes a hardware-based solution that provides a bound on the power consumption and reduces the number of entries required to be stored in the routing table.
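The forwarding-table lookup described above is a longest-prefix match. As a minimal illustrative sketch (not taken from the thesis), a binary trie makes the per-packet decision explicit; the route names and prefixes below are invented for the example.

```python
# Illustrative sketch: a binary-trie forwarding table performing
# longest-prefix-match lookup, the per-packet decision a router makes.
import ipaddress

class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]  # subtree for next bit 0 / 1
        self.next_hop = None          # set when a route ends at this node

class ForwardingTable:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, prefix, next_hop):
        # Walk the prefix bits, creating nodes as needed.
        net = ipaddress.ip_network(prefix)
        bits = f"{int(net.network_address):032b}"[:net.prefixlen]
        node = self.root
        for b in bits:
            i = int(b)
            if node.children[i] is None:
                node.children[i] = TrieNode()
            node = node.children[i]
        node.next_hop = next_hop

    def lookup(self, addr):
        # Descend bit by bit, remembering the longest match seen so far.
        bits = f"{int(ipaddress.ip_address(addr)):032b}"
        node, best = self.root, None
        for b in bits:
            if node.next_hop is not None:
                best = node.next_hop
            node = node.children[int(b)]
            if node is None:
                return best
        return node.next_hop or best

table = ForwardingTable()
table.insert("10.0.0.0/8", "A")
table.insert("10.1.0.0/16", "B")
print(table.lookup("10.1.2.3"))  # B  (longest match is 10.1.0.0/16)
print(table.lookup("10.9.9.9"))  # A  (falls back to 10.0.0.0/8)
```

Hardware solutions of the kind the abstract mentions typically replace this pointer-chasing walk with TCAM or pipelined SRAM stages, trading power and table size against lookup rate.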
APA, Harvard, Vancouver, ISO, and other styles
14

Dadashikelayeh, Majid. "Integrated hardware-software diagnosis of intermittent faults." Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/50059.

Full text
Abstract:
Intermittent hardware faults are hard to diagnose as they occur non-deterministically. Hardware-only diagnosis techniques incur significant power and area overheads. On the other hand, software-only diagnosis techniques have low power and area overheads, but have limited visibility into many micro-architectural structures and hence cannot diagnose faults in them. To overcome these limitations, we propose a hardware-software integrated framework for diagnosing intermittent faults. The hardware part of our framework, called SCRIBE continuously records the resource usage information of every instruction in the processor, and exposes it to the software layer. SCRIBE has 0.95% on-chip area overhead, incurs a performance overhead of 12% and power overhead of 9%, on average. The software part of our framework is called SIED and uses backtracking from the program's crash dump to find the faulty micro-architectural resource. Our technique has an average accuracy of 84% in diagnosing the faulty resource, which in turn enables fine-grained deconfiguration with less than 2% performance loss after deconfiguration.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
15

Zeffer, Håkan. "Hardware–Software Tradeoffs in Shared-Memory Implementations." Licentiate thesis, Uppsala universitet, Avdelningen för datorteknik, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-86369.

Full text
Abstract:
Shared-memory architectures represent a class of parallel computer systems commonly used in the commercial and technical market. While shared-memory servers typically come in a large variety of configurations and sizes, advances in semiconductor technology have set the trend towards multiple cores per die and multiple threads per core. Software-based distributed shared-memory proposals received much attention in the 90s, but their promise of short time to market and low cost could not make up for their unstable performance; hence, these systems seldom made it to the market. However, with the trend towards chip multiprocessors, multiple hardware threads per core, and the increased cost of connecting multiple chips together to form large-scale machines, software coherence in one form or another might be a good intra-chip coherence solution. This thesis shows that data locality, software flexibility, and minimal processor support for read and write coherence traps can offer good performance while removing the hard limit on scalability. Our aggressive fine-grained software-only distributed shared-memory system exploits key application properties, such as locality and sharing patterns, to outperform a hardware-only machine on some benchmarks. On average, the software system is 11 percent slower than the hardware system when run on identical node and interconnect hardware. A detailed full-system simulation study of dual-core CMPs, with multiple hardware threads per core and minimal processor support for coherence traps, shows performance on average one percent slower than the hardware-only counterpart when some flexibility is taken into account. Finally, a functional full-system simulation study of an adaptive coherence-batching scheme shows that the number of coherence misses can be reduced by up to 60 percent and bandwidth consumption by up to 22 percent for both commercial and scientific applications.
APA, Harvard, Vancouver, ISO, and other styles
16

Kägi, Thomas. "System software support for possible hardware deficiency." Thesis, London Metropolitan University, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.567824.

Full text
Abstract:
Today, computer systems are applied in safety-critical areas such as military, aviation, intensive health care, industrial control, and space exploration. All these areas demand the highest possible reliability of operation. However, the impact of ionized particles and radiation on current semiconductor hardware inevitably leads to faults in the system. It is expected that such phenomena will be observed much more often in the future due to the ongoing miniaturisation of hardware structures. In this thesis we tackle the question of how system software should be designed in the event of such faults, and which fault tolerance features it should provide for the highest reliability. We also show how the system software interacts with the hardware to tolerate these faults. In a first step, we analyse and further develop the theory of fault tolerance to understand the different ways to increase the reliability of a system. Ultimately, the key is to use redundancy in all its different forms. We revise and further develop the general algorithm of fault tolerance (GAFT), with its three main processes of hardware testing, preparation for recovery, and the recovery procedure, as our approach to the design of a fault-tolerant system. For each of the three processes, we analyse the requirements and properties theoretically and give possible implementation scenarios. Based on the theoretical results, we derive an Oberon-based programming language with direct support for the three processes of GAFT. In the last part of the thesis, we analyse a simulator-based proof-of-concept implementation of a novel fault-tolerant processor architecture (ERRIC) and its newly developed runtime system, feature-wise and performance-wise.
APA, Harvard, Vancouver, ISO, and other styles
17

Egi, Norbert. "Software virtual routers on commodity hardware architectures." Thesis, Lancaster University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.539674.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Knutsen, Henrik Holenbakken. "Enhancing Software Portability with Hardware Parametrized Autotuning." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-24568.

Full text
Abstract:
Accelerator technology is to be used to enable continued scaling of numerical software. Performance limitations that result from moving an application from architecture to architecture are a problem, since the properties of architectures change faster than programs can be updated. To increase the portability of code, the program logic and the properties of the architecture must be expressed as parameters, so that the exploration of different machine-specific optimizations can be partially automated. This project seeks to investigate modern methods and tools for automating the migration of codebases between architectures without loss of performance. The theory will be applied to an application from the PRACE project.
APA, Harvard, Vancouver, ISO, and other styles
19

Bissland, Lesley. "Hardware and software aspects of parallel computing." Thesis, University of Glasgow, 1996. http://theses.gla.ac.uk/3953/.

Full text
Abstract:
Part 1 (Chapters 2, 3 and 4) is concerned with the development of hardware for multiprocessor systems. Some of the concepts used in digital hardware design are introduced in Chapter 2. These include the fundamentals of digital electronics, such as logic gates and flip-flops, as well as the more complicated topics of ROM and programmable logic. It is often desirable to change the network topology of a multiprocessor machine to suit a particular application. The third chapter describes a circuit switching scheme that allows the user to alter the network topology prior to computation. To achieve this, crossbar switches are connected to the nodes, and the host processor (a PC) programs the crossbar switches to make the desired connections between the nodes. The hardware and software required for this system are described in detail. Whilst this design allows the topology of a multiprocessor system to be altered prior to computation, the topology is still fixed during program run-time. Chapter 4 presents a system that allows the topology to be altered during run-time. The nodes send connection requests to a control processor, which programs a crossbar switch connected to the nodes. This system allows every node in a parallel computer to communicate directly with every other node. The hardware interface between the nodes and the control processor is discussed in detail, and the software on the control processor is also described. Part 2 (Chapters 5 and 6) of this thesis is concerned with the parallelisation of a large molecular mechanics program. Chapter 5 describes the fundamentals of molecular mechanics, such as the steric energy equation and its components, force field parameterisation, and energy minimisation. The implementation of a novel programming (COMFORT) and hardware (the BB08) environment into a parallel molecular mechanics (MM) program is presented in Chapter 6.
The structure of the sequential version of the MM program is detailed, before discussing the implementation of the parallel version using COMFORT and the BB08.
APA, Harvard, Vancouver, ISO, and other styles
20

Dave, Nirav Hemant 1982. "A unified model for hardware/software codesign." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68171.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (p. 179-188).
Embedded systems are almost always built with parts implemented in both hardware and software. Market forces encourage such systems to be developed with different hardware-software decompositions to meet different points on the price-performance-power curve. Current design methodologies make the exploration of different hardware-software decompositions difficult because such exploration is both expensive and introduces significant delays in time-to-market. This thesis addresses this problem by introducing the Bluespec Codesign Language (BCL), a unified language model based on guarded atomic actions for hardware-software codesign. The model provides an easy way of specifying which parts of the design should be implemented in hardware and which in software without obscuring important design decisions. In addition to describing BCL's operational semantics, we formalize the equivalence of BCL programs and use this to mechanically verify design refinements. We describe the partitioning of a BCL program via computational domains and the compilation of different computational domains into hardware and software, respectively.
by Nirav Dave.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
21

Cantu, Roy R. "An investigation of hardware and software mindsets." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38802.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (leaves 44-46).
by Roy R. Cantu, III.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
22

Puzović, Miloš. "Hardware/software interface for dynamic multicore scheduling." Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.648234.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Bappudi, Bhargav. "Example Modules for Hardware-software Co-design." University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1470043472.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Kulkarni, Pallavi Anil. "Hardware acceleration of software library string functions." Ann Arbor, Mich. : ProQuest, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1447245.

Full text
Abstract:
Thesis (M.S. in Computer Engineering)--S.M.U., 2007.
Title from PDF title page (viewed Nov. 19, 2009). Source: Masters Abstracts International, Volume: 46-03, page: 1577. Adviser: Mitch Thornton. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
25

Cuenca, Desiree. "Hurricane data collection hardware and analysis software." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE1000118.

Full text
Abstract:
Thesis (M.E.)--University of Florida, 2002.
Title from title page of source document. Document formatted into pages; contains vi, 119 p.; also contains graphics. Includes vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
26

Olson, John Thomas. "Hardware/software partitioning utilizing Bayesian belief networks." Diss., The University of Arizona, 2000. http://hdl.handle.net/10150/284156.

Full text
Abstract:
In heterogeneous systems design, partitioning of the functional specifications into hardware and software components is an important procedure. Often, a hardware platform is chosen and the software is mapped onto the existing partial solution, or the actual partitioning is performed in an ad hoc manner. The partitioning approach presented here is novel in that it uses Bayesian Belief Networks (BBNs) to categorize functional components into hardware and software classifications. The BBN's ability to propagate evidence permits the effects of a classification decision made about one function to be felt throughout the entire network. In addition, because BBNs have belief in hypotheses at their core, a quantitative measurement of the correctness of a partitioning decision is achieved. In this research, the motivation and background material are presented first. Next, a methodology for automatically generating the qualitative, structural portion of the BBN and the quantitative link matrices is given. Lastly, a case study of a programmable thermostat is developed to illustrate the BBN approach. The outcomes of the partitioning process are discussed and placed in a larger design context, called model-based Codesign.
APA, Harvard, Vancouver, ISO, and other styles
27

Lei, Li. "Hardware/Software Interface Assurance with Conformance Checking." PDXScholar, 2015. https://pdxscholar.library.pdx.edu/open_access_etds/2323.

Full text
Abstract:
Hardware/Software (HW/SW) interfaces are pervasive in modern computer systems. Most HW/SW interfaces are implemented by devices and their device drivers. Unfortunately, HW/SW interfaces are unreliable and insecure due to their intrinsic complexity and error-prone nature. Moreover, assuring HW/SW interface reliability and security is challenging. First, at the post-silicon validation stage, HW/SW integration validation is largely an ad-hoc and time-consuming process. Second, at the system deployment stage, transient hardware failures and malicious attacks make HW/SW interfaces vulnerable even after intensive testing and validation. In this dissertation, we present a comprehensive solution for HW/SW interface assurance over the system life cycle. This solution is composed of two major parts. First, our solution provides a systematic HW/SW co-validation framework which validates hardware and software together. Second, based on the co-validation framework, we design two schemes for assuring HW/SW interfaces over the system life cycle: (1) post-silicon HW/SW co-validation at the post-silicon validation stage; (2) HW/SW co-monitoring at the system deployment stage. Our HW/SW co-validation framework employs a key technique, conformance checking, which checks the interface conformance between the device and its reference model. Furthermore, property checking is carried out to verify system properties over the interactions between the reference model and the driver. Based on the conformance between the reference model and the device, properties that hold on the reference model/driver interface also hold on the device/driver interface. Conformance checking discovers inconsistencies between the device and its reference model, thereby validating device interface implementations on both sides. Property checking detects both device and driver violations of HW/SW interface protocols.
By detecting device and driver errors, our co-validation approach provides a systematic and efficient way to validate HW/SW interfaces. We developed two software tools which implement the two assurance schemes: DCC (Device Conformance Checker), a co-validation framework for post-silicon HW/SW integration validation; and CoMon (HW/SW Co-monitoring), a runtime verification framework for detecting bugs and malicious attacks across HW/SW interfaces. The two software tools led to the discovery of 42 bugs from four industry hardware devices, their device drivers, and their reference models. The results have demonstrated the significance of our approach in HW/SW interface assurance for industry applications.
APA, Harvard, Vancouver, ISO, and other styles
28

Johansson, Hanna. "Interdisciplinary Requirement Engineering for Hardware and Software Development : from a Hardware Development Perspective." Thesis, Linköpings universitet, Industriell miljöteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139097.

Full text
Abstract:
Complexity in products is increasing, and still there is a lack of a shared design language in interdisciplinary development projects. The research questions of the thesis concern differences and similarities in requirement handling and integration, current and future. Future integration is given more focus, with a pair of research questions highlighting obstacles and enablers for increased integration. Interviews were performed at four different companies with complex development environments whose products originated from different fields: hardware, software, and service. The main conclusions of the thesis are: time-frames in different development processes are very different and hard to unite; internal standards exist for overall processes, documentation, and modification handling; traceability is poorly covered in theory whilst being a big issue in companies; and companies understand that balancing and compromising of requirements is critical for a successful final product. The view on future increased interdisciplinary development is that there are more obstacles to overcome than enablers supporting it. Dependency is seen as an obstacle in this regard, and certain companies strive to decrease it. The thesis has resulted in general conclusions, and further studies are suggested into more specific areas such as requirement handling tools, requirement types, and traceability.
APA, Harvard, Vancouver, ISO, and other styles
29

Sheikh, Bilal Tahir. "Interdisciplinary Requirement Engineering for Hardware and Software Development - A Software Development Perspective." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-147886.

Full text
Abstract:
The software and hardware industries are growing day by day, which makes their development environments more complex. This situation has a huge impact on companies which have interdisciplinary development environments. To handle this situation, a common platform is required which can act as a bridge between hardware and software development to ease their tasks in an organized way. The research questions of the thesis aim to get information about differences and similarities in requirements handling, and their integration, from current and future perspectives. The future prospect of integration is considered a focus area. Interviews were conducted to get feedback from four different companies having complex development environments.
APA, Harvard, Vancouver, ISO, and other styles
30

Lindholm, Jeffery L. "Utilizing IXP1200 hardware and software for packet filtering." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://handle.dtic.mil/100.2/ADA429830.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, Dec. 2004.
Thesis Advisor(s): Wen, Su ; Gibson, John. "December 2004." Includes bibliographical references (p. 63-64). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
31

Freitas, Arthur. "Hardware/Software Co-Verification Using the SystemVerilog DPI." Universitätsbibliothek Chemnitz, 2007. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200700941.

Full text
Abstract:
During the design and verification of the Hyperstone S5 flash memory controller, we developed a highly effective way to use the SystemVerilog direct programming interface (DPI) to integrate an instruction set simulator (ISS) and a software debugger in logic simulation. The processor simulation was performed by the ISS, while all other hardware components were simulated in the logic simulator. The ISS integration allowed us to filter many of the bus accesses out of the logic simulation, accelerating runtime drastically. The software debugger integration freed both hardware and software engineers to work in their chosen development environments. Other benefits of this approach include testing and integrating code earlier in the design cycle and more easily reproducing, in simulation, problems found in FPGA prototypes.
APA, Harvard, Vancouver, ISO, and other styles
32

Junered, Marcus. "Enabling hardware technology for GNSS software radio research." Licentiate thesis, Luleå : Luleå University of Technology, 2007. http://epubl.ltu.se/1402-1757/2007/32/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

O'Nils, Mattias. "Specification, synthesis and validation of hardware/software interfaces." Doctoral thesis, Stockholm, 1999. http://www.lib.kth.se/abs99/onil0616.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Prokopski, Grzegorz. "Optimizing software-hardware interplay in efficient virtual machines." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66889.

Full text
Abstract:
To achieve the best performance, most computer languages are compiled, either ahead of time and statically, or dynamically during runtime by means of a Just-in-Time (JIT) compiler. Optimizing compilers are complex, however, and for many languages such as Ruby, Python, PHP, etc., an interpreter-based Virtual Machine (VM) offers a more flexible and portable implementation method, and moreover represents an acceptable trade-off between runtime performance and development costs. VM performance is typically maximized by use of the basic direct threading interpretation technique which, unfortunately, interacts poorly with modern branch predictors. More advanced techniques, like code-copying have been proposed [RS96,PR98,EG03c,Gag02] but have remained practically infeasible due to important safety concerns. On this basis we developed two cost-efficient, well-performing solutions. First, we designed a C/C++ language extension that allows programmers to express the need for the special safety guarantees of code-copying. Our low-maintenance approach is designed as an extension to a highly-optimizing, industry-standard GNU C Compiler (GCC), and we apply it to Java, OCaml, and Ruby interpreters. We tested the performance of these VMs on several architectures, gathering extensive analysis data for both software and hardware performance. Significant improvement is possible, 2.81 times average speedup for OCaml and 1.44 for Java on Intel 32-bit, but varies by VM and platform. We provide detailed analysis and design guidelines for helping developers predict and evaluate the benefit provided by safe code-copying. In our second approach we focused on alleviating the limited scope of optimizations in code-copying with an ahead-of-time-based (AOT) approach. A source-based approach to grouping bytecode instructions together allows for more extensive cross-bytecode optimizations, and so we develop a caching compilation server
APA, Harvard, Vancouver, ISO, and other styles
35

Wiangtong, Theerayod. "Hardware/software partitioning and scheduling for reconfigurable systems." Thesis, Imperial College London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.404907.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Dimitrov, Martin. "Architectural support for improving system hardware/software reliability." Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4533.

Full text
Abstract:
It is a great challenge to build reliable computer systems with unreliable hardware and buggy software. On one hand, software bugs account for as much as 40% of system failures and incur a high cost on the US economy, estimated at $59.5B a year. On the other hand, under the current trends of technology scaling, transient faults (also known as soft errors) in the underlying hardware are predicted to grow at least in proportion to the number of devices being integrated, which further exacerbates the problem of system reliability. We propose several methods to improve system reliability, both in terms of detecting and correcting soft errors and in facilitating software debugging. In our first approach, we detect instruction-level anomalies during program execution. The anomalies can be used to detect and repair soft errors, or can be reported to the programmer to aid software debugging. In our second approach, we improve anomaly detection for software debugging by detecting different types of anomalies as well as by removing false positives. While the anomalies reported by our first two methods are helpful in debugging single-threaded programs, they do not address concurrency bugs in multi-threaded programs. In our third approach, we propose a new debugging primitive which exposes the non-deterministic behavior of parallel programs and facilitates the debugging process. Our idea is to generate a time-ordered trace of events such as function calls/returns and memory accesses in different threads. In our experience, exposing the time-ordered event information to the programmer is highly beneficial for reasoning about the root causes of concurrency bugs.
ID: 028916717; System requirements: World Wide Web browser and PDF reader.; Mode of access: World Wide Web.; Thesis (Ph.D.)--University of Central Florida, 2010.; Includes bibliographical references (p. 110-119).
Ph.D.
Doctorate
School of Electrical Engineering and Computer Science
Engineering and Computer Science
APA, Harvard, Vancouver, ISO, and other styles
37

Allan, Malcolm. "Hardware/software strategies for the DC brushless motor." Thesis, Glasgow Caledonian University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.282722.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Kasture, Harshad. "A hardware and software architecture for efficient datacenters." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/109005.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 121-131).
Datacenters host an increasing amount of the world's compute, powering a diverse set of applications that range from scientific computing and business analytics to massive online services such as social media and online maps. Despite their growing importance, however, datacenters suffer from low resource and energy efficiency, using only 10-30% of their compute capacity on average. This overprovisioning adds billions of dollars annually to datacenter equipment costs, and wastes significant energy. This low efficiency stems from two sources. First, latency-critical applications, which form the backbone of user-facing, interactive services, need guaranteed low response times, often a few tens of milliseconds or less. By contrast, current systems are architected to maximize long-term, average performance (e.g., throughput over a period of seconds), and cannot provide the short-term performance guarantees needed by these applications. The stringent performance requirements of latency-critical applications make power management challenging, and make it hard to colocate them with other applications, as interference in shared resources hurts their responsiveness. Second, throughput-oriented batch applications, while easier to colocate, experience performance degradation as multiple colocated applications compete for shared resources on servers. This thesis presents novel hardware and software techniques that improve resource and energy efficiency for both classes of applications. First, Ubik is a dynamic cache partitioning technique that allows latency-critical and batch applications to safely share the last-level cache, maximizing batch throughput while providing latency guarantees for latency-critical applications. Ubik accurately predicts the transients that result when caches are reconfigured, and can thus mitigate latency degradation due to performance inertia, i.e., the loss of performance as an application transitions between steady states. 
Second, Rubik is a fine-grain voltage and frequency scaling scheme that quickly and accurately adapts to short-term load variations in latency-critical applications to minimize dynamic power consumption without hurting latency. Rubik uses a novel, lightweight statistical model that accurately predicts queued work, and accounts for variations in per-request compute requirements as well as queuing delays. Further, Rubik improves system utilization by allowing latency-critical and batch applications to safely share cores, using frequency scaling to mitigate performance degradation due to interference in per-core resources such as private caches. Third, Shepherd is a cluster scheduler that uses per-node cache-partitioning decisions to drive application placement across machines. Shepherd uses detailed application profiling data to partition the last-level cache on each machine and to predict the performance of colocated applications, and uses randomized search to find a schedule that maximizes throughput. A common theme across these techniques is the use of lightweight, general-purpose architectural support to provide performance isolation and fast state transitions, coupled with intelligent software runtimes that configure the hardware to meet application performance requirements. Unlike prior work, which often relies on heuristics, these techniques use accurate analytical modeling to guide resource allocation, boosting efficiency while satisfying applications' disparate performance goals. Ubik allows latency-critical and batch applications to be safely and efficiently colocated, improving batch throughput by an average of 17% over a static partitioning scheme while guaranteeing tail latency. Rubik further allows these two classes of applications to share cores, reducing datacenter power consumption by up to 31% while using 41% fewer machines over a scheme that segregates these applications. 
Shepherd improves batch throughput by 39% over a randomly scheduled, unpartitioned baseline, and significantly outperforms scheduling-only and partitioning-only approaches.
by Harshad Kasture.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
39

Salehi-Abari, Omid. "Software-hardware systems for the Internet-of-Things." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115767.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages [187]-201).
Although interest in connected devices has surged in recent years, barriers still remain in realizing the dream of the Internet of Things (IoT). The main challenge in delivering IoT systems stems from a huge diversity in their demands and constraints. Some applications work with small sensors and operate using minimal energy and bandwidth. Others use high-data-rate multimedia and virtual reality systems, which require multiple-gigabits-per-second throughput and substantial computing power. While both extremes stress the computation, communications, and energy resources available to the underlying devices, each intrinsically requires different solutions to satisfy its needs. This thesis addresses both bandwidth and energy constraints by developing custom software-hardware systems. To tackle the bandwidth constraint, this thesis introduces three systems. First, it presents AirShare, a synchronized abstraction to the physical layer, which enables the direct implementation of diverse kinds of distributed protocols for loT sensors. This capability results in a much higher throughput in today's IoT networks. Then, it presents Agile-Link and MoVR, new millimeter wave devices and protocols which address two main problems that prevent the adoption of millimeter wave frequencies in today's networks: signal blockage and beam alignment. Lastly, this thesis shows how these systems enable new IoT applications, such as untethered high-quality virtual reality. To tackle the energy constraint, this thesis introduces a VLSI chip, which is capable of performing a million-point Fourier transform in real-time, while consuming 40 times less power than prior fast Fourier transforms. Then, it presents Caraoke, a small, low-cost and low-power sensor, which harvests its energy from solar and enables new smart city applications, such as traffic management and smart parking.
by Omid Salehi-Abari.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
40

Liucheng, Miao, Su Jiangang, and Feng Bingxuan. "HARDWARE-INDEPENDENT AND SOFTWARE-INDEPENDENT IN SYSTEM DESIGN." International Foundation for Telemetering, 2000. http://hdl.handle.net/10150/606803.

Full text
Abstract:
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California
Today, open technology has been widely used in computing and other fields, including software and hardware. “Open technology” for hardware and software can be called “Hardware-Independent and Software-Independent” (for example, an open operating system in computing). But in the telemetry field, system design based on “Hardware-Independent and Software-Independent” principles is still at a primary stage. In this paper, the following questions will be discussed: a. Why telemetry system design needs “open technology”; b. How to accomplish system design based on “Hardware-Independent and Software-Independent”; c. The application prospects of “Hardware-Independent and Software-Independent” in system design.
APA, Harvard, Vancouver, ISO, and other styles
41

Powell, Richard, and Jeff Kuhn. "HARDWARE- VS. SOFTWARE-DRIVEN REAL-TIME DATA ACQUISITION." International Foundation for Telemetering, 2000. http://hdl.handle.net/10150/608291.

Full text
Abstract:
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California
There are two basic approaches to developing data acquisition systems. The first is to buy or develop acquisition hardware and to then write software to input, identify, and distribute the data for processing, display, storage, and output to a network. The second is to design a system that handles some or all of these tasks in hardware instead of software. This paper describes the differences between software-driven and hardware-driven system architectures as applied to real-time data acquisition systems. In explaining the characteristics of a hardware-driven system, a high-performance real-time bus system architecture developed by L-3 will be used as an example. This architecture removes the bottlenecks and unpredictability that can plague software-driven systems when applied to complex real-time data acquisition applications. It does this by handling the input, identification, routing, and distribution of acquired data without software intervention.
APA, Harvard, Vancouver, ISO, and other styles
42

Akbari, Kazem. "A new neurocomputing approach: Software and hardware designs." Case Western Reserve University School of Graduate Studies / OhioLINK, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=case1058207460.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

TIWARI, ANURAG. "HARDWARE/SOFTWARE CO-DEBUGGING FOR RECONFIGURABLE COMPUTING APPLICATIONS." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1011816501.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Wee, Sewook. "Atlas : software development environment for hardware transactional memory /." May be available electronically:, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Lindholm, Jeffery L. "Utilizing IXP1200 hardware and software for packet filtering /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Dec%5FLindholm.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Reis, João Gabriel. "A framework for predictable hardware/software component reconfiguration." reponame:Repositório Institucional da UFSC, 2016. https://repositorio.ufsc.br/xmlui/handle/123456789/173819.

Full text
Abstract:
Dissertação (mestrado) - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia Elétrica, Florianópolis, 2016.
Made available in DSpace on 2017-02-28T04:10:39Z (GMT). No. of bitstreams: 1 344671.pdf: 991636 bytes, checksum: 2e9b1460f30d38b6d198a17b08fe6d42 (MD5) Previous issue date: 2016
Abstract : Rigid partitions of components or modules in a hardware/software co-design flow can lead to suboptimal choices in embedded systems with dynamic or unpredictable runtime requirements. Field-Programmable Gate Array (FPGA) reconfiguration can help systems cope with dynamic non-functional requirements such as performance and power, hardware defects due to Negative-Bias Temperature Instability (NBTI) and Process, Voltage and Temperature (PVT) variations, or application requirements unforeseen at design time. This work proposes a framework for reconfigurable components whereby the reconfiguration of a component implementation is performed transparently without user intervention. The reconfiguration process is confined to the system's idle time without interfering with, or being interfered with by, other activities occurring in the system or even peripherals performing I/O. For components with multiple implementations, our approach opportunistically and speculatively monitors system load and performance parameters to check when the reconfiguration can start. The framework differs from previous approaches in its syntax and semantics for reconfigurable components, which are preserved across the multiple implementations in different substrates, and in its reconfiguration process, which can be split into multiple steps. To quantify the impact of I/O interference on FPGA reconfiguration, we measured the execution time when loading bitstreams containing hardware component implementations from memory to the FPGA reconfiguration interface with multiple peripherals performing I/O in parallel. Moreover, a Private Automatic Branch Exchange (PABX) case study investigated the deployment of reconfigurable components in a scenario with timing constraints. A reconfiguration policy for the PABX components was proposed to deal with the unpredictable number of calls it receives by using reconfigurable hardware resources without degrading voice quality due to reconfiguration.
Furthermore, we explored trade-offs between power consumption, execution time, and accuracy in a set of reconfigurable mathematical components.
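The idle-time, multi-step reconfiguration policy described in this abstract can be sketched as follows. This is a minimal illustrative model, not the thesis's framework: the class name, the step count, and the load threshold are all invented here, and "reconfiguration" is reduced to counting completed idle-time steps.

```python
# Hypothetical sketch of an idle-time reconfiguration policy: a component has
# several implementations, and switching to a new one proceeds in steps that
# are only taken while the system appears idle. Load spikes pause the switch.

class ReconfigurableComponent:
    def __init__(self, implementations, steps_per_reconfig=4, load_threshold=0.2):
        self.implementations = implementations   # e.g. ["sw", "fpga_fast"]
        self.active = implementations[0]
        self.target = None
        self.steps_done = 0
        self.steps_total = steps_per_reconfig
        self.load_threshold = load_threshold

    def request(self, target):
        """Ask for a new implementation; the switch happens lazily."""
        if target != self.active:
            self.target = target
            self.steps_done = 0

    def tick(self, system_load):
        """Called periodically; advances reconfiguration only in idle time."""
        if self.target is None or system_load > self.load_threshold:
            return self.active                   # busy: defer, keep old implementation
        self.steps_done += 1                     # one partial reconfiguration step
        if self.steps_done >= self.steps_total:
            self.active, self.target = self.target, None
        return self.active

comp = ReconfigurableComponent(["sw", "fpga_fast"])
comp.request("fpga_fast")
loads = [0.9, 0.1, 0.8, 0.1, 0.1, 0.1]           # load spikes at ticks 1 and 3
history = [comp.tick(load) for load in loads]
print(history)   # stays "sw" until four idle ticks complete, then "fpga_fast"
```

The switch completes only after four idle ticks have accumulated, so the busy ticks simply delay it rather than aborting it, mirroring the multi-step process described above.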

Static partitioning of components or modules during hardware/software co-design can lead to unsatisfactory choices in embedded systems with dynamic and unpredictable runtime requirements. Dynamic reconfiguration of Field-Programmable Gate Arrays (FPGAs) can help systems adapt to dynamic non-functional requirements such as performance and power consumption, hardware defects due to the Negative-Bias Temperature Instability (NBTI) phenomenon and Process, Voltage and Temperature variations, or application requirements not taken into account at design time. This work proposes a framework for reconfigurable components in which the reconfiguration of a component's implementation is performed transparently and without user intervention. The reconfiguration process is confined to the system's idle time, without interfering with, or suffering interference from, other activities or peripherals performing I/O operations. For components with multiple implementations, our approach speculatively monitors system load and performance counters to choose the moment at which reconfiguration should start. The framework differs from previous work in its syntax and semantics for reconfigurable components, which are preserved across multiple implementations and different substrates, and in a reconfiguration process that can be divided into several steps. To quantify the impact of I/O interference on FPGA reconfiguration, we measured the execution time of loading bitstreams containing hardware component implementations from memory to the FPGA reconfiguration interface while several peripherals performed I/O operations in parallel. In addition, a Private Automatic Branch Exchange (PABX) case study investigated the use of reconfigurable components in a scenario with timing requirements.
A reconfiguration policy for the PABX components was proposed to handle the unpredictable number of incoming calls using reconfigurable resources, without degrading voice playback quality during reconfiguration. We also explored the trade-offs between power consumption, execution time, and result accuracy in a set of components implementing mathematical operations.
APA, Harvard, Vancouver, ISO, and other styles
47

Teng, Chang-Fang, and 鄧昌芳. "The economical analysis of hardware and software firms to develop software department and hardware department." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/00329191132150560405.

Full text
Abstract:
Master's thesis
Fo Guang University
Department of Economics
96
Since 1989 the GPS industry has been moving onto the international stage: GPS applications, once restricted to professional use, are now aimed at the consumer mass market. With GPS receivers being attached to all kinds of electronic products, the industry has considerable room to grow. GPS hardware and software firms therefore consider not only their core business but also whether developing an additional department would let them reach economies of scale, reduce product cost, and expand the scope of the company's operations. This paper models the problem as a normal-form strategic game. Through the game we elaborate and evaluate the advantages and disadvantages of each choice in the GPS industry, and identify which factors influence whether hardware and software firms choose to develop another department to increase the company's profit.
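The normal-form game at the heart of such an analysis can be illustrated with a small sketch. The 2x2 payoff matrix below is entirely made up; only the structure reflects the abstract: two firms (one hardware, one software), each choosing whether to develop the other type of department, with a search for pure-strategy Nash equilibria.

```python
# Illustrative 2x2 normal-form game with invented payoffs. Each firm chooses
# "develop" (open the other type of department) or "stay" (keep its core
# business only); we enumerate strategy profiles and keep those where neither
# firm can gain by deviating unilaterally (pure-strategy Nash equilibria).

strategies = ["develop", "stay"]
# payoffs[(hw_choice, sw_choice)] = (hw_firm_payoff, sw_firm_payoff) -- assumed
payoffs = {
    ("develop", "develop"): (2, 2),
    ("develop", "stay"):    (4, 1),
    ("stay",    "develop"): (1, 4),
    ("stay",    "stay"):    (3, 3),
}

def pure_nash(payoffs, strategies):
    equilibria = []
    for hw in strategies:
        for sw in strategies:
            u_hw, u_sw = payoffs[(hw, sw)]
            # best response checks: no unilateral deviation improves the payoff
            hw_best = all(payoffs[(alt, sw)][0] <= u_hw for alt in strategies)
            sw_best = all(payoffs[(hw, alt)][1] <= u_sw for alt in strategies)
            if hw_best and sw_best:
                equilibria.append((hw, sw))
    return equilibria

print(pure_nash(payoffs, strategies))
```

With these particular numbers, mutual development is the only pure-strategy equilibrium; changing the payoffs changes which outcome is stable, which is exactly the kind of sensitivity such a model is used to explore.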
48

Carlson, Ryan L. "A study of hardware/software multithreading." Thesis, 1998. http://hdl.handle.net/1957/33562.

Full text
Abstract:
As the design of computers advances, two important trends have surfaced: the exploitation of parallelism and the mitigation of memory latency. Into these two trends has come the Multithreaded Virtual Processor (MVP). Based on a standard superscalar core, the MVP exploits Instruction Level Parallelism (ILP) and, using the concepts of multithreading, further exploits Thread Level Parallelism (TLP) in program code. By combining hardware and software multithreading techniques into a new hybrid model, the MVP can use fast hardware context-switching techniques together with both hardware and software scheduling. The hybrid yields a processor capable of exploiting long-latency memory operations to increase parallelism, while introducing only minimal software overhead and hardware design changes. This thesis explores the MVP model and simulator and provides results that illustrate MVP's effectiveness and support its inclusion in future processor designs. Additionally, the thesis shows that MVP's effectiveness is governed by four main considerations: (1) the data set size relative to the cache size, (2) the number of hardware contexts/threads supported, (3) the amount of locality within the data sets, and (4) the amount of exploitable parallelism within the algorithms.
Graduation date: 1999
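The switch-on-miss idea underlying this kind of multithreading can be illustrated with a toy discrete simulation. This is not the MVP simulator: the op streams, the 10-cycle miss latency, and the zero-cost context switch are simplifying assumptions made here for illustration only.

```python
# Toy model of hardware multithreading: when a thread issues a long-latency
# memory operation, the processor context-switches to another ready thread,
# so the miss latency is overlapped with useful compute work.

MISS_LATENCY = 10  # cycles a "mem" op keeps its thread blocked (assumed)

def run_blocking(threads):
    """Single-context baseline: every memory op stalls the whole pipeline."""
    return sum(MISS_LATENCY if op == "mem" else 1
               for ops in threads for op in ops)

def run_mvp_style(threads):
    """Switch-on-miss: a miss parks its thread; a ready thread runs instead."""
    n = len(threads)
    pc = [0] * n            # next op index per thread
    ready_at = [0] * n      # cycle at which each thread becomes runnable again
    cycle = 0
    while any(pc[i] < len(threads[i]) for i in range(n)):
        runnable = [i for i in range(n)
                    if pc[i] < len(threads[i]) and ready_at[i] <= cycle]
        if not runnable:    # all remaining threads are waiting on memory
            cycle = min(ready_at[i] for i in range(n) if pc[i] < len(threads[i]))
            continue
        t = runnable[0]
        op = threads[t][pc[t]]
        pc[t] += 1
        if op == "mem":
            ready_at[t] = cycle + MISS_LATENCY  # park the thread, switch away
        else:
            cycle += 1                          # one compute cycle retires
    return max([cycle] + ready_at)

# two threads, each: one miss followed by four compute ops (invented workload)
threads = [["mem", "cmp", "cmp", "cmp", "cmp"],
           ["mem", "cmp", "cmp", "cmp", "cmp"]]
print(run_blocking(threads), run_mvp_style(threads))   # 28 vs 18 cycles
```

Even in this tiny example, overlapping the two misses with each other and with compute work cuts the cycle count from 28 to 18, the same effect the abstract attributes to exploiting long-latency memory operations.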
49

Yeh, Jinn-Wang, and 葉進旺. "The Study on Hardware/Software CoDesign." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/02143895370729762762.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Electronics Engineering
88
This thesis investigates an effective approach to the system-level design of multimedia signal processing applications. To design these systems, we use a hardware/software codesign approach that keeps the hardware and software designs tightly coupled throughout the design process. Given a specification of system functionality and constraints, we propose a model to describe the system. After the model has been analyzed, partitioning determines which parts of the system functionality are delegated to application-specific hardware and which run as software on the processor. Based on the result of hardware/software partitioning, we determine the optimal implementation of the system. We also explore issues concerning system synchronization and the implementation of the hardware/software interface that accommodates communication between the various parts of the system. The proposed hardware/software codesign approach makes it possible to build a time-constrained signal processing system on a chip using programmable parts and application-specific units. We use a media processor design as an example; the verification method and simulation results are also given in this thesis.
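Hardware/software partitioning of this kind is often formulated as selecting, under a hardware area budget, the tasks whose custom-hardware implementation saves the most execution time. The greedy sketch below illustrates that generic formulation, not the thesis's actual algorithm; the task names, times, and areas are invented.

```python
# Generic greedy hardware/software partitioning sketch: move to hardware the
# tasks with the best time-saved-per-area ratio until the area budget is spent.
# All numbers are invented for illustration.

tasks = {
    #  name:     (sw_time, hw_time, hw_area)
    "dct":       (100, 10, 40),
    "quantize":  (30,  8,  25),
    "huffman":   (60,  20, 50),
    "control":   (10,  9,  30),
}

def partition(tasks, area_budget):
    # rank tasks by (time saved in hardware) / (area cost), best first
    ranked = sorted(tasks.items(),
                    key=lambda kv: (kv[1][0] - kv[1][1]) / kv[1][2],
                    reverse=True)
    hw, used = [], 0
    for name, (sw_t, hw_t, area) in ranked:
        if used + area <= area_budget and hw_t < sw_t:
            hw.append(name)
            used += area
    sw = [n for n in tasks if n not in hw]
    total = sum(tasks[n][1] for n in hw) + sum(tasks[n][0] for n in sw)
    return sorted(hw), sorted(sw), total

hw, sw, total = partition(tasks, area_budget=70)
print(hw, sw, total)   # DCT and quantization go to hardware; total time 88
```

The greedy ratio heuristic is a common baseline; real codesign flows refine it with exact methods (e.g. ILP) and with the communication and synchronization costs that the abstract discusses.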
50

Yen, Wen-Chi, and 顏文祺. "A Hardware/Software-Concurrent JPEG2000 Encoder." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/24263295495852495855.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
92
We implement a JPEG2000 encoder based on an internally developed hardware/software codesign methodology, emphasizing the concurrent execution of hardware accelerator IPs and software running on the CPU. On a programmable SoC platform, sequential hardware acceleration of DWT and EBCOT Tier-1 gives a 70% reduction in total execution time; the proposed concurrent scheme achieves an additional 14% saving. We also describe our experience in bringing up such a system.
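The gain from concurrency can be illustrated with a back-of-the-envelope pipeline model: running the accelerators and CPU stages sequentially sums the stage times, while overlapping the software stage of one tile with the accelerators working on the next hides part of the latency. The per-stage times below are assumed for illustration and do not come from the thesis; its measured figures are the 70% and 14% quoted above.

```python
# Toy pipeline model of a hardware/software-concurrent encoder. Stage times
# (ms per tile) are invented: CPU preprocessing, hardware DWT, hardware EBCOT
# Tier-1, then software Tier-2 / rate allocation.

cpu_pre   = 2.0    # color transform, tiling (software)
hw_dwt    = 3.0    # DWT on accelerator
hw_tier1  = 6.0    # EBCOT Tier-1 on accelerator
cpu_tier2 = 5.0    # rate allocation + Tier-2 (software)

def sequential(n_tiles):
    """CPU and accelerators take turns: stage times simply add up."""
    return n_tiles * (cpu_pre + hw_dwt + hw_tier1 + cpu_tier2)

def concurrent(n_tiles):
    """Software Tier-2 of tile i overlaps the accelerators working on tile i+1."""
    hw = hw_dwt + hw_tier1
    time = cpu_pre + hw                        # fill the pipeline: first tile
    for _ in range(n_tiles - 1):
        time += max(cpu_pre + hw, cpu_tier2)   # steady state: stages overlap
    return time + cpu_tier2                    # drain: last tile's Tier-2

n = 10
print(sequential(n), concurrent(n))
```

With these made-up numbers, overlapping trims a ten-tile run from 160 ms to 115 ms; the achievable saving is bounded by the shorter of the overlapped stages, which is why the concurrent scheme in the abstract adds a further 14% rather than doubling throughput.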
