Theses on the topic "Real time performance"

Consult the top 50 theses for your research on the topic "Real time performance".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Huh, Eui-Nam. "Certification of real-time performance for dynamic, distributed real-time systems". Ohio : Ohio University, 2002. http://www.ohiolink.edu/etd/view.cgi?ohiou1178732244.

Full text
2

Rajkhowa, Priyanka. "Exploiting soft computing for real time performance". College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/3928.

Full text
Abstract
Thesis (M.S.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Dept. of Electrical and Computer Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
3

Wikensjö, Andreas. "Performance Optimisation with a Real-Time Database". Thesis, Uppsala University, Department of Information Technology, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-111168.

Full text
Abstract

Embedded control systems are gaining an increasing amount of responsibility in today's vehicles and industrial machines. As mechanical components are replaced by software, the complexity of control systems and the amount of data they are responsible for greatly increase. Generally there are two approaches to dealing with this huge amount of information, but both have flaws which can reduce system performance, or in the worst case scenario cause fatal system failures with potential to cause loss of human lives.

The two approaches are creation of large purpose-built data structures with shared variables, and implementation of a database. The first is often not scalable, becomes tremendously complex, and has high development costs, while the latter has the common downside that many databases are simply too slow. This study will explore the possibilities of using a real-time database to overcome these issues.

As part of one of their control systems, CC Systems have developed the Diagnostic Runtime Engine (DRE) which keeps track of the state of the system. The database currently used in the DRE is too slow and this thesis project aims to replace it with a Mimer SQL Real-time Edition database. This real-time database utilises a unique concept called database pointers to access data in hard real-time. Although the real-time database comes with some issues and limitations of its own, this study shows that most of them can be worked around rather easily. Implementation of the real-time database would allow the DRE to handle incoming signals more than 50 times faster than the demands, as well as heavily decrease the complexity of the DRE's source code. Mimer SQL Real-time Edition works entirely with in-memory copies of database tables, and the tables must be explicitly saved, or flushed, to the disk. In order to optimise the flush we need to know roughly how often we can expect incoming signals, but such information is currently not available. Instead this thesis draws up some important criteria that should be considered when optimising the flush performance.

The conclusion of this thesis is that implementation of Mimer SQL Real-time Edition would be beneficial for the Diagnostic Runtime Engine.
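
The abstract describes the "database pointer" concept of Mimer SQL Real-time Edition only at a high level, so the sketch below is a generic, hypothetical illustration of the idea rather than the Mimer API: a record is located and bound once outside the time-critical path, and hard real-time reads and writes then go through the pre-resolved pointer with no lookup or query parsing. All names (rt_bind, signal_record, and so on) are invented for this sketch.

```c
/* Generic illustration of the "database pointer" idea: bind once, then
 * access a fixed in-memory record in O(1) on the hard real-time path.
 * All names are hypothetical; this is not the Mimer SQL Real-time API. */
#include <stdio.h>
#include <string.h>

typedef struct {
    char   name[32];
    double value;
} signal_record;

#define TABLE_SIZE 64
static signal_record signal_table[TABLE_SIZE];   /* in-memory table copy */

/* Non-real-time part: resolve the record by name once (like preparing a
 * database pointer). May search, allocate, parse - anything slow. */
static signal_record *rt_bind(const char *name)
{
    for (int i = 0; i < TABLE_SIZE; ++i)
        if (strcmp(signal_table[i].name, name) == 0)
            return &signal_table[i];
    return NULL;
}

/* Hard real-time part: constant-time access through the bound pointer,
 * no lookup, no SQL parsing, no dynamic memory. */
static inline void   rt_write(signal_record *rec, double v) { rec->value = v; }
static inline double rt_read(const signal_record *rec)      { return rec->value; }

int main(void)
{
    strcpy(signal_table[3].name, "engine_rpm");
    signal_record *rpm = rt_bind("engine_rpm");   /* done at init time   */
    if (!rpm) return 1;
    rt_write(rpm, 1800.0);                        /* done on the RT path */
    printf("engine_rpm = %.1f\n", rt_read(rpm));
    return 0;
}
```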

4

Palomeque, Carlos. "Real-Time Visualization of Construction Equipment Performance". Thesis, Linköpings universitet, Medie- och Informationsteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-110903.

Full text
Abstract
This thesis is a proof-of-concept project that aims at modifying and reusing existing communication protocols for wireless vehicle-to-vehicle communication in order to build a prototype of a real-time graphical application that runs in an embedded environment. The application is a 2D visualization of the flow of material at a quarry and is built on top of existing communication protocols that enable wireless vehicle-to-vehicle communication according to the 802.11p standard for intelligent transport solutions. These communication protocols have already been used within the Volvo Group in other research projects, but not in the context of a real-time graphical 2D visualization. The application runs on an ALIX embedded motherboard which, combined with the necessary hardware, represents one node of the communication network. The visualization monitors the position of every active node in the network and the flow of material between material locations and the crusher that processes the material at the quarry. The visualization is implemented in C/C++ using the Qt 4.6.2 Graphics View framework.
5

Fichten, Mark Alan and David Howard Jennings. "Meaningful real-time graphics workstation performance measurements". Thesis, Monterey, California. Naval Postgraduate School, 1988. http://hdl.handle.net/10945/23298.

Full text
6

Furht, Borko, David Gluch and David Joseph. "PERFORMANCE MEASUREMENTS OF REAL-TIME COMPUTER SYSTEMS". International Foundation for Telemetering, 1990. http://hdl.handle.net/10150/613489.

Full text
Abstract
International Telemetering Conference Proceedings / October 29-November 02, 1990 / Riviera Hotel and Convention Center, Las Vegas, Nevada
The performance of general purpose computers is typically measured in terms of Millions of Instructions per Second (MIPS) or Millions of Floating-Point Operations per Second (MFLOPS). Standard benchmark programs such as Whetstone, Dhrystone, and Linpack typically measure CPU speed in a single-task environment. However, a computer may have high CPU performance, but poor real-time capabilities. Therefore, there is a need for performance measures specifically intended for real-time computer systems. This paper presents four methodologies, related metrics and benchmarks for objectively measuring real-time performance: (a) Tri-Dimensional Measure, (b) Process Dispatch Latency Time, (c) Rhealstone Metric, and (d) Vanada Benchmark. The proposed methodologies and related measures are applied in the performance evaluation of several real-time computer systems, and the results obtained are presented.
7

Bihari, Thomas Edward. "Adapting real-time software for reliable performance /". The Ohio State University, 1987. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487326511714772.

Full text
8

Amin, Issam. "Simulation and performance analysis of time-critical real-time LANs". Thesis, University of Sussex, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.407752.

Full text
Abstract
Simulation and performance analysis of wired and wireless Time-Critical Real-Time (TCRT) LANs is the subject of this research work. Special emphasis has been placed on the deterministic Medium Access Control (MAC) method as defined by the Institute of Electrical and Electronics Engineers (IEEE) 802.4 Token Passing Bus. Another very popular MAC method based on the IEEE 802.11 wireless LAN standard has also been investigated. The demands imposed by TCRT LANs are of a time-bounded nature which require messages to be delivered on time or within the allowable delay defined by the applications they serve. A number of metrics for measuring network performance have been proposed for use throughout the thesis; specific important parameters, such as response time, waiting time, access time, and delay, which greatly influence the performance of TCRT LANs, have been analysed and examined in great detail. Fine tuning of these parameters was carried out to observe the influence they have on network performance. Distinct approaches for measuring network performance are proposed and analysed. An analytical approach using mathematical models to determine network performance for different real-time process control applications is analysed and tested. The advantages and limitations of this approach are identified and evaluated for real-time applications. The second approach is modelling by simulation employing industry-standard simulation tools, namely Network II.5 and COMNET III. These simulation tools provide an effective platform for studying time-critical applications. Simulation models representing process control applications were created. A number of practical simulation models characterising real-time manufacturing cells have been modelled, analysed, and tested. Both simulation tools are used to model different network scenarios utilising the strengths and advantages of each. Simulation results and comparison of specific models were carried out. Network II.5 is used to simulate IEEE 802.4 and COMNET III is used to simulate IEEE 802.11. A third approach based on an empirical network is investigated. Real data were collected and fed into a simulation model representing this practical network. Results from the simulation models were analysed and compared to evaluate the performance of the practical network and verify the simulation model. This cross-approach concept is found to be a very important way of studying the performance of real-time LANs. A number of real-time network applications and scenarios representing process control applications were modelled using the various techniques. A generic network application was modelled to permit a comparison of the three methods. Most of the analyses are modelled using the simulation approach alone. This is due to the complexity and limitations involved in mathematically modelling dynamically changing situations in real-time applications. However, this approach was adopted only after having verified the correctness of the simulation models by cross-referencing the results obtained from the mathematical and simulation approaches as applied to the generic (base) model. The simulation models enabled the analysis of the performance of IEEE 802.4 and IEEE 802.11 media access network protocols used in real-time environments. Hypothetical and actual network scenarios were considered to fully investigate the effects of varying the various parameters on network performance.
This research has clearly demonstrated that real-time networks impose different timing restrictions based on the applications they serve. The type of traffic carried by the real-time network plays a major role in influencing the choice of network protocols. Use of wireless networks in real-time environments based on IEEE 802.11 under heavy load is ruled out under the currently available proposals; however, they could be used under low loading conditions serving small process control and manufacturing cells with a limited number of processing elements. On the other hand, real-time deterministic networks using an access protocol based on IEEE 802.4 are found to be suitable for the most demanding network loading conditions and configurations. Simplifying the management functions of the IEEE 802.4 protocol reduces its complexity and costs of deployment without undermining its performance. This in turn will encourage more vendors to adopt the IEEE 802.4 standard for implementation in TCRT applications.
9

Djuric, Natasa. "Real-time supervision of building HVAC system performance". Doctoral thesis, Norwegian University of Science and Technology, Faculty of Engineering Science and Technology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-2215.

Full text
Abstract

This thesis presents techniques, generated using simulation-based tools and real data, for improving HVAC system performance in existing buildings. One of the aims has been to research the needs and possibilities for assessing and improving building HVAC system performance. In addition, this thesis aims at an advanced utilization of the building energy management system (BEMS) and the provision of useful information to building operators using simulation tools.

Buildings are becoming more complex systems with many elements, while BEMS provide many data about the building systems. There are, however, many faults and issues in building performance, but there are legislative and cost-benefit forces induced by energy savings. Therefore, both BEMS and the computer-based tools have to be utilized more efficiently to improve building performance.

The thesis consists of four main parts that can be read separately. The first part explains the term commissioning and the working principle of commissioning tools, based on literature reviews. The second part presents practical experiences and issues encountered through the work on this study. The third part deals with the application of computer-based tools in design and operation. This part is divided into two chapters: the first deals with improvement of the design, and the second with improvement of the control strategies. The last part of the thesis gives several rules for fault diagnosis developed using simulation tools. In addition, this part aims at a practical explanation of the faults in building HVAC systems.

The practical background for the thesis was obtained through two surveys. The first survey was carried out with the aim of finding the commissioning targets in Norwegian building facilities. In that way, an overview of the most typical buildings, HVAC equipment, and their related problems was obtained. An on-site survey was carried out on an example building, which was beneficial for introducing the building maintenance structure and the real hydronic heating system faults.

Coupled simulation and optimization programs (EnergyPlus and GenOpt) were utilized for improving building performance. These tools were used for improving the design and the control strategies in the HVAC systems. Buildings with a hydronic heating system were analyzed for the purpose of improving the design. Since there are issues in using the optimization tool, GenOpt, a few procedures for different practical problems have been suggested. The optimization results show that the choice of the optimization functions significantly influences the design parameters for the hydronic heating system.

Since building construction and equipment characteristics are changing over time, there is a need to find new control strategies which can meet the actual building demand. This problem has been also elaborated on by using EnergyPlus and GenOpt. The control strategies in two different HVAC systems were analyzed, including the hydronic heating system and the ventilation system with the recovery wheel. The developed approach for the strategy optimization includes: involving the optimization variables and the objective function and developing information flow for handling the optimization process.

The real data obtained from BEMS and the additional measurements have been utilized to explain faults in the hydronic heating system. To couple real data and the simple heat balance model, the procedure for the model calibration by use of an optimization algorithm has been developed. Using this model, three operating faults in the hydronic heating system have been elaborated.

Using the simulation tools EnergyPlus and TRNSYS, several fault detection and diagnosis (FDD) rules have been generated. The FDD rules were established in three steps: testing different faults, calculating the performance indices (PI), and classifying the observed PIs. These rules have been established for the air cooling system and the hydronic heating system. The rules can diagnose the control and the component faults. Finally, analyzing the causes and the effects of the tested faults, useful information for the building maintenance has been descriptively explained.

The most important conclusions are related to a practical connection of the real data and simulation-based tools. For a complete understanding of system faults, it is necessary to provide real-life information. Even though BEMS provides many building data, it was proven that BEMS is not completely utilized. Therefore, the control strategies can always be improved and tuned to the actual building demands using the simulation and optimization tools. It was proven that many different FDD rules for HVAC systems can be generated using the simulation tools. Therefore, these FDD rules can be used as manual instructions for the building operators or as a framework for the automated FDD algorithms.

Papers II, VI and VII are reprinted with kind permission from Elsevier, sciencedirect.com.
10

Forsberg, Nils. "Evaluation of Real-Time Performance in Virtualized Environment". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-12402.

Full text
Abstract
This report documents the research, tests and conclusions of a thesis project aimed at investigating the possibilities of running real-time tasks in a virtualized environment. First we introduce the reader to the concepts and technology we will be touching on, and then we investigate the available solutions. We find that most of these are merely at a theoretical or development stage, and so we evaluate them theoretically. We also attempt to test one of the solutions that is fully developed and available, but fail because of issues related to the design of the solution. Based on our experiences and evaluations we come to the conclusion that the available solutions are lacking, and we give a suggestion of our own that we think should address the issues we have found.
11

Paradis, Matthew Daniel Jean. "Mapping and interactivity in real-time musical performance". Thesis, University of York, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.727129.

Full text
12

Brooks, Gail Dean. "Data coverage performance evaluation for real-time systems". Master's thesis, This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-01202010-020017/.

Full text
13

Hoang, Hoai. "Enhancing the Performance of Distributed Real-time Systems". Doctoral thesis, Högskolan i Halmstad, Halmstad Embedded and Intelligent Systems Research (EIS), 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-1986.

Full text
Abstract
Advanced embedded systems can consist of many sensors, actuators and processors that are deployed on one or several boards, while having a demand of interacting with each other and sharing resources. Communication between different components usually has strict timing constraints. There is thus a strong need to provide solutions for time critical communication. This thesis focuses on both the support of real-time services over standard switched Ethernet networks and the improvement of systems' real-time characteristics, such as reducing delay and jitter in processors and on communication links. Switched Ethernet has been chosen in this work because of its major advantages in industry; it supports higher bit-rates than most other current LAN (Local Area Network) technologies, including field buses, still at a low cost. We propose using a star network topology with a single Ethernet switch. Each node is connected to a separate port of the switch via a full-duplex link, thereby eliminating collisions. A solid real-time communication protocol for switched Ethernet networks is proposed in the thesis, including a real-time layer between the Ethernet layer and the TCP/IP suite. The network has the capability of supporting both real-time and non real-time traffic and assuring adaptation to the surrounding protocol standards. Most embedded systems work in a dynamic environment, where the precise behavior of the network traffic can usually not be predicted. To support real-time services, we have chosen the Earliest Deadline scheduling algorithm (EDF) because of its optimality, high efficiency and suitability for being used in adaptive schemes. To be able to increase the amount of guaranteed real-time traffic, the notion of Asymmetric Deadline Partitioning Scheme (ADPS) is introduced. ADPS allows distribution of the end-to-end deadline of a message, sent from any source node in the network to any destination node via the switch, into two sub-deadlines, one for each hop according to the load of the physical link that it must traverse. For the EDF scheduling algorithm, the feasibility test is one of the most important techniques that provides us with information about whether or not the real-time traffic can be guaranteed by the network. With the same computational complexity as the feasibility test, a method has been developed to compute the minimum EDF-feasible deadline for a real-time task. The importance of this method in real-time applications lies in that it can be effectively used to reduce the response times of specific control activities or limit their input-output jitter. To allow more flexibility in the control of delay and jitter in real-time systems, a general approach for reducing task deadlines according to the requirements of individual tasks has been developed. The method allows the user to specify a deadline reduction factor for each task in order to better exploit the available slack according to the tasks' actual requirements.
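
As a rough illustration of two ingredients mentioned in the abstract, the sketch below combines the classic utilization-based EDF feasibility condition for a single link (for preemptive tasks with relative deadlines equal to periods) with a load-weighted split of an end-to-end deadline into two per-hop sub-deadlines. The proportional weighting is only one plausible reading of the asymmetric partitioning idea, not the exact ADPS formula from the thesis; all parameter values are arbitrary examples.

```c
/* Sketch: EDF feasibility on one link and a load-weighted split of an
 * end-to-end deadline into two per-hop sub-deadlines (an ADPS-like idea).
 * The proportional weighting below is illustrative, not the thesis's
 * exact formula. */
#include <stdio.h>

typedef struct {
    double c;   /* transmission (execution) time */
    double t;   /* period */
} message;

/* Classic EDF condition when relative deadline == period: sum(c/t) <= 1. */
static int edf_feasible(const message *m, int n)
{
    double u = 0.0;
    for (int i = 0; i < n; ++i)
        u += m[i].c / m[i].t;
    return u <= 1.0;
}

/* Split end-to-end deadline D over two hops in proportion to each hop's
 * current utilization, so the busier hop receives the larger share. */
static void split_deadline(double d_end_to_end, double u_hop1, double u_hop2,
                           double *d1, double *d2)
{
    double total = u_hop1 + u_hop2;
    if (total <= 0.0) { *d1 = *d2 = d_end_to_end / 2.0; return; }
    *d1 = d_end_to_end * (u_hop1 / total);
    *d2 = d_end_to_end * (u_hop2 / total);
}

int main(void)
{
    message link1[] = { {1.0, 10.0}, {2.0, 20.0}, {4.0, 40.0} };
    printf("link 1 EDF-feasible: %s\n",
           edf_feasible(link1, 3) ? "yes" : "no");

    double d1, d2;
    split_deadline(8.0, 0.3, 0.5, &d1, &d2);   /* 8 ms end-to-end deadline */
    printf("sub-deadlines: hop1 = %.2f ms, hop2 = %.2f ms\n", d1, d2);
    return 0;
}
```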

Also published in the series: Technical report. D / Department of Computer Science and Engineering, Chalmers University of Technology, 1653-1787 ; 28

14

Bono, John and Preston Hauck. "IMPROVING REAL-TIME LATENCY PERFORMANCE ON COTS ARCHITECTURES". International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/606746.

Full text
Abstract
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Telemetry systems designed to support the current needs of mission-critical applications often have stringent real-time requirements. These systems must guarantee a maximum worst-case processing and response time when incoming data is received. These real-time tolerances continue to tighten as data rates increase. At the same time, end user requirements for COTS pricing efficiencies have forced many telemetry systems to now run on desktop operating systems like Windows or Unix. While these desktop operating systems offer advanced user interface capabilities, they cannot meet the real-time requirements of many mission-critical telemetry applications. Furthermore, attempts to enhance desktop operating systems to support real-time constraints have met with only limited success. This paper presents a telemetry system architecture that offers real-time guarantees while at the same time extensively leveraging inexpensive COTS hardware and software components. This is accomplished by partitioning the telemetry system onto two processors. The first processor is a NetAcquire subsystem running a real-time operating system (RTOS). The second processor runs a desktop operating system running the user interface. The two processors are connected together with a high-speed Ethernet IP internetwork. This architecture affords an improvement of two orders of magnitude over the real-time performance of a standalone desktop operating system.
15

Powell, Richard L., Gale L. Williamson, Farhand Razavian and Paul J. Friedman. "High Performance, Real-Time, Parallel Processing Telemetry System". International Foundation for Telemetering, 1988. http://hdl.handle.net/10150/615236.

Full text
Abstract
International Telemetering Conference Proceedings / October 17-20, 1988 / Riviera Hotel, Las Vegas, Nevada
Flight test and signal and image processing systems have shown an increasingly voracious appetite for computer resources. Previous solutions employed special-purpose, bit-sliced technology to supplant costly general purpose computers. Although the hardware is less expensive and the throughput greater, the expense to develop or modify applications is very high. Recent parallel processor technology has increased capabilities, but the high applications development cost remains. Input/output (I/O) such as intermediate mass storage and display has been limited to transfer to general purpose or attached I/O computers. The PRO 550 Processing and Storage Subsystem of the System 500 was developed to provide linearly expandable, programmable real-time processing and an interface to distributed data acquisition subsystems. Each data acquisition subsystem can acquire data from multiple telemetry and other real-time sources. Processing resources are provided by one or more 8 MIPS (20 MFLOPS peak) processor modules, which utilize an array of predefined algorithms, algorithms specified by algebraic notation, or developed via high level languages (C and Fortran). Setup and program development occur on an external, general purpose color graphics workstation that is connected to the subsystem via an Ethernet network for command, control, and resultant data display. High-performance peripherals and processors communicate with each other via a 16-MHz broadcast bus, the MUXbus II, where any or all devices can acquire data elements called tokens. A token is a single MUXbus II word of 32 bits of data and a 16-bit tag to identify the word uniquely to the acquiring modules. The output of each device to the bus can be one or more tokens, but each device captures the bus to insert a single token. This ensures all devices receive equal priority and the MUXbus II is maximally utilized. This multiple instruction, multiple data (MIMD) architecture automatically schedules and routes data to processors or to I/O modules without control processor overhead. Traditional peripherals and administrative functions utilize the second subsystem bus, which is a traditional VMEbus. It controls the high performance devices while permitting the utilization of standard off-the-shelf controllers (e.g., magnetic tape, Ethernet, and bus controllers) for less demanding I/O tasks. A dedicated Bridge Module is the gateway for moving data between bus domains.
16

Sánchez, Lorenzo Manuel Antonio. "Techniques for performance based, real-time facial animation". Thesis, University of Sheffield, 2006. http://etheses.whiterose.ac.uk/14897/.

Full text
17

Struhar, Vaclav. "Improving Soft Real-time Performance of Fog Computing". Licentiate thesis, Mälardalens högskola, Inbyggda system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55679.

Full text
Abstract
Fog computing is a distributed computing paradigm that brings data processing from remote cloud data centers into the vicinity of the edge of the network. The computation is performed closer to the source of the data, and thus it decreases the time unpredictability of cloud computing that stems from (i) the computation in shared multi-tenant remote data centers, and (ii) long distance data transfers between the source of the data and the data centers. The computation in fog computing provides fast response times and enables latency sensitive applications. However, industrial systems require time-bounded, real-time (RT) response times. The correctness of such systems depends not only on the logical results of the computations but also on the physical time instant at which these results are produced. Time-bounded responses in fog computing are attributed to two main aspects: computation and communication. In this thesis, we explore both aspects targeting soft RT applications in fog computing, in which the usefulness of the produced computational results degrades with real-time requirement violations. With regard to the computation, we provide a systematic literature survey on a novel lightweight RT container-based virtualization that ensures spatial and temporal isolation of co-located applications. Subsequently, we utilize a mechanism enabling RT container-based virtualization and propose a solution for orchestrating RT containers in a distributed environment. Concerning the communication aspect, we propose a solution for dynamic bandwidth distribution in virtualized networks.
18

UMBERG, KATHERINE A. "PERFORMANCE EVALUATION OF REAL-TIME EVENT DETECTION ALGORITHMS". University of Cincinnati / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1155777781.

Full text
19

Jenkins, William George. "Real-time vehicle performance monitoring with data integrity". Master's thesis, Mississippi State : Mississippi State University, 2006. http://sun.library.msstate.edu/ETD-db/ETD-browse/browse.

Full text
20

Choi, Hyongjun. "Definitions of performance indicators in real-time and lapsed-time analysis in performance analysis of sports". Thesis, Cardiff Metropolitan University, 2008. http://hdl.handle.net/10369/4369.

Full text
Abstract
Performance analysis is an objective method of gathering performance data, and generally transforms these observations into numerical form. Performance indicators, as well as a selection of elements of successful outcome, have often been used in order to feed back augmented information in performance analysis systems, but they have rarely been considered within the classification of performance analysis systems based on the timing of analysis and feedback. The main aim of this study is to investigate performance indicators used within real-time and lapsed-time systems so that the definitions of the performance indicators, their effectiveness, and their reliability and validity within real-time analysis systems can be analyzed.
21

Furht, B., A. Boujarwah, D. Gluch, D. Joseph, D. Kamath, P. Matthews, M. McCarty, R. Stoehr and R. Sureswaran. "A TOOL FOR PERFORMANCE EVALUATION OF REAL-TIME UNIX OPERATING SYSTEMS". International Foundation for Telemetering, 1991. http://hdl.handle.net/10150/612926.

Full text
Abstract
International Telemetering Conference Proceedings / November 04-07, 1991 / Riviera Hotel and Convention Center, Las Vegas, Nevada
In this paper we present the REAL/STONE Real-Time Tester, a tool for performance evaluation of real-time UNIX operating systems. The REAL/STONE Real-Time Tester is a synthetic benchmark that simulates a typical real-time environment. The tool performs typical real-time operations, such as: (a) reads data from an external source and accesses it periodically, (b) processes data through a number of real-time processes, and (c) displays the final data. This study can help users in selecting the most effective real-time UNIX operating system for a given application.
22

Grelsson, David. "Tile Based Procedural Terrain Generation in Real-Time : A Study in Performance". Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5409.

Full text
Abstract
Context. Procedural Terrain Generation refers to the algorithmical creation of terrains with limited or no user input. Terrains are an important piece of content in many video games and other forms of simulations. Objectives. In this study a tile-based approach to creating endless terrains is investigated. The aim is to find if real-time performance is possible using the proposed method and possible performance increases from utilization of the GPU. Methods. An application that allows the user to walk around on a seemingly endless terrain is created in two versions, one that exclusively utilizes the CPU and one that utilizes both CPU and GPU. An experiment is then conducted that measures performance of both versions of the application. Results. Results showed that real-time performance is indeed possible for smaller tile sizes on the CPU. They also showed that the application benefits significantly from utilizing the GPU. Conclusions. It is concluded that the tile-based approach works well and creates a functional terrain. However performance is too poor for the technique to be utilized in e.g. a video game.
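
The abstract does not give the noise or tiling scheme actually used in the thesis, so the following is only a minimal sketch of the general tile-based idea: heights are derived from a deterministic hash of world-space coordinates, so neighbouring tiles generated independently still agree along their shared edges. Tile size and the hash constants are arbitrary example choices.

```c
/* Sketch of tile-based terrain generation: heights come from a deterministic
 * hash of world coordinates, so independently generated neighbouring tiles
 * agree along shared edges. Illustrative only; not the noise or detail
 * scheme used in the thesis. */
#include <stdio.h>
#include <stdint.h>

#define TILE_SIZE 17            /* vertices per tile edge (16 quads) */

/* Small integer hash mapped to [0,1): stands in for a real noise function. */
static double height_at(int32_t wx, int32_t wz)
{
    uint32_t h = (uint32_t)wx * 374761393u + (uint32_t)wz * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h ^ (h >> 16)) / 4294967296.0;
}

/* Fill one tile; (tx,tz) is the tile index in the endless grid. */
static void build_tile(int tx, int tz, double out[TILE_SIZE][TILE_SIZE])
{
    for (int z = 0; z < TILE_SIZE; ++z)
        for (int x = 0; x < TILE_SIZE; ++x)
            out[z][x] = height_at(tx * (TILE_SIZE - 1) + x,
                                  tz * (TILE_SIZE - 1) + z);
}

int main(void)
{
    double a[TILE_SIZE][TILE_SIZE], b[TILE_SIZE][TILE_SIZE];
    build_tile(0, 0, a);
    build_tile(1, 0, b);        /* tile directly to the east */
    /* Shared edge: right column of tile (0,0) == left column of (1,0). */
    printf("edge match: %s\n",
           a[5][TILE_SIZE - 1] == b[5][0] ? "yes" : "no");
    return 0;
}
```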
23

Knutsson, Tobias. "Performance Evaluation of GNU/Linux for Real-Time Applications". Thesis, Uppsala University, Department of Information Technology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-88748.

Full text
Abstract

GNU/Linux systems have become strong competitors in the embedded real-time systems segment. Many companies are beginning to see the advantages with using free software. As a result, the demand to provide systems based on the Linux kernel has soared. The problem is that there are many ways of achieving real-time performance in GNU/Linux. This report evaluates some of the currently available alternatives. Using Xenomai, the PREEMPT_RT patch and the mainline Linux kernel, different approaches to real-time GNU/Linux are compared by measuring their interrupt and scheduling latency. The measurements are performed with the self-developed Tennis Test Tool on an Intel XScale based Computer-On-Module with 128MB of RAM, running at 520MHz. The test results show that Xenomai maintains short response times of 58ms and 76ms with regard to interrupt and scheduling latencies respectively, even during heavy load of the Linux domain. When the Xenomai domain is loaded as well, responsiveness drops to 247ms for interrupt latency and 271ms for scheduling latency, making it a dead race between Xenomai and the PREEMPT_RT patched kernel. The mainline kernel performs very well when not subjected to any workload. In the tests with more load applied, performance deteriorates fast with resulting latencies of over 12ms.
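
The Tennis Test Tool itself is not described in detail in the abstract; the sketch below only shows a common way to measure scheduling latency of the kind reported above: a SCHED_FIFO thread sleeps until an absolute deadline with clock_nanosleep() and records how late it actually wakes up. The period, priority and loop count are arbitrary choices for the example.

```c
/* Sketch of a scheduling-latency measurement in the spirit of the tests
 * described above: sleep until an absolute time and record the wake-up
 * lateness. Not the thesis's Tennis Test Tool. Run as root (or with the
 * right capabilities) so that SCHED_FIFO can be set. */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <sched.h>
#include <sys/mman.h>

#define PERIOD_NS 1000000L      /* 1 ms period */
#define LOOPS     1000

static long ns_diff(struct timespec a, struct timespec b)
{
    return (a.tv_sec - b.tv_sec) * 1000000000L + (a.tv_nsec - b.tv_nsec);
}

int main(void)
{
    struct sched_param sp = { .sched_priority = 80 };
    sched_setscheduler(0, SCHED_FIFO, &sp);     /* best effort if not root */
    mlockall(MCL_CURRENT | MCL_FUTURE);         /* avoid paging delays */

    struct timespec next, now;
    long worst = 0;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int i = 0; i < LOOPS; ++i) {
        next.tv_nsec += PERIOD_NS;              /* next absolute deadline */
        while (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);
        long lat = ns_diff(now, next);          /* wake-up lateness */
        if (lat > worst)
            worst = lat;
    }
    printf("worst observed scheduling latency: %ld ns\n", worst);
    return 0;
}
```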

24

Duvall, David C. "Real-time MIDI performance evaluation for beginning piano students". Connect to this title online, 2008. http://etd.lib.clemson.edu/documents/1219869971/.

Full text
25

Smeds, Kristofer S. "High-performance real-time motion control for precision systems". Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/34020.

Full text
Abstract
Digital motion controllers are used almost exclusively in automated motion control systems today. Their key performance parameters are controller execution speed, timing consistency, and data accuracy. Most commercially available controllers can achieve sampling rates up to 20kHz with several microseconds of timing variation between control cycles. A few state-of-the-art control platforms can reach sampling rates of around 100kHz with several hundred nanoseconds of timing variation. There exists a growing number of emerging high-speed high-precision applications, such as diamond turning and scanning probe microscopy, that can benefit from digital controllers capable of faster sampling rates, more consistent timing, and higher data accuracy. This thesis presents two areas of research intended to increase the capabilities of digital motion controllers to meet the needs of these high-speed high-precision applications. First, it presents a new high-performance real-time multiprocessor control platform capable of 1MHz control sampling rates with less than 6ns RMS control cycle timing variation and 16-bit data acquisition accuracy. This platform also includes software libraries to integrate it with Simulink for rapid controller development and LabVIEW for easy graphical user interface development. This thesis covers the design of the control platform and experimentally demonstrates it as a motion controller for a fast-tool servo machine tool. Second, this thesis investigates the effect of control cycle timing variations (sampling jitter and control jitter) on control performance, with an emphasis on precision positioning degradation. A new approximate discrete model is developed to capture the effects of jitter, enabling an intuitive understanding of its effects on the control system. Based on this model, analyses are carried out to determine the relationship between jitter and positioning error for two scenarios: regulation error from jitter's interaction with measurement noise; and tracking error from jitter's interaction with a deterministic reference command. Further, several practical methods to mitigate the positioning degradation due to jitter are discussed, including a new jitter compensator that can be easily added to an existing controller. Through simulations and experiments performed on a fast-tool servo machine tool, the model and analyses are validated and the positioning degradation arising from jitter is clearly demonstrated.
26

Asadi, Nima. "Enhancing the Monitoring of Real-Time Performance in Linux". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-24798.

Full text
Abstract
There is a growing trend in applying the Linux operating system in the domain of embedded systems. This is due to the important features that Linux benefits from, such as being open source, its light weight compared to other major operating systems, its adaptability to different platforms, and its more stable performance speed. However, there are upgrades that still need to be done in order to use Linux for real-time purposes. A number of different approaches have been suggested in order to improve Linux's performance in a real-time environment. Nevertheless, proposing a correct-by-construction system is very difficult in real-time environments, mainly due to their complexity and unpredictability. Thus, run-time monitoring can be a helpful approach in order to provide the user with data regarding the actual timing behavior of the system, which can be used for its analysis and modification. In this thesis work, a design for run-time monitoring is suggested and implemented on a real-time scheduler module that assists Linux with real-time tasks. Besides providing crucial data regarding the timing performance of the system, this monitor predicts violations of timing requirements based on the current trace of the system performance.
27

McHenry, John. "Performance evaluation of multicomputer networks for real-time computing". Thesis, Virginia Tech, 1990. http://hdl.handle.net/10919/42085.

Full text
Abstract
Real-time constraints place additional limitations on distributed memory computing systems. Message passing delay variance and maximum message delay are important aspects of such systems that are often neglected by performance studies. This thesis examines the performance of the spanning bus hypercube, dual bus hypercube, and torus topologies to understand their desirable characteristics for real-time systems. FIFO, TDM, and token passing link access protocols and several queueing priorities are studied to measure their effect on the system’s performance. Finally, the contribution of the message parameters to the overall system delay is discussed. Existing analytic models are extended to study delay variance and maximum delay in addition to mean delay. These models separate the effects of node and link congestion, and thus provide a more accurate method for studying multicomputer networks. The SLAM simulation language substantiates results obtained analytically for the mean and variance of message delay for the FIFO link access protocol, as well as providing a method for measuring the message delay for the other link access protocols and queueing priorities. Both analytic and simulation results for the various topologies, protocols, priorities, and message parameters are presented.
Master of Science
28

Kim, Hyoseung. "Towards Predictable Real-Time Performance on Multi-Core Platforms". Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/836.

Full text
Abstract
Cyber-physical systems (CPS) integrate sensing, computing, communication and actuation capabilities to monitor and control operations in the physical environment. A key requirement of such systems is the need to provide predictable real-time performance: the timing correctness of the system should be analyzable at design time with a quantitative metric and guaranteed at runtime with high assurance. This requirement of predictability is particularly important for safety-critical domains such as automobiles, aerospace, defense, manufacturing and medical devices. The work in this dissertation focuses on the challenges arising from the use of modern multi-core platforms in CPS. Even as of today, multi-core platforms are rarely used in safety-critical applications primarily due to the temporal interference caused by contention on various resources shared among processor cores, such as caches, memory buses, and I/O devices. Such interference is hard to predict and can significantly increase task execution time, e.g., by up to 12x on commodity quad-core platforms. To address the problem of ensuring timing predictability on multi-core platforms, we develop novel analytical and systems techniques in this dissertation. Our proposed techniques theoretically bound temporal interference that tasks may suffer from when accessing shared resources. Our techniques also involve software primitives and algorithms for real-time operating systems and hypervisors, which significantly reduce the degree of the temporal interference. Specifically, we tackle the issues of cache and memory contention, locking and synchronization, interrupt handling, and access control for computational accelerators such as general-purpose graphics processing units (GPGPUs), all of which are crucial to achieving predictable real-time performance on a modern multi-core platform. Our solutions are readily applicable to commodity multi-core platforms, and can be used not only for developing new systems but also for migrating existing applications from single-core to multi-core platforms.
29

Winnemöller, Holger. "Implementing non-photorealistic rendering enhancements with real-time performance". Thesis, Rhodes University, 2002. http://hdl.handle.net/10962/d1003135.

Full text
Abstract
We describe quality and performance enhancements, which work in real-time, to all well-known Non-photorealistic (NPR) rendering styles for use in an interactive context. These include Comic rendering, Sketch rendering, Hatching and Painterly rendering, but we also attempt and justify a widening of the established definition of what is considered NPR. In the individual Chapters, we identify typical stylistic elements of the different NPR styles. We list problems that need to be solved in order to implement the various renderers. Standard solutions available in the literature are introduced and in all cases extended and optimised. In particular, we extend the lighting model of the comic renderer to include a specular component and introduce multiple inter-related but independent geometric approximations which greatly improve rendering performance. We implement two completely different solutions to random perturbation sketching, solve temporal coherence issues for coal sketching and find an unexpected use for 3D textures to implement hatch-shading. Textured brushes of painterly rendering are extended by properties such as stroke-direction and texture, motion, paint capacity, opacity and emission, making them more flexible and versatile. Brushes are also provided with a minimal amount of intelligence, so that they can help in maximising screen coverage of brushes. We furthermore devise a completely new NPR style, which we call super-realistic and show how sample images can be tweened in real-time to produce an image-based six degree-of-freedom renderer performing at roughly 450 frames per second. Performance values for our other renderers all lie between 10 and over 400 frames per second on home PC hardware, justifying our real-time claim. A large number of sample screen-shots, illustrations and animations demonstrate the visual fidelity of our rendered images. In essence, we successfully achieve our attempted goals of increasing the creative, expressive and communicative potential of individual NPR styles, increasing performance of most of them, adding original and interesting visual qualities, and exploring new techniques or existing ones in novel ways.
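
The renderers themselves are GPU-accelerated and are not reproduced here; the sketch below only illustrates, on the CPU, the lighting extension mentioned above for comic (cel) shading: the diffuse term is quantised into a few bands and a hard-edged, thresholded specular highlight is added. Band boundaries, the specular exponent and the thresholds are arbitrary example values, not the thesis's parameters.

```c
/* Sketch of comic (cel) shading extended with a specular term: the diffuse
 * term is quantised into a few bands and a thresholded highlight is added.
 * Illustrative only; compile with -lm. */
#include <stdio.h>
#include <math.h>

typedef struct { double x, y, z; } vec3;

static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static vec3 normalize(vec3 v)
{
    double len = sqrt(dot(v, v));
    vec3 r = { v.x/len, v.y/len, v.z/len };
    return r;
}

/* n: surface normal, l: light direction, v: view direction (all unit). */
static double comic_shade(vec3 n, vec3 l, vec3 v)
{
    double diff = fmax(dot(n, l), 0.0);
    /* Quantise diffuse into 3 bands: shadow, mid-tone, lit. */
    double band = diff > 0.66 ? 1.0 : diff > 0.33 ? 0.6 : 0.25;

    /* Hard-edged specular: half vector, thresholded highlight. */
    vec3 h = normalize((vec3){ l.x + v.x, l.y + v.y, l.z + v.z });
    double spec = pow(fmax(dot(n, h), 0.0), 32.0) > 0.5 ? 0.4 : 0.0;

    return fmin(band + spec, 1.0);
}

int main(void)
{
    vec3 n = {0, 0, 1};
    vec3 l = normalize((vec3){0.3, 0.2, 1.0});
    vec3 v = {0, 0, 1};
    printf("shaded intensity: %.2f\n", comic_shade(n, l, v));
    return 0;
}
```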
30

Manolache, Sorin. "Schedulability analysis of real-time systems with stochastic task execution times". Licentiate thesis, Linköping University, ESLAB - Embedded Systems Laboratory, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5730.

Full text
Abstract

Systems controlled by embedded computers have become indispensable in our lives and can be found in avionics, the automotive industry, home appliances, medicine, the telecommunication industry, mechatronics, the space industry, etc. Fast, accurate and flexible performance estimation tools giving feedback to the designer in every design phase are a vital part of a design process capable of producing high-quality designs of such embedded systems.

In the past decade, the limitations of models considering fixed task execution times have been acknowledged for large application classes within soft real-time systems. A more realistic model considers the tasks having varying execution times with given probability distributions. No restriction has been imposed in this thesis on the particular type of these functions. Considering such a model, with specified task execution time probability distribution functions, an important performance indicator of the system is the expected deadline miss ratio of tasks or task graphs.

This thesis proposes two approaches for obtaining this indicator in an analytic way. The first is an exact one while the second approach provides an approximate solution trading accuracy for analysis speed. While the first approach can efficiently be applied to monoprocessor systems, it can handle only very small multi-processor applications because of complexity reasons. The second approach, however, can successfully handle realistic multiprocessor applications. Experiments show the efficiency of the proposed techniques.
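
As a point of reference for the performance indicator named above, one common way to formalise the expected deadline miss ratio of a task is sketched below; the thesis's exact formulation may differ.

```latex
% One plausible formalisation (not necessarily the thesis's exact definition):
% task tau_i releases jobs J_{i,1}, J_{i,2}, ..., each with a random response
% time R_{i,j} induced by the execution-time distributions, and with relative
% deadline D_i. The expected deadline miss ratio is the expected fraction of
% jobs that miss their deadline:
\[
  \mathrm{EMR}_i \;=\; \lim_{N \to \infty} \frac{1}{N} \sum_{j=1}^{N}
  \Pr\bigl[\, R_{i,j} > D_i \,\bigr].
\]
```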


Report code: LiU-Tek-Lic-2002:58.
31

Ryrstedt, Emmy. "Performance Testing and Response Time Validation of a Financial Real-Time Java Application". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-215330.

Full text
Abstract
System performance determines how fast a system can deliver its services when it is exposed to different loads. In real-time computing the system performance is a critical aspect, since the usefulness or correctness of a response from a real-time system depends not only on the content of the response, but also on when it is delivered. If the response is delivered too fast or too slow it is considered an error and the system might go into a bad state, even if the value of the response actually is correct. Even though timing is a crucial aspect in real-time computing, it is hard to find any established methods on how to measure and evaluate the performance of a real-time system in terms of timing. This report strives to contribute to development in this research area by describing a project that investigates how to scientifically measure and report the timing performance of a financial real-time Java application. During the project a tool is implemented in a foreign exchange system, that can perform time measurements of different components in the system at application level. Experiments with variations of input values are constructed and executed to validate the system performance during different loads, by analyzing the measurements. The results from the experiments give a ranking of how much various factors impact the performance of the system, and show how it is possible to find threshold values and bottlenecks by studying the value distributions and maximum values. The developed method can be used to compare the performance effects of different factors and to compare the system performance for different parameter values. The method shows to be a useful way to measure and validate the performance of a financial real-time Java application.
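
The measurement tool built in the thesis instruments a Java system and is not reproduced here; the short sketch below only illustrates the analysis step the abstract describes: sorting recorded per-component latencies and reporting the median, 99th percentile and maximum. The sample values are made up for the example.

```c
/* Sketch of summarising recorded component latencies into the statistics
 * discussed above (median, 99th percentile, maximum). Illustrative only;
 * the thesis instruments a Java system, this is just the analysis step. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_long(const void *a, const void *b)
{
    long x = *(const long *)a, y = *(const long *)b;
    return (x > y) - (x < y);
}

/* Index into the sorted array by linear position, rounded down. */
static long percentile(const long *sorted, size_t n, double p)
{
    size_t idx = (size_t)(p * (n - 1));
    return sorted[idx];
}

int main(void)
{
    /* Latencies in microseconds as they might be recorded by a probe. */
    long lat[] = { 120, 95, 110, 4500, 130, 101, 99, 180, 125, 105 };
    size_t n = sizeof lat / sizeof lat[0];

    qsort(lat, n, sizeof lat[0], cmp_long);
    printf("p50 = %ld us, p99 = %ld us, max = %ld us\n",
           percentile(lat, n, 0.50),
           percentile(lat, n, 0.99),
           lat[n - 1]);
    return 0;
}
```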
32

Jonsson, Magnus. "Fiber-Optic Interconnections in High-Performance Real-Time Computer Systems". Licentiate thesis, Halmstad University, Embedded Systems (CERES), 1997. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-3077.

Texto completo
Resumen

Future parallel computer systems for embedded real-time applications, where each node in itself can be a parallel computer, are predicted to have very high bandwidth demands on the interconnection network. Other important properties are time-deterministic latency and guarantees to meet deadlines. In this thesis, a fiber-optic passive optical star network with a medium access protocol for packet switched communication in distributed real-time systems is proposed. By using WDM (Wavelength Division Multiplexing), multiple channels, each with a capacity of several Gb/s, are obtained.

A number of protocols for WDM star networks have recently been proposed. However, the area of real-time protocols for these networks is quite unexplored. The protocol proposed in this thesis is based on TDMA (Time Division Multiple Access) and uses a new distributed slot-allocation algorithm with real-time properties. Services for both guarantee-seeking messages and best-effort messages are supported for single destination, multicast, and broadcast transmission. Slot reserving can be used to increase the time-deterministic bandwidth, while still having an efficient bandwidth utilization due to a simple slot release method.

By connecting several clusters of the proposed WDM star network by a backbone star, thus forming a star-of-stars network, we get a modular and scalable high-bandwidth network. The deterministic properties of the network are theoretically analyzed for both intra-cluster and inter-cluster communication, and computer simulations of intra-cluster communication are reported. Also, an overview of high-performance fiber-optic communication systems is presented.
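The protocol described in the abstract allocates TDMA slots so that guarantee-seeking messages receive deterministic bandwidth. The sketch below shows, under assumed parameters (slot size, cycle length, free slots per cycle), the kind of admission check such a scheme implies; it only illustrates the idea and is not the distributed slot-allocation algorithm developed in the thesis.

```java
/** Illustrative check for a TDMA-based guarantee: can a guarantee-seeking
 *  message obtain enough free slots before its deadline? Parameters and the
 *  admission rule are illustrative only. */
public class TdmaGuaranteeCheck {
    static boolean canGuarantee(int messageBytes, int slotPayloadBytes,
                                double slotTimeMs, int freeSlotsPerCycle,
                                double cycleTimeMs, double deadlineMs) {
        int slotsNeeded = (int) Math.ceil((double) messageBytes / slotPayloadBytes);
        // Number of whole TDMA cycles that complete before the deadline.
        int cyclesAvailable = (int) Math.floor(deadlineMs / cycleTimeMs);
        int slotsAvailable = cyclesAvailable * freeSlotsPerCycle;
        return slotsAvailable >= slotsNeeded
                && slotsNeeded * slotTimeMs <= deadlineMs;
    }

    public static void main(String[] args) {
        System.out.println(canGuarantee(32_000, 1_500, 0.012, 8, 1.0, 5.0));  // fits -> true
        System.out.println(canGuarantee(320_000, 1_500, 0.012, 8, 1.0, 5.0)); // too large -> false
    }
}
```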

Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Thekkilakattil, Abhilash. "Resource Augmentation for Performance Guarantees in Embedded Real-time Systems". Licentiate thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-16092.

Texto completo
Resumen
Real-time scheduling policies have been widely studied, with many known schedulability and feasibility analysis techniques for different task models that have advanced the state of the art. Most of these techniques are typically derived under the assumption of negligible runtime overheads, which may not be realistic for modern embedded real-time systems and hence potentially compromises the guarantees on their correct behavior. This calls for methods to reason about the functioning of the system in the presence of such overheads, as well as to predictably control them. Controlling these overheads may place additional performance demands, consequently requiring more resources such as faster processors. At the same time, the need for energy efficiency in this class of systems further complicates the problem and necessitates a holistic approach. In this thesis, we apply resource augmentation, viz. processor speed-up, to guarantee desired real-time properties even in the presence of runtime overheads. We specifically consider preemptions and faults that, at runtime, manifest as overheads in the system in various ways. Our aim is to provide specified non-preemption and fault tolerance feasibility guarantees in a real-time system. We first propose offline and online methods that use CPU frequency scaling to control the number of preemptions in periodic and sporadic task systems under a preemptive Fixed Priority Scheduling (FPS) policy. Furthermore, we derive the resource augmentation bound, specifically the upper bound on the lowest processor speed, that guarantees the feasibility of a specified non-preemption behavior for any real-time task. We show that, for any task Ti, the resource augmentation bound that guarantees a non-preemptive execution for a specified duration Li is given by 4Li/Dmin, where Dmin is the shortest deadline in the task set. Consequently, we show that the upper bound on the lowest processor speed that guarantees the feasibility of a non-preemptive schedule for the task set is 4Cmax/Dmin, where Cmax is the largest execution time in the task set. We then propose a method to guarantee specified upper bounds on the preemption-related overheads in the schedule. We first translate the requirement of meeting specified upper bounds on the preemption-related overheads into a set of non-preemption requirements for the task set. The resource augmentation bound, in conjunction with a sensitivity analysis, is used to calculate the optimal processor speed that guarantees the derived non-preemption requirements, achieving the specified bounds on the preemption-related costs. Finally, we derive the resource augmentation bound that guarantees the fault tolerance feasibility of a set of real-time tasks under an error burst of known length. We show that if the error burst length is no longer than half the shortest deadline in the task set, the resource augmentation bound that guarantees fault tolerance feasibility is 6. Our contribution bounds the extra resources, specifically the required processor speed-up, that provide specified non-preemption and fault tolerance feasibility guarantees in a real-time system. It allows us to quantify the 'goodness' of non-preemptive scheduling, referred to as its sub-optimality, as compared to an optimal uni-processor scheduling algorithm, in terms of the required processor speed-up that guarantees a non-preemptive schedule for any uni-processor feasible task set. We intend to extend this work to provide non-preemption and fault tolerance feasibility guarantees in multi-processor systems.
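Restated in display form, the speed-up bounds quoted in the abstract read as follows (the left-hand symbols are introduced here only as shorthand for the required speed-ups):

```latex
\[
  S_{T_i} = \frac{4\,L_i}{D_{\min}},
  \qquad
  S_{\text{task set}} = \frac{4\,C_{\max}}{D_{\min}}
\]
```

where Li is the duration for which task Ti must run non-preemptively, Cmax is the largest execution time, and Dmin is the shortest deadline in the task set; for error bursts no longer than Dmin/2, the fault-tolerance speed-up bound is 6.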
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Petersson, Tommy y Marcus Lindeberg. "Performance aspects of layered displacement blending in real time applications". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3542.

Texto completo
Resumen
The purpose of this thesis is to investigate performance aspects of layered displacement blending, a technique used to render realistic and transformable objects in real-time rendering systems using the GPU. Layered displacement blending is done by blending layers of color maps and displacement maps together based on values stored in an influence map. In this thesis we construct a theoretical and practical model for layered displacement blending. The model is implemented in a test bed application to enable measurement of performance aspects. The implementation is fed input with variations in triangle count, number of subdivisions, texture size, and number of layers. The execution times for these different combinations are recorded and analyzed. The recorded execution times reveal that the number of layers associated with an object has no impact on performance. Further analysis reveals that layered displacement blending is heavily dependent on the triangle count of the input mesh. The results show that layered displacement blending is a viable option for representing transformable objects in real-time applications with respect to performance. This thesis provides: a theoretical model for layered displacement blending, an implementation of the model using the GPU, and measurements of that implementation.
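The abstract describes blending layers of displacement maps weighted by an influence map. A minimal CPU-side sketch of that per-texel arithmetic is given below; the normalised weighted sum is an assumption for illustration, since the thesis implements the blend on the GPU and may define it differently.

```java
/** CPU sketch of the per-texel blend described in the abstract: each layer
 *  contributes its displacement weighted by an influence-map value.
 *  The real technique runs on the GPU; this only illustrates the arithmetic. */
public class LayeredDisplacementBlend {

    /** Blend one texel: weights come from the influence map, one per layer. */
    static float blendDisplacement(float[] layerDisplacements, float[] influences) {
        float sum = 0f, weight = 0f;
        for (int l = 0; l < layerDisplacements.length; l++) {
            sum += influences[l] * layerDisplacements[l];
            weight += influences[l];
        }
        return weight > 0f ? sum / weight : 0f; // normalised weighted blend
    }

    public static void main(String[] args) {
        float[] disp = {0.10f, 0.45f, 0.80f};   // displacement from three layers
        float[] infl = {0.2f, 0.5f, 0.3f};      // influence-map values at this texel
        System.out.println("blended displacement = " + blendDisplacement(disp, infl));
    }
}
```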
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Ali, Majid. "Improving the Adaptive Context Views and Evaluate Real-Time Performance". Thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-20619.

Texto completo
Resumen
The versatility and scope of smartphone applications are increasing at a remarkable rate, and applications are becoming advanced enough to solve complicated real-time tasks. One of the important factors behind this advancement has been the powerful sensors embedded in smartphone devices and sensor networks. Moreover, context and context awareness would have remained a myth without the advent of sensors. The objective of this thesis has been to contribute to the research work carried out under the MediaSense project. Accordingly, the ultimate purpose of the thesis has been to evaluate and study the feasibility of the adaptive context view proposed in the MediaSense platform. In precise terms, the thesis has carried out three core tasks. Firstly, the theoretical presentation of related work and the significance of the research question have been discussed through various social applications. Secondly, a proof-of-concept application has been developed to simulate what has been proposed in the research work. Finally, an Android application has been designed and implemented in order to evaluate and study the techniques presented in a practical scenario. In the Android application, known as SundsvallBIGBuddies, we used the extensions designed for the existing MediaSense platform. The impact of using an Android app relying on a continuous stream of context data is presented using graphs and tables. To study the impact, we used smartphones and tablets from Samsung.
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Swenson, Rick L. "A real-time high performance universal colour transformation hardware system". Thesis, University of Kent, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342140.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Case, Steven V. "Performance Modeling of Asynchronous Real-time Communication Within Bluetooth Networks". NSUWorks, 2003. http://nsuworks.nova.edu/gscis_etd/446.

Texto completo
Resumen
This research provides an advance in the application of wireless ad hoc networks to the domain of distributed, real-time applications. Traditionally, wireless communications are not deployed within real-time systems, since the attributes of wireless protocols tend to run counter to the temporal requirements of real-time systems. When wireless protocols have been used in real-time systems, the application tends to be limited to systems for which there exists a priori knowledge of the network structure or the network communication. This research provides a model (or methodology) for evaluating the extent to which Bluetooth supports deterministic communication, thus allowing system engineers to validate Bluetooth's ability to support real-time deadlines within software applications based on asynchronous communication. This research consisted primarily of an evaluation of the applicability of Bluetooth protocols to asynchronous real-time communication. The research methodology consisted of three distinct stages of research and development. The first stage of the study comprised the development of an analytical model describing the expected behavior of Bluetooth's ACL transmissions and the ability of ACL data packets to meet real-time deadlines. During the second stage of the study, the focus turned to the implementation of the Bluetooth HCI and L2CAP protocol layers. This implementation served as a test harness to gather actual performance data using commercial Bluetooth radios. The final stage of the study consisted of a comparative analysis of the predicted behavior established during the first stage of the study and the actual behavior experienced using the Bluetooth implementation from the second stage of the study. The analysis demonstrates the effectiveness of the model (from the first stage) by measuring the model's ability to accurately predict a piconet's ability to meet real-time deadlines for asynchronous communication when measured at the HCI-L2CAP protocol layer boundary.
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Komsul, Muhammed Ziya. "Real time and performance management techniques in SSD storage systems". Thesis, University of Leicester, 2017. http://hdl.handle.net/2381/39144.

Texto completo
Resumen
Flash-based storage systems offer high density, robustness, and reliability for embedded applications; however, the physical nature of flash memory means that there are limitations to its use in high-reliability applications. To increase the reliability of flash-based storage systems, several RAID mechanisms have been proposed. These mechanisms permit the recovery of data onto a new replacement device when a particular device in the array reaches its endurance limit, and they need regular garbage collection to efficiently manage free resources. Both present response-time concerns: when garbage collection or a device replacement is underway, the flash memory cannot be used by the application layer for an uncertain period of time. This non-determinism in response time is problematic in high-reliability systems that require real-time guarantees. Existing solutions to garbage collection consider only a single flash chip and ignore architectures where multiple flash memories are used in a storage system such as RAID. Traditional replacement mechanisms designed for magnetic storage media do not suit the characteristics of flash memory. The aim of this thesis is to improve the reliability of SSD RAID mechanisms by providing guaranteed access times for hard real-time embedded applications. Investigating this hypothesis, a number of novel mechanisms are proposed with the goal of enhancing data reliability in an SSD array. Two novel mechanisms solve the non-determinism problem caused by garbage collection without disturbing the reliability mechanism, unlike existing techniques. The third mechanism comprises device replacement techniques for replacing elements in the array, increasing system dependability by providing continuous system availability with higher I/O performance for hard real-time embedded applications. A global flash translation layer with novel garbage collection mechanisms, on-line device replacement techniques, and their associated controllers are implemented on our FPGA SSD RAID controller. Contrary to traditional approaches, a dynamic preemptive cleaning mechanism adopts a dynamic cleaning feature that does not disturb the reliability mechanism. In addition, a garbage-collection-aware RAID mechanism is introduced to further improve the maximum response time of the system. On-line device replacement techniques address limitations of device replacement and thus provide more deterministic response times. The reliability, real-time behaviour, and performance of these mechanisms are also evaluated via a trace-driven simulator for a number of synthetic and realistic traces. The contribution of this thesis is as follows: the presentation of novel mechanisms that enable real-time support for RAID techniques in SSD devices, the development of a number of mechanisms that enhance the performance and reliability of flash-based storage, the implementation of these controllers, and the provision of a complete test bed for investigating these behaviours.
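The abstract's central concern is that cleaning (garbage collection) makes response times unpredictable. The sketch below illustrates the general idea of incremental, preemptible cleaning that yields to pending host requests; it is a textbook-style illustration under assumed bookkeeping (free/dirty page counters and a threshold), not the mechanisms developed in the thesis.

```java
/** Generic illustration of preemptible, incremental flash cleaning: garbage
 *  collection advances one page-copy at a time and yields whenever a host
 *  request is pending, so the worst-case response time stays bounded. */
public class IncrementalCleaner {
    private int freePages;
    private int dirtyPages;
    private final int gcThreshold;

    IncrementalCleaner(int freePages, int dirtyPages, int gcThreshold) {
        this.freePages = freePages;
        this.dirtyPages = dirtyPages;
        this.gcThreshold = gcThreshold;
    }

    /** Called from the I/O loop: clean only while the host queue is idle. */
    void maybeClean(java.util.function.BooleanSupplier hostRequestPending) {
        while (freePages < gcThreshold && dirtyPages > 0
                && !hostRequestPending.getAsBoolean()) {
            dirtyPages--;          // copy/erase one page worth of work...
            freePages++;           // ...then re-check for pending host requests
        }
    }

    public static void main(String[] args) {
        IncrementalCleaner c = new IncrementalCleaner(2, 100, 10);
        c.maybeClean(() -> false);             // idle bus: cleaner catches up
        System.out.println("free pages after idle cleaning: " + c.freePages);
    }
}
```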
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Bergstrand, Fredrik y Tobias Edqvist. "Performance Optimizing Priority Assignment in Embedded Soft Real-time Applications". Thesis, Linköpings universitet, Programvara och system, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148437.

Texto completo
Resumen
Optimizing task priority assignments is a well-researched area in the context of hard real-time systems, where the goal in the majority of cases is to produce a priority assignment that results in a schedulable task set. The problem has also been considered, albeit not to the same extent, in the soft real-time context where quality of service metrics determine the overall performance of systems. Previous research on the problem in the soft real-time context often resorts to some analytical approach, with the drawback of having to put relatively strict constraints on the system models to avoid excessively complex analysis computations. As a consequence, many attributes of a real system have to be omitted, and features such as multi-processor hardware platforms might make the analytical approach unfeasible due to complexity issues. In this thesis we took a different approach to the problem and used discrete event simulation to drive the priority assignment optimization process, which enabled more complex system models at the cost of increased objective function evaluation times. A latency-related quality of service metric was used as the objective function in a tabu search based optimization heuristic. Improvements were observed in both simulation and in the real system that was modeled. The results show that the model successfully captured key attributes of the modeled system, and that the discrete event simulation approach is a viable option when the goal is to improve or determine the quality of service of a soft real-time application.
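The abstract combines tabu search with a discrete event simulation that evaluates each candidate priority assignment. The skeleton below shows that structure in Java; the swap neighbourhood, the tabu-list handling, and the stubbed simulation objective are illustrative assumptions rather than the system model used in the thesis.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.List;
import java.util.function.ToDoubleFunction;

/** Skeleton of a tabu-search loop over task priority assignments, where the
 *  objective is evaluated by a (here stubbed) discrete event simulation. */
public class PrioritySearch {

    /** Neighbourhood move: swap the priorities of two tasks. */
    static List<Integer> swapped(List<Integer> prio, int i, int j) {
        List<Integer> copy = new ArrayList<>(prio);
        Collections.swap(copy, i, j);
        return copy;
    }

    static List<Integer> tabuSearch(List<Integer> initial,
                                    ToDoubleFunction<List<Integer>> simulatedLatency,
                                    int iterations, int tabuSize) {
        List<Integer> current = initial, best = initial;
        double bestCost = simulatedLatency.applyAsDouble(best);
        Deque<List<Integer>> tabu = new ArrayDeque<>();
        for (int it = 0; it < iterations; it++) {
            List<Integer> bestNeighbour = null;
            double bestNeighbourCost = Double.MAX_VALUE;
            for (int i = 0; i < current.size(); i++) {
                for (int j = i + 1; j < current.size(); j++) {
                    List<Integer> cand = swapped(current, i, j);
                    if (tabu.contains(cand)) continue;          // skip tabu moves
                    double cost = simulatedLatency.applyAsDouble(cand);
                    if (cost < bestNeighbourCost) {
                        bestNeighbourCost = cost;
                        bestNeighbour = cand;
                    }
                }
            }
            if (bestNeighbour == null) break;
            current = bestNeighbour;
            tabu.addLast(current);
            if (tabu.size() > tabuSize) tabu.removeFirst();
            if (bestNeighbourCost < bestCost) { bestCost = bestNeighbourCost; best = current; }
        }
        return best;
    }

    public static void main(String[] args) {
        // Toy objective standing in for a discrete event simulation run.
        ToDoubleFunction<List<Integer>> fakeSim =
                p -> p.get(0) * 3.0 + p.get(1) * 2.0 + p.get(2);
        List<Integer> best = tabuSearch(List.of(2, 1, 0), fakeSim, 50, 10);
        System.out.println("best priority order: " + best);
    }
}
```

Because each objective evaluation is a full simulation run, evaluations are expensive, which is precisely the trade-off the abstract notes against purely analytical approaches.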
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Alorf, Abdulaziz Abdullah. "Primary/Soft Biometrics: Performance Evaluation and Novel Real-Time Classifiers". Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/96942.

Texto completo
Resumen
The relevance of faces in our daily lives is indisputable. We learn to recognize faces as newborns, and faces play a major role in interpersonal communication. The spectrum of computer vision research on face analysis includes, but is not limited to, face detection and facial attribute classification, which are the focus of this dissertation. The face is a primary biometric because by itself it reveals the subject's identity, while facial attributes (such as hair color and eye state) are soft biometrics because by themselves they do not reveal the subject's identity. In this dissertation, we proposed a real-time model for classifying 40 facial attributes, which preprocesses faces and then extracts 7 types of classical and deep features. These features were fused together to train 3 different classifiers. Our proposed model yielded an average accuracy of 91.93%, outperforming 7 state-of-the-art models. We also developed a real-time model for classifying the states of human eyes and mouth (open/closed), and the presence/absence of eyeglasses in the wild. Our method begins by preprocessing a face by cropping the regions of interest (ROIs), and then describing them using RootSIFT features. These features were used to train a nonlinear support vector machine for each attribute. Our eye-state classifier achieved the top performance, while our mouth-state and glasses classifiers were tied as the top performers with deep learning classifiers. We also introduced a new facial attribute related to Middle Eastern headwear (called igal) along with its detector. Our proposed idea was to detect the igal using a linear multiscale SVM classifier with a HOG descriptor. Thereafter, false positives were discarded using dense SIFT filtering, bag-of-visual-words decomposition, and nonlinear SVM classification. Due to the similarity in real-life applications, we compared the igal detector with state-of-the-art face detectors, where the igal detector significantly outperformed the face detectors with the lowest false positives. We also fused the igal detector with a face detector to improve the detection performance. Face detection is the first process in any facial attribute classification pipeline. As a result, we reported a novel study that evaluates the robustness of current face detectors based on: (1) diffraction blur, (2) image scale, and (3) the IoU classification threshold. This study enables users to pick a robust face detector for their intended applications.
Doctor of Philosophy
The relevance of faces in our daily lives is indisputable. We learn to recognize faces as newborns, and faces play a major role in interpersonal communication. Faces probably represent the most accurate biometric trait in our daily interactions. It is therefore not surprising that so much effort from computer vision researchers has been invested in the analysis of faces. The automatic detection and analysis of faces within images has therefore received much attention in recent years. The spectrum of computer vision research on face analysis includes, but is not limited to, face detection and facial attribute classification, which are the focus of this dissertation. The face is a primary biometric because by itself it reveals the subject's identity, while facial attributes (such as hair color and eye state) are soft biometrics because by themselves they do not reveal the subject's identity. Soft biometrics have many uses in the field of biometrics: (1) they can be utilized in a fusion framework to strengthen the performance of a primary biometric system. For example, fusing a face with voice accent information can boost the performance of face recognition. (2) They can also be used to create qualitative descriptions about a person, such as being an "old bald male wearing a necktie and eyeglasses." Face detection and facial attribute classification are not easy problems because of many factors, such as image orientation, pose variation, clutter, facial expressions, occlusion, and illumination, among others. In this dissertation, we introduced novel techniques to classify more than 40 facial attributes in real time. Our techniques followed the general facial attribute classification pipeline, which begins by detecting a face and ends by classifying facial attributes. We also introduced a new facial attribute related to Middle Eastern headwear along with its detector. The new facial attribute was fused with a face detector to improve the detection performance. In addition, we proposed a new method to evaluate the robustness of face detection, which is the first process in the facial attribute classification pipeline. Detecting the states of human facial attributes in real time is highly desired by many applications. For example, the real-time detection of a driver's eye state (open/closed) can prevent severe accidents. These systems are usually called driver drowsiness detection systems. For classifying 40 facial attributes, we proposed a real-time model that preprocesses faces by localizing facial landmarks to normalize faces, and then crops them based on the intended attribute. The face was cropped only if the intended attribute was inside the face region. After that, 7 types of classical and deep features were extracted from the preprocessed faces. Lastly, these 7 types of feature sets were fused together to train three different classifiers. Our proposed model yielded an average accuracy of 91.93%, outperforming 7 state-of-the-art models. It also achieved state-of-the-art performance in classifying 14 out of 40 attributes. We also developed a real-time model that classifies the states of three human facial attributes: (1) eyes (open/closed), (2) mouth (open/closed), and (3) eyeglasses (present/absent). Our proposed method consisted of six main steps: (1) In the beginning, we detected the human face. (2) Then we extracted the facial landmarks. (3) Thereafter, we normalized the face, based on the eye location, to the full frontal view.
(4) We then extracted the regions of interest (i.e., the regions of the mouth, left eye, right eye, and eyeglasses). (5) We extracted low-level features from each region and then described them. (6) Finally, we learned a binary classifier for each attribute to classify it using the extracted features. Our developed model achieved 30 FPS with a CPU-only implementation, and our eye-state classifier achieved the top performance, while our mouth-state and glasses classifiers were tied as the top performers with deep learning classifiers. We also introduced a new facial attribute related to Middle Eastern headwear along with its detector. After that, we fused it with a face detector to improve the detection performance. The traditional Middle Eastern headwear that men usually wear consists of two parts: (1) the shemagh or keffiyeh, which is a scarf that covers the head and usually has checkered and pure white patterns, and (2) the igal, which is a band or cord worn on top of the shemagh to hold it in place. The shemagh causes many unwanted effects on the face; for example, it usually occludes some parts of the face and adds dark shadows, especially near the eyes. These effects substantially degrade the performance of face detection. To improve the detection of people who wear the traditional Middle Eastern headwear, we developed a model that can be used as a head detector or combined with current face detectors to improve their performance. Our igal detector consists of two main steps: (1) learning a binary classifier to detect the igal and (2) refining the classifier by removing false positives. Due to the similarity in real-life applications, we compared the igal detector with state-of-the-art face detectors, where the igal detector significantly outperformed the face detectors with the lowest false positives. We also fused the igal detector with a face detector to improve the detection performance. Face detection is the first process in any facial attribute classification pipeline. As a result, we reported a novel study that evaluates the robustness of current face detectors based on: (1) diffraction blur, (2) image scale, and (3) the IoU classification threshold. This study enables users to pick a robust face detector for their intended applications. Biometric systems that use face detection suffer from large performance fluctuations. For example, users of biometric surveillance systems that utilize face detection sometimes notice that state-of-the-art face detectors do not show good performance compared with outdated detectors. Although state-of-the-art face detectors are designed to work in the wild (i.e., no need to retrain, revalidate, and retest), they still depend heavily on the datasets they were originally trained on. This condition in turn leads to variation in the detectors' performance when they are applied to a different dataset or environment. To overcome this problem, we developed a novel optics-based blur simulator that automatically introduces the diffraction blur at different image scales/magnifications. Then we evaluated different face detectors on the output images using different IoU thresholds. Users first choose their own values for these three settings and then run our model to identify the efficient face detector under the selected settings. That means our proposed model enables users of biometric systems to pick an efficient face detector based on their system setup.
Our results showed that sometimes outdated face detectors outperform state-of-the-art ones under certain settings and vice versa.
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Kunert, Kristina. "Architectures and Protocols for Performance Improvements of Real-Time Networks". Doctoral thesis, Högskolan i Halmstad, Inbyggda system (CERES), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-14082.

Texto completo
Resumen
When designing architectures and protocols for data traffic requiring real-time services, one of the major design goals is to guarantee that traffic deadlines can be met. However, many real-time applications also have additional requirements such as high throughput, high reliability, or energy efficiency. High-performance embedded systems communicating heterogeneous traffic with high bandwidth and strict timing requirements are in need of more efficient communication solutions, while wireless industrial applications, communicating control data, require support of reliability and guarantees of real-time predictability at the same time. To meet the requirements of high-performance embedded systems, this thesis work proposes two multi-wavelength high-speed passive optical networks. To enable reliable wireless industrial communications, a framework incorporating carefully scheduled retransmissions is developed. All solutions are based on a single-hop star topology, predictable Medium Access Control algorithms and Earliest Deadline First scheduling, centrally controlled by a master node. Further, real-time schedulability analysis is used as admission control policy to provide delay guarantees for hard real-time traffic. For high-performance embedded systems an optical star network with an Arrayed Waveguide Grating placed in the centre is suggested. The design combines spatial wavelength reuse with fixed-tuned and tuneable transceivers in the end nodes, enabling simultaneous transmission of both control and data traffic. This, in turn, permits efficient support of heterogeneous traffic with both hard and soft real-time constraints. By analyzing traffic dependencies in this multichannel network, and adapting the real-time schedulability analysis to incorporate these traffic dependencies, a considerable increase of the possible guaranteed throughput for hard real-time traffic can be obtained. Most industrial applications require using existing standards such as IEEE 802.11 or IEEE 802.15.4 for interoperability and cost efficiency. However, these standards do not provide predictable channel access, and thus real-time guarantees cannot be given. A framework is therefore developed, combining transport layer retransmissions with real-time analysis admission control, which has been adapted to consider retransmissions. It can be placed on top of many underlying communication technologies, exemplified in our work by the two aforementioned wireless standards. To enable a higher data rate than pure IEEE 802.15.4, but still maintaining its energy saving properties, two multichannel network architectures based on IEEE 802.15.4 and encompassing the framework are designed. The proposed architectures are evaluated in terms of reliability, utilization, delay, complexity, scalability and energy efficiency and it is concluded that performance is enhanced through redundancy in the time and frequency domains.
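The abstract uses real-time schedulability analysis as the admission control policy under Earliest Deadline First scheduling. As a minimal illustration of that principle, the sketch below admits a new periodic message stream only if the classic EDF utilisation bound still holds; the thesis applies a richer analysis (deadlines, retransmissions, traffic dependencies), so this is a simplified stand-in under assumed stream parameters.

```java
/** Minimal illustration of schedulability-based admission control: a new
 *  periodic message stream is admitted only if the classic EDF utilisation
 *  bound (total utilisation <= 1) still holds after adding it. */
public class EdfAdmission {
    static final class Stream {
        final double transmissionTime; // worst-case time on the medium
        final double period;
        Stream(double c, double t) { transmissionTime = c; period = t; }
    }

    /** Admit the candidate if total utilisation including it stays <= 1. */
    static boolean admit(java.util.List<Stream> admitted, Stream candidate) {
        double u = candidate.transmissionTime / candidate.period;
        for (Stream s : admitted) u += s.transmissionTime / s.period;
        return u <= 1.0;
    }

    public static void main(String[] args) {
        java.util.List<Stream> admitted = new java.util.ArrayList<>();
        admitted.add(new Stream(2, 10));   // utilisation 0.20
        admitted.add(new Stream(5, 20));   // utilisation 0.25
        System.out.println(admit(admitted, new Stream(4, 10)));  // total 0.85 -> true
        System.out.println(admit(admitted, new Stream(8, 10)));  // total 1.25 -> false
    }
}
```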
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Scholz, Jason B. "Real-time performance estimation and optimization of digital communication links /". Title page, contents and abstract only, 1992. http://web4.library.adelaide.edu.au/theses/09PH/09phs368.pdf.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Sylve, Joseph T. "Towards Real-Time Volatile Memory Forensics: Frameworks, Methods, and Analysis". ScholarWorks@UNO, 2017. http://scholarworks.uno.edu/td/2359.

Texto completo
Resumen
Memory forensics (or memory analysis) is a relatively new approach to digital forensics that deals exclusively with the acquisition and analysis of volatile system memory. Because each function performed by an operating system must utilize system memory, analysis of this memory can often lead to a treasure trove of useful information for forensic analysts and incident responders. Today’s forensic investigators are often subject to large case backlogs, and incident responders must be able to quickly identify the source and cause of security breaches. In both these cases time is a critical factor. Unfortunately, today’s memory analysis tools can take many minutes or even hours to perform even simple analysis tasks. This problem will only become more prevalent as RAM prices continue to drop and systems with very large amounts of RAM become more common. Due to the volatile nature of data resident in system RAM it is also desirable for investigators to be able to access non-volatile copies of system RAM that may exist on a device’s hard drive. Such copies are often created by operating systems when a system is being suspended and placed into a power safe mode. This dissertation presents work on improving the speed of memory analysis and the access to non-volatile copies of system RAM. Specifically, we propose a novel memory analysis framework that can provide access to valuable artifacts orders of magnitude faster than existing tools. We also propose two new analysis techniques that can provide faster and more resilient access to important forensic artifacts. Further, we present the first analysis of the hibernation file format used in modern versions of Windows. This work allows access to evidence in non-volatile copies of system RAM that were not previously able to be analyzed. Finally, we propose future enhancements to our memory analysis framework that should address limitations with the current design. Taken together, this dissertation represents substantial work towards advancing the field of memory forensics.
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Godavarthi, Venkata Sridivya. "Determination of Single Pole Breaker Reclose Time and System Performance Using Real Time Simulation". ScholarWorks@UNO, 2017. http://scholarworks.uno.edu/td/2299.

Texto completo
Resumen
This thesis investigates single pole reclosing in a series-capacitor-compensated line. An algorithm is developed to determine the optimal dead time required for single pole reclosing of circuit breakers and to reduce the randomness of the reclosing time. The algorithm considers system and fault conditions, voltage zero crossing, arcing, and the IEEE C37.104-2012 standard de-ionization time. This study also addresses difficulties of single pole reclose operation such as over-voltages on the line, secondary arc extinguishing time, dead time, over-voltages across the series capacitor, and negative sequence current. The system performance is evaluated using a set of metrics based on those operational difficulties. Methods used in industry, such as a shunt reactor with a neutral reactor, surge arresters, and MOVs, are modelled and simulated to capture their effect on these difficulties. A comparative analysis is made to rank the effectiveness of each element against the difficulties in operating single pole reclosing of circuit breakers.
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Baldwin, Rusty Olen. "Improving the Real-time Performance of a Wireless Local Area Network". Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/28113.

Texto completo
Resumen
This research considers the transmission of real-time data within a wireless local area network (WLAN). Exact and approximate analytic network evaluation techniques are examined. The suitability of using a given technique in a particular situation is discussed. Simulation models are developed to study the performance of our protocol RT-MAC (real-time medium access control). RT-MAC is a novel, simple, and elegant MAC protocol for transmitting real-time data in point-to-point ad hoc WLANs. Our enhancement of IEEE 802.11, RT-MAC, achieves dramatic reductions in mean delay, missed deadlines, and packet collisions by selectively discarding packets and sharing station state information. For example, in a 50 station network with a normalized offered load of 0.7, mean delay is reduced from more than 14 seconds to less than 45 ms, late packets are reduced from 76% to less than 1%, and packet collisions are reduced from 36% to less than 1%. Stations using RT-MAC are interoperable with stations using IEEE 802.11. In networks with both RT-MAC and IEEE 802.11 stations, significant performance improvements were seen even when more than half of the stations in the network were not RT-MAC stations. The effect of the wireless channel and its impact on the ability of a WLAN to meet packet deadlines is evaluated. It is found that, in some cases, other factors such as the number of stations in the network and the offered load are more significant than the condition of the wireless channel. Regression models are developed from simulation data to predict network behavior in terms of throughput, mean delay, missed deadline ratio, and collision ratio. Telemetry, avionics, and packetized voice traffic models are considered. The applicability of this research is not limited to real-time wireless networks. Indeed, the collision reduction algorithm of RT-MAC is independent of the data being transported. Furthermore, RT-MAC would perform equally well in wired networks. Incorporating the results of this research into existing protocols will result in immediate and dramatic improvements in network performance.
Ph. D.
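One of the two mechanisms the abstract credits for RT-MAC's gains is selectively discarding packets whose deadlines can no longer be met, so that stale data never competes for the channel. A minimal queue-level sketch of that idea is shown below; the field names and queue handling are illustrative assumptions, not the RT-MAC specification.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Sketch of deadline-based packet discarding: before transmission, packets
 *  whose deadline has already passed are dropped rather than sent, since a
 *  late real-time packet has no value. */
public class DeadlineQueue {
    static final class Packet {
        final long deadlineMillis;
        final byte[] payload;
        Packet(long deadlineMillis, byte[] payload) {
            this.deadlineMillis = deadlineMillis;
            this.payload = payload;
        }
    }

    private final Deque<Packet> queue = new ArrayDeque<>();

    void enqueue(Packet p) { queue.addLast(p); }

    /** Return the next packet still worth transmitting, discarding stale ones. */
    Packet nextTransmittable(long nowMillis) {
        while (!queue.isEmpty()) {
            Packet head = queue.peekFirst();
            if (head.deadlineMillis < nowMillis) {
                queue.removeFirst();           // deadline missed: discard, don't transmit
            } else {
                return queue.removeFirst();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        DeadlineQueue q = new DeadlineQueue();
        long now = System.currentTimeMillis();
        q.enqueue(new Packet(now - 5, new byte[] {1}));   // already late
        q.enqueue(new Packet(now + 50, new byte[] {2}));  // still valid
        Packet p = q.nextTransmittable(now);
        System.out.println(p == null ? "nothing to send"
                : "sending packet with deadline " + p.deadlineMillis);
    }
}
```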
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Powell, Richard y Jeff Kuhn. "HARDWARE- VS. SOFTWARE-DRIVEN REAL-TIME DATA ACQUISITION". International Foundation for Telemetering, 2000. http://hdl.handle.net/10150/608291.

Texto completo
Resumen
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California
There are two basic approaches to developing data acquisition systems. The first is to buy or develop acquisition hardware and to then write software to input, identify, and distribute the data for processing, display, storage, and output to a network. The second is to design a system that handles some or all of these tasks in hardware instead of software. This paper describes the differences between software-driven and hardware-driven system architectures as applied to real-time data acquisition systems. In explaining the characteristics of a hardware-driven system, a high-performance real-time bus system architecture developed by L-3 will be used as an example. This architecture removes the bottlenecks and unpredictability that can plague software-driven systems when applied to complex real-time data acquisition applications. It does this by handling the input, identification, routing, and distribution of acquired data without software intervention.
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Ayata, Mesut. "Effect Of Some Software Design Patterns On Real Time Software Performance". Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/2/12612001/index.pdf.

Texto completo
Resumen
In this thesis, the effects of some software design patterns on real-time software performance are investigated. In real-time systems, performance requirements are critical. Real-time system developers usually use functional languages to meet these requirements. Using an object-oriented language may be expected to reduce performance. However, if suitable software design patterns are applied carefully, the reduction in performance can be avoided. In this thesis, appropriate real-time software performance metrics are selected and used to measure the performance of real-time software systems.
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Opie, Timothy Tristram. "Creation of a real-time granular synthesis instrument for live performance". Thesis, Queensland University of Technology, 2003. https://eprints.qut.edu.au/15865/1/Timothy_Opie_Thesis.pdf.

Texto completo
Resumen
This thesis explores how granular synthesis can be used in live performances. The early explorations of granular synthesis are first investigated, leading up to modern trends of electronic performance involving granular synthesis. Using this background it sets about to create a granular synthesis instrument that can be used for live performances in a range of different settings, from a computer quartet, to a flute duet. The instrument, an electronic fish called the poseidon, is documented from the creation and preparation stages right through to performance.
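The record above describes building a granular synthesis instrument for live performance. As a generic illustration of the core technique, the sketch below scatters short Hann-windowed grains from a source buffer into an output buffer; the parameters (grain length, grain count) are illustrative and the sketch bears no relation to the actual design of the poseidon instrument.

```java
/** Minimal sketch of granular synthesis: short windowed "grains" are read
 *  from a source buffer and summed into the output stream. */
public class GrainSketch {
    /** Render one grain: a Hann-windowed slice of the source starting at offset. */
    static void addGrain(float[] source, int offset, int grainLength,
                         float[] output, int outPos) {
        for (int i = 0; i < grainLength && outPos + i < output.length; i++) {
            double window = 0.5 * (1 - Math.cos(2 * Math.PI * i / (grainLength - 1)));
            output[outPos + i] += (float) (window * source[(offset + i) % source.length]);
        }
    }

    public static void main(String[] args) {
        float[] source = new float[44100];
        for (int i = 0; i < source.length; i++) {
            source[i] = (float) Math.sin(2 * Math.PI * 220 * i / 44100.0); // 220 Hz test tone
        }
        float[] output = new float[44100];
        java.util.Random rng = new java.util.Random(42);
        for (int g = 0; g < 200; g++) {       // scatter 200 grains of ~23 ms each
            addGrain(source, rng.nextInt(source.length), 1024,
                     output, rng.nextInt(output.length - 1024));
        }
        System.out.println("rendered " + output.length + " samples of granular texture");
    }
}
```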
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Opie, Timothy Tristram. "Creation of a Real-Time Granular Synthesis Instrument for Live Performance". Queensland University of Technology, 2003. http://eprints.qut.edu.au/15865/.

Texto completo
Resumen
This thesis explores how granular synthesis can be used in live performances. The early explorations of granular synthesis are first investigated, leading up to modern trends of electronic performance involving granular synthesis. Using this background it sets about to create a granular synthesis instrument that can be used for live performances in a range of different settings, from a computer quartet, to a flute duet. The instrument, an electronic fish called the poseidon, is documented from the creation and preparation stages right through to performance.
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Yang, J. "The performance ecosystem : a model for music composition through real-time, interactive performance systems". Thesis, Queen's University Belfast, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.677856.

Texto completo
Resumen
This thesis and portfolio of compositions are the result of four years of research conducted through compositional practice and theoretical reflection. The thesis examines in some depth what the art and craft of composition could be in the context of information-age paradigms of communication and interaction. It provides an investigation into compositional approaches fitting to contemporary means of music-making. The focus of this project is composition through real-time, interactive performance systems; this subject is examined within the wider context of network performance, musical interaction and design, live electronics, live scoring, spatial consideration in composition, and new notational practices. This thesis presents the notion of a performance ecosystem, a ground from which a work of art can emerge through the act of performance. The performance ecosystem is conceived of as a self-generating environment that engages a process of genetic replication to expand the system in scope and complexity. Within these performance ecosystems, actions and interactions are generative, and the work is negotiated in real time between multiple, independent yet interdependent actors. The product of this activity is not only the ensuing sounds, movements, and images that are created, but also the system, with all of its infrastructure and possibilities, and the performance act, as a combination of negotiations and explorations, through which each performer and experiencer partakes in a literal journey through the work. I believe that the performance ecosystem presents a satisfying framework for artistic creation in the context of contemporary constructs of creativity, thought, relationship and being, and the way they are represented, experienced and engendered.
Los estilos APA, Harvard, Vancouver, ISO, etc.