Dissertations / Theses on the topic 'Software architecture – Reliability'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 15 dissertations / theses for your research on the topic 'Software architecture – Reliability.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Perugupalli, Ranganath. "Empirical assessment of architecture-based reliability of open-source software." Morgantown, W. Va. : [West Virginia University Libraries], 2004. https://etd.wvu.edu/etd/controller.jsp?moduleName=documentdata&jsp%5FetdId=3677.

Abstract:
Thesis (M.S.)--West Virginia University, 2004.
Title from document title page. Document formatted into pages; contains x, 70 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 66-70).
2

Zhu, Liming (Computer Science & Engineering, Faculty of Engineering, UNSW). "Software architecture evaluation for framework-based systems." Awarded by: University of New South Wales, Computer Science and Engineering, 2007. http://handle.unsw.edu.au/1959.4/28250.

Abstract:
Complex modern software is often built using existing application frameworks and middleware frameworks. These frameworks provide useful common services, while simultaneously imposing architectural rules and constraints. Existing software architecture evaluation methods do not explicitly consider the implications of these frameworks for software architecture. This research extends scenario-based architecture evaluation methods by incorporating framework-related information into different evaluation activities. I propose four techniques, each targeting a different activity within a scenario-based architecture evaluation method. 1) Scenario development: a new technique was designed to extract general scenarios and tactics from framework-related architectural patterns, intended to complement the current scenario development process. Its feasibility was validated through a case study, and significant improvements in scenario quality were observed in a controlled experiment conducted by a colleague. 2) Architecture representation: a new metrics-driven technique was created to reconstruct software architecture in a just-in-time fashion. This technique was validated in a case study and significantly improved the efficiency of architecture representation in a complex environment. 3) Attribute-specific analysis (performance only): a model-driven approach to performance measurement was applied by decoupling framework-specific information from performance testing requirements. This technique was validated on two platforms (J2EE and Web Services) through a number of case studies. It leads to benchmarks that produce more representative measures of the eventual application, and it reduces the complexity behind the load testing suite and the framework-specific performance data collecting utilities. 4) Trade-off and sensitivity analysis: a new technique was designed to improve the Analytic Hierarchy Process (AHP) for trade-off and sensitivity analysis during a framework selection process. This approach was validated in a case study using data from a commercial project. It can identify: 1) the trade-offs implied by an architecture alternative, along with their magnitude; 2) the most critical decisions in the overall decision process; and 3) the sensitivity of the final decision and its capability for handling changes in quality attribute priorities.
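The AHP step mentioned in point 4 reduces to a small calculation: derive priority weights from a pairwise comparison matrix. Below is a minimal sketch of that step, using the common geometric-mean approximation; the criteria and judgment values are hypothetical, not taken from the thesis.

```python
# A minimal AHP priority-vector sketch (geometric-mean approximation).
# The criteria and the comparison matrix below are hypothetical.
import math

criteria = ["performance", "reliability", "modifiability"]
# M[i][j]: how strongly criterion i is preferred over j (Saaty 1-9 scale).
M = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]

# Geometric mean of each row, normalized so the weights sum to 1.
gm = [math.prod(row) ** (1 / len(row)) for row in M]
weights = [g / sum(gm) for g in gm]

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
```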
3

Dimitrov, Martin. "Architectural support for improving system hardware/software reliability." Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4533.

Abstract:
It is a great challenge to build reliable computer systems with unreliable hardware and buggy software. On one hand, software bugs account for as much as 40% of system failures and incur a high cost on the US economy, estimated at $59.5B a year. On the other hand, under current technology scaling trends, transient faults (also known as soft errors) in the underlying hardware are predicted to grow at least in proportion to the number of devices being integrated, which further exacerbates the problem of system reliability. We propose several methods to improve system reliability, both by detecting and correcting soft errors and by facilitating software debugging. In our first approach, we detect instruction-level anomalies during program execution. The anomalies can be used to detect and repair soft errors, or can be reported to the programmer to aid software debugging. In our second approach, we improve anomaly detection for software debugging by detecting different types of anomalies and by removing false positives. While the anomalies reported by our first two methods are helpful in debugging single-threaded programs, they do not address concurrency bugs in multi-threaded programs. In our third approach, we propose a new debugging primitive which exposes the non-deterministic behavior of parallel programs and facilitates the debugging process. Our idea is to generate a time-ordered trace of events, such as function calls/returns and memory accesses, in different threads. In our experience, exposing this time-ordered event information to the programmer is highly beneficial for reasoning about the root causes of concurrency bugs.
ID: 028916717. System requirements: World Wide Web browser and PDF reader. Mode of access: World Wide Web. Thesis (Ph.D.)--University of Central Florida, 2010. Includes bibliographical references (p. 110-119).
Ph.D. (Doctorate), School of Electrical Engineering and Computer Science, College of Engineering and Computer Science.
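The time-ordered tracing idea in the abstract above can be illustrated with a short sketch: merge per-thread event logs by timestamp so the programmer sees one interleaved history. The event format and sample data here are hypothetical, not the thesis's actual trace format.

```python
# A minimal sketch of producing a time-ordered event trace from
# per-thread logs. Event fields and sample data are hypothetical.
import heapq

thread_logs = {
    "T1": [(10, "call foo"), (12, "write x"), (30, "ret foo")],
    "T2": [(11, "call bar"), (13, "read x"), (14, "ret bar")],
}

# Merge the already time-sorted per-thread logs by timestamp.
merged = heapq.merge(
    *[[(ts, tid, ev) for ts, ev in log] for tid, log in thread_logs.items()]
)
for ts, tid, ev in merged:
    print(f"t={ts:3d} [{tid}] {ev}")
```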
4

Patel, Krutartha (Computer Science & Engineering, Faculty of Engineering, UNSW). "Hardware-software design methods for security and reliability of MPSoCs." Awarded by: University of New South Wales, Computer Science & Engineering, 2009. http://handle.unsw.edu.au/1959.4/44854.

Abstract:
Security of a Multi-Processor System on Chip (MPSoC) is an emerging area of concern in embedded systems. MPSoC security is jeopardized by Code Injection attacks, which are the most common type of software attack and have plagued single processor systems. The design of MPSoCs must therefore incorporate security as one of its primary objectives. Code Injection attacks exploit vulnerabilities in "trusted" and legacy code. An architecture with a dedicated monitoring processor (MONITOR) is employed to simultaneously supervise the application processors on an MPSoC. The program code in the application processors is divided into basic blocks, which are statically instrumented with special instructions that allow communication with the MONITOR at runtime. The MONITOR verifies the execution of all the processors at runtime using control flow checks and either a timing or an instruction count check. This thesis proposes a monitoring system called SOFTMON, a design methodology called SHIELD, a design flow called LOCS and an architectural framework called CUFFS for detecting Code Injection attacks. SOFTMON, a software monitoring system, uses a software algorithm in the MONITOR. SOFTMON incurs limited area overheads; however, its runtime performance overhead is quite high. SHIELD, an extension of SOFTMON, overcomes the limitation of high runtime overhead using a MONITOR that is predominantly hardware based. LOCS uses only one special instruction per basic block compared to two in SOFTMON and SHIELD. Additionally, profile information is generated for all the basic blocks in all the application processors, allowing the MPSoC designer to tune the design by increasing or decreasing the frequency of loop basic blocks. The SOFTMON, SHIELD and LOCS approaches can only detect attacks if the application processors communicate with the MONITOR; CUFFS detects attacks even without such communication. CUFFS relies on the exact number of instructions in basic blocks to determine an attack, rather than the time-frame based measures used in SOFTMON, SHIELD and LOCS. The lowest runtime performance overhead was achieved by LOCS (worst case of 37.5%), while the SOFTMON monitoring system had the lowest area overheads, about 25%. The CUFFS approach employed an active MONITOR and hence detected a greater range of attacks. The CUFFS framework also detects bit flip errors (reliability errors) in the control flow instructions of the application processors on an MPSoC, catching nearly 70% of all such errors. Additionally, a modified CUFFS approach is proposed to ensure reliable inter-processor communication on an MPSoC, using a hardware based checksum approach that incurs a runtime performance overhead of up to 25% and negligible area overheads compared to CUFFS. Thus, the approaches proposed in this thesis equip an MPSoC designer with tools to embed security features during an MPSoC's design phase. Incorporating security measures at the processor design level provides security against software attacks in MPSoCs and incurs manageable runtime, area and code-size overheads.
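The instruction-count check that CUFFS relies on can be illustrated in a few lines: the monitor compares the instruction count observed for each basic block against a statically recorded count and flags any deviation. This is only a sketch of the idea; the block IDs and counts are hypothetical, and the real MONITOR is a hardware unit.

```python
# A minimal sketch of a CUFFS-style instruction-count check.
# Block IDs and counts below are hypothetical.
expected_counts = {"bb0": 12, "bb1": 7, "bb2": 19}  # from static analysis

def check_block(block_id: str, runtime_count: int) -> bool:
    """Flag a possible code-injection attack if the executed
    instruction count deviates from the statically recorded count."""
    expected = expected_counts.get(block_id)
    if expected is None or runtime_count != expected:
        print(f"ALERT: anomaly in {block_id}: "
              f"expected {expected}, observed {runtime_count}")
        return False
    return True

check_block("bb1", 7)   # conforms
check_block("bb1", 9)   # deviation: possible injected instructions
```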
5

de Silva, Lakshitha R. "Towards controlling software architecture erosion through runtime conformance monitoring." Thesis, University of St Andrews, 2014. http://hdl.handle.net/10023/5220.

Abstract:
The software architecture of a system is often used to guide and constrain its implementation. While the code structure of an initial implementation is likely to conform to its intended architecture, its dynamic properties cannot always be fully checked until deployment. Routine maintenance and changing requirements can also lead to a deployed system deviating from this architecture over time. Dynamic architecture conformance checking plays an important part in ensuring that software architectures and corresponding implementations stay consistent with one another throughout the software lifecycle. However, runtime conformance checking strategies often force changes to the software, demand tight coupling between the monitoring framework and the application, impact performance, require manual intervention, and lack flexibility and extensibility, affecting their viability in practice. This thesis presents a dynamic conformance checking framework called PANDArch, which aims to address these issues. PANDArch is designed to be automated, pluggable, non-intrusive, performance-centric, extensible and tolerant of incomplete specifications. The thesis describes the concept and design principles behind PANDArch and its current implementation, which uses an architecture description language to specify architectures and Java as the target language. The framework is evaluated using three open source software products of different types. The results suggest that dynamic architectural conformance checking with the proposed features may be a viable option in practice.
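The core of such runtime conformance checking can be sketched simply: every observed cross-module call is checked against the dependencies the intended architecture allows. The module names and rules below are hypothetical and do not reflect PANDArch's actual interface.

```python
# A minimal sketch of dynamic conformance checking: observed calls
# are checked against architecturally allowed dependencies.
# Module names and rules are hypothetical.
allowed = {("ui", "service"), ("service", "data")}  # architecture spec

def on_call(caller_module: str, callee_module: str) -> None:
    """Invoked by an (assumed) instrumentation hook on each
    cross-module call; reports architecture violations."""
    if caller_module != callee_module and \
            (caller_module, callee_module) not in allowed:
        print(f"VIOLATION: {caller_module} -> {callee_module}")

on_call("ui", "service")   # conforms to the architecture
on_call("ui", "data")      # erosion: layer-skipping call
```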
6

Brosch, Franz [author], and R. Reussner [academic supervisor]. "Integrated Software Architecture-Based Reliability Prediction for IT Systems / Franz Brosch; Supervisor: R. Reussner." Karlsruhe: KIT-Bibliothek, 2012. http://d-nb.info/1024312801/34.

7

König, Johan. "Analyzing Substation Automation System Reliability using Probabilistic Relational Models and Enterprise Architecture." Doctoral thesis, KTH, Industriella informations- och styrsystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-145006.

Abstract:
Modern society is unquestionably heavily reliant on the supply of electricity, and the power system is therefore one of the most important infrastructures for future growth. However, the power system of today was designed for a stable radial flow of electricity from large power plants to the customers, not for the changes it is presently being exposed to, such as large-scale integration of electric vehicles, wind power plants, residential photovoltaic systems, etc. One aspect particularly exposed to these changes is the design of power system control and protection functionality. Problems occur when the flow of electricity changes from unidirectional and radial to bidirectional. Such a change requires a redesign of control and protection functionality as well as the introduction of new information and communication technology (ICT). To make matters worse, the closer the interaction between the power system and the ICT systems, the more complex the matter becomes from a reliability perspective. This problem is inherently cyber-physical, covering everything from system software to power cables and transformers, rather than the traditional reliability concern of focusing only on power system components. The contribution of this thesis is a framework for reliability analysis, utilizing system modeling concepts that support the industrial engineering issues that follow from the implementation of modern substation automation systems. The framework is based on a Bayesian probabilistic analysis engine represented by Probabilistic Relational Models (PRMs) in combination with an Enterprise Architecture (EA) modeling formalism. The gradual development of the framework is demonstrated through a number of application scenarios based on substation automation system configurations. This is a composite thesis consisting of seven papers. Paper 1 presents the framework combining EA, PRMs and Fault Tree Analysis (FTA). Paper 2 adds primary substation equipment to the framework. Paper 3 presents a mapping between modeling entities from the EA framework ArchiMate and substation automation system configuration objects from the IEC 61850 standard. Paper 4 introduces object definitions and relations coherent with the EA modeling formalism and suitable for the analysis framework. Paper 5 describes an extension of the analysis framework that adds logical operators to the probabilistic analysis engine. Paper 6 presents enhanced failure rates for software components, derived by studying failure logs, and an application of the framework to a utility substation automation system. Finally, Paper 7 describes the ability to utilize domain standards for coherent modeling of functions and their interrelations, and an application of the framework using software-tool support.
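The fault-tree end of such an analysis boils down to propagating failure probabilities through AND/OR gates. Below is a minimal sketch for a toy substation protection function, assuming independent failures; the components and probabilities are hypothetical.

```python
# A minimal fault-tree sketch: failure probability through AND/OR
# gates, assuming independent failures. Components and probabilities
# below are hypothetical, not from the thesis.
p_fail = {"ied_a": 0.02, "ied_b": 0.02, "station_bus": 0.01}

def gate_and(*ps):   # output fails only if all inputs fail (redundancy)
    out = 1.0
    for p in ps:
        out *= p
    return out

def gate_or(*ps):    # output fails if any input fails
    ok = 1.0
    for p in ps:
        ok *= (1 - p)
    return 1 - ok

# Protection fails if both redundant IEDs fail, or the station bus fails.
p_protection = gate_or(gate_and(p_fail["ied_a"], p_fail["ied_b"]),
                       p_fail["station_bus"])
print(f"P(protection function fails) = {p_protection:.5f}")
```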


8

Hassan, Ahmed. "Mining Software Repositories to Assist Developers and Support Managers." Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/1017.

Abstract:
This thesis explores mining the evolutionary history of a software system to support software developers and managers in their endeavors to build and maintain complex software systems. We introduce the idea of evolutionary extractors: specialized extractors that can recover the history of software projects from software repositories, such as source control systems. The challenges faced in building C-REX, an evolutionary extractor for the C programming language, are discussed. We examine the use of source control systems in industry and the quality of the recovered C-REX data through a survey of several software practitioners. Using the data recovered by C-REX, we develop several approaches and techniques to assist developers and managers in their activities. We propose Source Sticky Notes, which assist developers in understanding legacy software systems by attaching historical information to the dependency graph. We present the Development Replay approach, which estimates the benefits of adopting new software maintenance tools by reenacting the development history. We propose the Top Ten List, which assists managers in allocating testing resources to the subsystems most susceptible to faults. To assist managers in improving the quality of their projects, we present a complexity metric which quantifies the complexity of the changes to the code instead of the complexity of the source code itself. All presented approaches are validated empirically using data from several large open source systems. This work highlights the benefits of transforming software repositories from static record-keeping repositories into active repositories used by researchers to gain an empirically based understanding of software development, and by software practitioners to predict, plan and understand various aspects of their projects.
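The Top Ten List idea can be sketched as a simple ranking: count how often each subsystem was recently changed to fix a fault and surface the most fault-prone ones. The change records below are hypothetical, and the thesis's actual heuristics are richer.

```python
# A minimal Top-Ten-List sketch: rank subsystems by recent fault
# fixes recovered from the change history. Records are hypothetical.
from collections import Counter

# (subsystem, was_fault_fix) pairs recovered from the change history.
changes = [("net", True), ("net", True), ("fs", True),
           ("ui", False), ("fs", False), ("net", False)]

fix_counts = Counter(s for s, is_fix in changes if is_fix)
for subsystem, n in fix_counts.most_common(10):  # the "Top Ten"
    print(f"{subsystem}: {n} recent fault fixes")
```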
9

Doudalis, Ioannis. "Hardware assisted memory checkpointing and applications in debugging and reliability." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42700.

Abstract:
The problems of software debugging and system reliability/availability are among the most challenging problems the computing industry faces today, with direct impact on the development and operating costs of computing systems. A promising debugging technique that helps programmers identify and fix the causes of software bugs much more efficiently is bidirectional debugging, which enables the user to execute the program in "reverse", and a typical method used to recover a system after a fault is backwards error recovery, which restores the system to the last error-free state. Both reverse execution and backwards error recovery are enabled by creating memory checkpoints, which are used to restore the program/system to a prior point in time and re-execute until the point of interest. The checkpointing frequency is the primary factor that affects both the latency of reverse execution and the recovery time of the system; more frequent checkpoints reduce the necessary re-execution time. Frequent creation of checkpoints poses performance challenges, because of the increased number of memory reads and writes necessary for copying the modified system/program memory, and also because of the software interventions, additional synchronization, I/O, etc., needed to create a checkpoint. In this thesis I examine a number of different hardware accelerators whose role is to create frequent memory checkpoints in the background, at minimal performance overhead. For the purpose of reverse execution, I propose the HARE and Euripus hardware checkpoint accelerators. HARE and Euripus create different types of checkpoints and employ different methods for keeping track of modified memory. As a result, they have different hardware costs and provide different functionality, which directly affects the latency of reverse execution. For improving the availability of the system, I propose the Kyma hardware accelerator. Kyma enables the simultaneous creation of checkpoints at different frequencies, which allows the system to recover from multiple types of errors and tolerate variable error-detection latencies. The Kyma and Euripus hardware engines have similar architectures, but the functionality of the Kyma engine is optimized to further reduce performance overheads and improve the reliability of the system. The functionality of the Kyma and Euripus engines can be combined into a unified accelerator that serves the needs of both bidirectional debugging and system recovery.
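The checkpoint/rollback semantics underlying both reverse execution and backwards error recovery can be sketched in software, even though the thesis implements them as hardware accelerators. The sketch below only illustrates the semantics; the state and granularity are hypothetical.

```python
# A minimal software analogy for memory checkpointing: snapshot
# state and roll back to the most recent error-free checkpoint.
# The hardware accelerators do this copying in the background;
# this sketch only illustrates the semantics.
import copy

memory = {"a": 1, "b": 2}
checkpoints = []

def take_checkpoint():
    checkpoints.append(copy.deepcopy(memory))

def rollback():
    """Backwards error recovery: restore the last checkpoint."""
    memory.clear()
    memory.update(checkpoints.pop())

take_checkpoint()
memory["a"] = 99          # faulty update detected later
rollback()
print(memory)             # {'a': 1, 'b': 2}
```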
10

Mori, Fernando Maruyama. "Uma metodologia de desenvolvimento de diagnóstico guiado para veículos automotivos." Universidade Tecnológica Federal do Paraná, 2014. http://repositorio.utfpr.edu.br/jspui/handle/1/981.

Abstract:
External guided diagnostic tools are increasingly important to the aftermarket business of the automotive industry, mainly due to the extensive use of embedded systems in vehicles, which makes them more complex and difficult to diagnose. Currently, the techniques used to develop a guided diagnostic tool are strongly dependent on the designer's experience and are usually focused on parts and the vehicle's subsystems, allowing little flexibility and reduced information reuse. This work proposes a new methodology for the development of a guided diagnostic tool applied to the automotive industry, based on a three-tier software architecture composed of vehicle parts and components, diagnostic information and strategy, and a presentation layer. It allows great flexibility in designing a guided diagnostic tool for different vehicle models, parts OEMs and automotive systems. The proposed methodology has been applied to a case study at Volvo Trucks, showing the adaptation process to the three-tier software architecture as well as its impact on the development costs of the diagnostic tool.
11

Gruwell, Ammon Bradley. "High-Speed Programmable FPGA Configuration Memory Access Using JTAG." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6321.

Abstract:
Over the past couple of decades, Field Programmable Gate Arrays (FPGAs) have become increasingly useful in a variety of domains, due to their low cost and flexibility compared to custom ASICs. This increasing interest in FPGAs has driven the need for tools that both qualify and improve the reliability of FPGAs for applications where their reconfigurability makes them vulnerable to radiation upsets, such as aerospace environments. Such tools ideally work with a wide variety of devices, are highly programmable but simple to use, and perform tasks at relatively high speeds. Of the various FPGA configuration interfaces available, the Joint Test Action Group (JTAG) standard for serial communication is the most universally compatible interface, due to its use for verifying integrated circuits and testing printed circuit board connectivity. This universality makes it a good interface for tools seeking to access FPGA configuration memory. This thesis introduces a new tool architecture for high-speed, programmable JTAG access to FPGA configuration memory. The tool, called the JTAG Configuration Manager (JCM), is made up of a large C++ software library that runs on an embedded microprocessor, coupled with a hardware JTAG controller module implemented in programmable logic. The JCM software library allows for the development of custom JTAG communication of any kind, although this thesis focuses on applications related to FPGA reliability. The JCM hardware controller module allows these software-generated JTAG sequences to be streamed out at very high speeds. Together, the software and hardware provide the high speed and programmability that are important for many JTAG applications.
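The software side of such a tool generates JTAG sequences as streams of (TMS, TDI) values, one pair per TCK cycle. Below is a minimal sketch of one standard operation, shifting a value into the instruction register starting from Run-Test/Idle; it illustrates the kind of sequence the JCM software generates but is not the JCM API, and the IR value is hypothetical.

```python
# A minimal sketch of generating a JTAG sequence in software:
# (TMS, TDI) pairs that walk the TAP from Run-Test/Idle through
# Shift-IR and back. Not the JCM API; the IR value is hypothetical.
def shift_ir(bits):
    """Yield (tms, tdi) pairs, one per TCK cycle."""
    # Select-DR-Scan, Select-IR-Scan, Capture-IR, Shift-IR
    for tms in (1, 1, 0, 0):
        yield (tms, 0)
    for i, b in enumerate(bits):    # shift LSB first; last bit exits
        yield (1 if i == len(bits) - 1 else 0, b)
    yield (1, 0)                    # Exit1-IR -> Update-IR
    yield (0, 0)                    # Update-IR -> Run-Test/Idle

seq = list(shift_ir([0, 1, 0, 1]))  # hypothetical 4-bit IR value
print(seq)
```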
12

Franco, João Miguel da Costa Sousa. "Automated Reliability Prediction and Analysis from Software Architectures." Doctoral thesis, 2016. http://hdl.handle.net/10316/29594.

Abstract:
Doctoral thesis in Information Sciences and Technologies, presented to the Department of Informatics Engineering, Faculty of Sciences and Technology, University of Coimbra.
The quality of software is determined by how well it meets non-functional requirements such as performance, reliability, availability, maintainability and other '-ilities'. Depending on the application context, certain qualities are more critical to attain than others. For example, a web server processing large amounts of data should exhibit qualities related to performance, while software applied in a medical context must ensure that no human life is put at risk and should therefore treat safety as a quality requirement. From a software engineering perspective, quality requirements should be assessed throughout the software development life-cycle. At an early stage, quality assessment supports design decisions and promotes analysis of possible alternatives. During the implementation or testing stages, project managers can confirm that the developed product matches the design and assure that it will conform to the stakeholders' requirements. During evolutionary stages, architects can also compare different designs and decide on the most suitable solution, taking into account the desired quality attributes. Neglecting the assessment of quality requirements during the development of a software system may lead, sooner or later, to a product that fails to achieve one or more non-functional attributes desired by the stakeholder. Consequently, the development process returns to a previous phase to re-design, re-implement and re-test a new solution, which involves more time, effort and money and increases the cost of the whole software project. Software architecture plays an important role in achieving non-functional attributes. Designers use architectures to codify non-functional properties and employ good design practices. Architecture also makes it possible to maintain the traceability of the project through its lifespan and serves as a form of communication between stakeholders, developers and maintainers. A software architecture can be considered one of the first documents in the project to structure the system, since it describes the development plans and specifies rules, properties and architectural styles intended to attain specific quality attributes. For these reasons, existing techniques for assessing quality attributes use software architectures to obtain information about the system and provide accurate quantitative results. The problem addressed by this thesis resides in the fact that most current methods for assessing quality attributes from a software architecture are still performed manually. To quantitatively assess an architecture's quality attribute, designers have to build mathematical models through manual tasks and rebuild them for every change performed on the architecture. Like any other manual task, building such models is error-prone, time-consuming and can be almost unfeasible for large and complex systems. With this in mind, this thesis proposes to fill a gap in research by investigating a method that automatically assesses reliability as a quality attribute of a software architecture. In particular, we exploit the formalisms of Architecture Description Languages (ADLs) to automatically generate mathematical models expressing the reliability behavior of a system. We then extend the notion of 'automated assessment' to perform a thorough analysis that identifies the architectural weak points affecting the system. This analysis aims to provide architects with information about reliability improvements and to suggest alternatives. With the goal of assisting architects in the design process, we implemented a plugin integrated into an ADL design tool, making our automated approach available for architects to test and analyze their designs with respect to reliability. In addition, we showed the different application contexts of our approach by including it in the reasoning process of self-adaptive systems; the results showed an improvement in overall system quality compared to traditional planning approaches. To conclude, we validated our method through a set of experiments comparing it with methods that assess reliability manually. In this work we pursue the goal of contributing a set of methods that support practitioners and researchers in avoiding, preventing and detecting undesired or unfeasible architectural designs. Moreover, we intend to promote the development of software with better quality and to assure that it meets the desired quality requirements during the development process.
FCT - SFRH/BD/89702/2012
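Automated architecture-based reliability prediction of this kind typically builds on models such as Cheung's classic Markov model: per-component reliabilities are combined with control-flow transition probabilities extracted from the architecture. Below is a minimal sketch with hypothetical values; it illustrates the class of model, not the thesis's actual tool.

```python
# A minimal Cheung-style reliability sketch: combine per-component
# reliabilities with control-flow transition probabilities.
# All values below are hypothetical.
import numpy as np

comps = ["A", "B", "C"]          # C leads to successful termination
R = np.diag([0.99, 0.97, 0.95])  # per-component reliabilities
P = np.array([[0.0, 0.7, 0.3],   # P[i][j]: control transfers i -> j
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])  # C terminates

# S = (I - R P)^-1 gives expected correct-execution visits; system
# reliability is reaching and correctly executing the final component.
S = np.linalg.inv(np.eye(3) - R @ P)
reliability = S[0, 2] * R[2, 2]
print(f"Predicted system reliability: {reliability:.4f}")  # ~0.9207
```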
13

Khan, Omer. "A hardware/software co-design architecture for thermal, power, and reliability management in chip multiprocessors." 2010. https://scholarworks.umass.edu/dissertations/AAI3397717.

Abstract:
Today's designs are being shaped by the challenges of nano-CMOS technologies: increased power density, rising junction temperatures, and rising rates of errors and device failures, which constrain the average rate of power dissipation, along with design technologies that limit peak power delivery. This thesis focuses on how to leverage the hardware and software abstraction layers of today's systems. Several low-level hardware details, such as power, hotspots, and faults, are tightly correlated with interactions within the system, including application and hardware behavior. Conventional approaches to tackling such problems come with additional costs and design complexity, and they are limited by the strict abstraction layers of today's systems. It is a well-known phenomenon that application programs exhibit repetitive and recognizable behavior during the course of their execution, and taking advantage of this time-varying behavior at runtime can enable fine-grain optimizations. This thesis proposes a low-overhead and scalable hardware-based program phase classification scheme, termed Instruction Type Vectors (ITV), which captures the execution frequencies of committed instruction types over profiling intervals and subsequently classifies and detects phases within threads. ITV reveals computational demands by exposing the instruction type distribution of phases to the system. This thesis proposes several applications of ITV. Based on the past history of the rate of change of temperature, an in-time response to thermal emergencies within cores is proposed. ITV improves the accuracy of thread-level temperature prediction, thus allowing the multi-core to operate at its optimal performance while keeping the cores thermally saturated. To enable power management, a selected set of key hardware structures within cores is proposed to dynamically adapt to the computational demands of application threads. ITV is used to speculatively trade off power and performance at the granularity of phases and, based on this information, to selectively power-gate structures. Finally, ITV is used to map threads to cores when faults disable or degrade the capability of cores. The system observes the phase behavior and initiates thread relocation to match the computational demands of threads to the capabilities of cores, allowing the system to exploit inter-core redundancy for fault tolerance in a multi-core.
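The ITV scheme can be sketched as vector matching: each profiling interval yields a normalized instruction-type frequency vector, which is assigned to an existing phase if it is close enough to a known signature. The threshold, vector layout and data below are hypothetical.

```python
# A minimal sketch of ITV-style phase classification: match the
# instruction-type frequency vector of the current interval against
# known phase signatures. Threshold and vectors are hypothetical.
def manhattan(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

phases = []          # previously seen ITV signatures
THRESHOLD = 0.10     # max distance to match an existing phase

def classify(itv):
    """Return the id of the matching phase, creating one if needed."""
    for pid, sig in enumerate(phases):
        if manhattan(itv, sig) < THRESHOLD:
            return pid
    phases.append(itv)
    return len(phases) - 1

# ITV: normalized frequencies of, e.g., [int, fp, load/store, branch].
print(classify([0.50, 0.10, 0.30, 0.10]))  # phase 0
print(classify([0.51, 0.09, 0.30, 0.10]))  # matches phase 0
print(classify([0.20, 0.40, 0.30, 0.10]))  # new phase 1
```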
14

"RESTful Service Composition." Thesis, 2013. http://hdl.handle.net/10388/ETD-2013-05-1077.

Abstract:
The Service-Oriented Architecture (SOA) has become one of the most popular approaches to building large-scale network applications, and web service technologies are the de facto default implementation of SOA. Simple Object Access Protocol (SOAP) is the key and fundamental technology of web services. Service composition is a way to deliver complex services based on existing partner services. Service orchestration with the support of the Web Services Business Process Execution Language (WSBPEL) is the dominant approach to web service composition. WSBPEL-based service orchestration inherited the issue of interoperability from SOAP, and it has furthermore been challenged on performance, scalability, reliability and modifiability. This thesis presents an architectural approach for service composition to address these challenges. An architectural solution is generic enough that it can be applied to a large spectrum of problems. I name the architectural style RESTful Service Composition (RSC), because many of its elements and constraints are derived from Representational State Transfer (REST), the architectural style developed to describe the architecture of the Web; the Web has demonstrated outstanding interoperability, performance, scalability, reliability and modifiability. RSC is designed for service composition on the Internet. The RSC style is composed of specific element types, including the RESTful service composition client, RESTful partner proxy, composite resource, resource client, functional computation and relaying service. A service composition is partitioned into stages; each stage is represented as a computation that has a uniform identifier and a set of uniform access methods; and the transitions between stages are driven by computational batons. RSC is supplemented by a programming model that emphasizes on-demand functions, map-reduce and continuation passing. An RSC-style composition does not depend on either a central conductor service or a common choreography specification, which distinguishes it from service orchestration and service choreography. Four scenarios are used to evaluate the performance, scalability, reliability and modifiability improvements of the RSC approach compared to orchestration; an RSC-style solution and an orchestration solution are compared side by side in every scenario. The first scenario evaluates the performance improvement of the X-Ray Diffraction (XRD) application in ScienceStudio; the second evaluates the scalability improvement of the Process Variable (PV) snapshot application; the third evaluates the reliability improvement of a notification application by simulation; and the fourth evaluates the modifiability improvement of the XRD application against emerging requirements. The results show that the RSC approach outperforms the orchestration approach in every aspect.
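The stage-and-baton idea can be sketched as follows: each stage is addressed by a uniform identifier, and its response hands back a baton naming the next stage, so no central conductor drives the composition. The URIs and payloads below are hypothetical, not the thesis's protocol.

```python
# A minimal sketch of stage transitions driven by "batons": each
# stage has a uniform identifier and names its successor. URIs and
# payloads are hypothetical.
def fetch_data(state):
    state["data"] = [3, 1, 2]
    return state, "/stages/transform"      # baton to the next stage

def transform(state):
    state["data"] = sorted(state["data"])
    return state, None                      # composition finished

stages = {"/stages/fetch": fetch_data, "/stages/transform": transform}

state, baton = {}, "/stages/fetch"
while baton:                                # the client drives transitions
    state, baton = stages[baton](state)
print(state)                                # {'data': [1, 2, 3]}
```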
15

Yuan, Yi. "A microprocessor performance and reliability simulation framework using the speculative functional-first methodology." Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-12-4848.

Abstract:
With the high complexity of modern-day microprocessors and the slow speed of cycle-accurate simulations, architects are often unable to adequately evaluate their designs during the architectural exploration phases of chip design. This thesis presents the design and implementation of the timing partition of the cycle-accurate, microarchitecture-level SFFSim-Bear simulator. SFFSim-Bear is an implementation of the speculative functional-first (SFF) methodology, and utilizes a hybrid software-FPGA platform to accelerate simulation throughput. The timing partition, implemented in FPGA, features throughput-oriented, latency-tolerant designs to cope with the challenges of the hybrid platform. Furthermore, a fault injection framework is added to this implementation that allows designers to study the reliability aspects of their processors. The result is a simulator that is fast, accurate, flexible, and extensible.
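A fault injection framework of this kind typically flips bits in simulated state and observes whether the fault is masked or propagates. Below is a minimal sketch of the bit-flip primitive with hypothetical details; it is not SFFSim-Bear's actual interface.

```python
# A minimal fault-injection sketch: flip one random bit of a
# simulated register value. Hypothetical, not SFFSim-Bear's API.
import random

def inject_bit_flip(value: int, width: int = 32) -> int:
    """Return value with one randomly chosen bit inverted."""
    bit = random.randrange(width)
    return value ^ (1 << bit)

random.seed(0)
reg = 0x0000_00FF
faulty = inject_bit_flip(reg)
print(f"before=0x{reg:08X} after=0x{faulty:08X}")
```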