Dissertations / Theses on the topic 'Service replication'

To see the other types of publications on this topic, follow the link: Service replication.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 34 dissertations / theses for your research on the topic 'Service replication.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Bengtson, John, and Ola Jigin. "Increasing the availability of a service through Hot Passive Replication." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119526.

Full text
Abstract:
This bachelor thesis examines how redundancy is used to tolerate a process crash fault on a server in a system developed for emergency situations. The goal is to increase the availability of the service the system delivers. The redundant solution uses hot passive replication with one primary replica manager and one backup replica manager. With this approach, code was written for updating the backup, for establishing a new primary, and for fault detection to detect a process crash. After implementing the redundancy, the redundant solution was evaluated. The first part of the evaluation showed that the redundant solution can deliver a service in case of a process crash on the primary replica manager. The second part showed that the average response time for an upload request and a download request had increased by 31% compared to the non-redundant solution. The standard deviation was calculated for the response times, and it showed that the response time of an upload request could be considerably higher than the average. This large deviation was investigated, and the conclusion was that the database insertion was the cause.
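The hot passive replication scheme summarised above — a primary replica manager that forwards each update to a backup before acknowledging, plus a fault detector that promotes the backup after a crash — can be illustrated with a minimal sketch. The class and function names (ReplicaManager, crashed, failover), the in-memory store, and the heartbeat timeout are illustrative assumptions, not the thesis's implementation, which used a database and networked replica managers:

```python
import time

class ReplicaManager:
    """In-memory replica manager; 'store' stands in for the thesis's database."""
    def __init__(self, name):
        self.name = name
        self.store = {}
        self.alive = True
        self.last_heartbeat = time.time()

    def handle_upload(self, key, value, backup=None):
        # Hot passive replication: the primary applies the update locally and
        # pushes it to the backup before acknowledging the client.
        self.store[key] = value
        if backup is not None and backup.alive:
            backup.apply_update(key, value)
        return "ack"

    def apply_update(self, key, value):
        self.store[key] = value
        self.last_heartbeat = time.time()

    def handle_download(self, key):
        return self.store.get(key)

def crashed(replica, timeout=2.0):
    """Crude fault detector: a crash is assumed if the replica is down or silent."""
    return (not replica.alive) or (time.time() - replica.last_heartbeat > timeout)

def failover(primary, backup):
    """Promote the backup to primary once the fault detector fires."""
    return backup if crashed(primary) else primary

if __name__ == "__main__":
    primary, backup = ReplicaManager("primary"), ReplicaManager("backup")
    primary.handle_upload("report-1", "rescue plan v2", backup=backup)
    primary.alive = False                    # simulate a process crash on the primary
    current = failover(primary, backup)
    print(current.name, current.handle_download("report-1"))   # backup rescue plan v2
```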
APA, Harvard, Vancouver, ISO, and other styles
2

Bonner, Michael L. "Accountability of School Psychology Practicum: A Procedural Replication." Cincinnati, Ohio : University of Cincinnati, 2001. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=ucin1006784236.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Van Wyk, David. "The effects of micro data centres for multi-service access nodes on latency and services." Diss., University of Pretoria, 2017. http://hdl.handle.net/2263/61342.

Full text
Abstract:
Latency is becoming a significant factor in many Internet applications such as P2P sharing and online gaming. Coupled with the fact that an increasing number of people use online services for backup and replication, congestion on the network increases exponentially. One way to address the latency problem is to remove congestion from the core network, or to limit it so that it no longer poses a problem. In South Africa, Telkom rolled out MSAN cabinets as part of their Fibre-to-the-Curb (FTTC) upgrades. This created a unique opportunity to provide new services, such as BaRaaS, by implementing micro data centres within the MSAN to reduce congestion on the core network. It is important to have background knowledge of what exactly latency is and what causes it on a network, and to understand how congestion (and thus latency) can be avoided. The background literature covered helps to determine which tools are available for this, as well as to highlight any gaps that exist for new congestion-control mechanisms. A simulation study was performed to determine whether implementing micro data centres inside the MSAN will in fact reduce latency. The simulations were made as realistic as possible to ensure that the results can be related to a real-world problem. Two different simulations were performed to model the behaviour of the network when backup and replication data is sent to the Internet and when it is sent to a local MSAN. In both models the core network throughput as well as the Round Trip Times (RTTs) from the client to the Internet and to the MSAN cabinets were recorded. The RTT results were then used to determine whether latency had been reduced. Once it was established that micro data centres do indeed help in reducing congestion and latency on the network, a storage server was designed for inclusion inside the MSAN cabinet. A cost-benefit analysis was also performed to ensure that the project would be financially viable in the long term. The cost analysis took into account all the costs associated with the project and then spread them over a certain period of time to determine the initial expenses. Further information was then taken into consideration to determine the possible income per year as well as additional expenditure. It was found that the inclusion of a micro data centre reduces latency on the core network by removing large backup data traffic from it, which reduces congestion and improves latency. The Cost Benefit Analysis (CBA) showed that the BaRaaS service is viable from a subscription point of view. Finally, the relevant conclusions with regard to the effects of data centres in MSAN cabinets on latency and services were drawn.
Dissertation (MEng)--University of Pretoria, 2017.
Electrical, Electronic and Computer Engineering
MEng
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
4

Hensley, Lauren Elizabeth. "A Replication Comparing Two Teaching Approaches: Teaching Pre-service Teachers to Implement Evidence-Based Practices with Fidelity." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1468352869.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Leuschner, Rudolf. "The Impact of Product, Price, Promotion and Place/Logistics on Customer Satisfaction and Share of Business." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1291203718.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Clutterbuck, Peter. "Maximizing the Availability of Distributed Software Services." Thesis, Queensland University of Technology, 2005. https://eprints.qut.edu.au/16134/1/Peter_Clutterbuck_Thesis.pdf.

Full text
Abstract:
In a commercial Internet environment, the quality of service experienced by a user is critical to competitive advantage and business survivability. The availability and response time of a distributed software service are central components of the overall quality of service provided to users. Traditionally, availability is a measure of service down time: it expresses the probability that the service will be live in terms of failure occurrence and repair or recovery time. Response time is a measure of the time taken from when the service request is made to when service provision occurs for the user. Deteriorating response time is also a valuable indicator of denial-of-service attacks, which continue to pose a significant threat to service availability. The service cluster is increasingly being deployed to improve service availability and response time: replicating cluster processors increases service availability, and dispatching service requests across the replicated cluster processors increases service scalability and therefore improves response time. This thesis commences with a review of the research and current technology in the area of distributed software service availability. The review aims to identify deficiencies within that area and propose critical features that mitigate those deficiencies. The three critical features proposed relate to user wait time, cluster dispatching, and the trust-based filtering of service requests. The user wait time proposal is that the availability of a distributed service should reflect both the liveness probability and the probabilistic user access time of the service. The cluster dispatching proposal is that dispatching processing overhead is a function of the number of Internet Protocol (IP) datagrams/Transmission Control Protocol (TCP) segments received by the dispatcher for each service request; consequently, the number of IP datagrams/TCP segments should be minimised, ideally so that each incoming service request arrives in a single IP datagram/TCP segment. The trust-based filtering proposal is that the level of trust of each service request should be identified by the service, as this is critical in mitigating distributed denial-of-service attacks and therefore in maximising the availability of the service. A conceptual availability model which supports the three critical features within an Internet clustered service environment is then described. The conceptual model proposes an expanded availability definition and describes the realization of this definition via additional capabilities positioned within the Transport layer of the Internet communication environment. These additional capabilities also facilitate minimising the cluster dispatcher's processing load and enabling the dispatcher to identify the trust level of each request. The model is then implemented within the Linux kernel. The implementation adds several options to the existing TCP specification and several functions to the existing Socket API. The implementation is subsequently evaluated in a dispatcher-based clustered service environment.
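A rough sketch of two of the proposed features — dispatching requests across replicated cluster processors and trust-based filtering of incoming requests — is given below. The thesis realises these at the Transport layer inside the Linux kernel; here the trust level is simply assumed to be an attribute already attached to each request, and the Dispatcher class and trust constants are illustrative only:

```python
from itertools import cycle

# Hypothetical trust levels; the thesis derives trust at the Transport layer,
# here it is simply an attribute carried by each request.
TRUSTED, UNKNOWN, SUSPECT = 2, 1, 0

class Dispatcher:
    def __init__(self, servers, min_trust=UNKNOWN):
        self.servers = cycle(servers)   # round-robin over replicated cluster processors
        self.min_trust = min_trust

    def dispatch(self, request):
        # Trust-based filtering: suspected requests are rejected before they
        # consume cluster resources, mitigating denial-of-service load.
        if request["trust"] < self.min_trust:
            return None
        return next(self.servers)

if __name__ == "__main__":
    d = Dispatcher(["node-a", "node-b", "node-c"])
    requests = [{"id": 1, "trust": TRUSTED}, {"id": 2, "trust": SUSPECT}, {"id": 3, "trust": UNKNOWN}]
    for r in requests:
        print(r["id"], "->", d.dispatch(r))   # request 2 is filtered out (None)
```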
APA, Harvard, Vancouver, ISO, and other styles
7

Clutterbuck, Peter. "Maximizing the Availability of Distributed Software Services." Queensland University of Technology, 2005. http://eprints.qut.edu.au/16134/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Pachomov, Artiom. "Dinaminis kompiuterinių sistemų infrastruktūros atnaujinimo modelis, pagrįstas atviro kodo sprendimais." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20140717_141652-89970.

Full text
Abstract:
This paper analyzes dynamic software updates for an institution with a deprecated system infrastructure, using free, open-source solutions and making the most of their capabilities. A dynamic model is formed that includes identity management, high-availability clustering, data replication, and data-integrity solutions. An analysis is also included of how to migrate the service infrastructure and build a supporting part of the IT infrastructure that optimizes system maintenance and performance.
APA, Harvard, Vancouver, ISO, and other styles
9

Torbey, Takkouz Zeina. "Increasing data availability in mobile ad-hoc networks : A community-centric and resource-aware replication approach." Thesis, Lyon, INSA, 2012. http://www.theses.fr/2012ISAL0089/document.

Full text
Abstract:
A Mobile Ad-hoc Network (MANET) is a self-configured, infrastructure-less network. It consists of autonomous mobile nodes that communicate over bandwidth-constrained wireless links. Nodes in a MANET are free to move randomly and organize themselves arbitrarily. They can join or leave the network in an unpredictable way, and such rapid and untimely disconnections may cause network partitioning. In such cases, the network faces multiple difficulties; one major problem is data availability. Data replication is a possible solution to increase data availability. However, implementing replication in a MANET is not a trivial task for two major reasons: the environment is resource-constrained, and its dynamicity makes replication decisions very hard. In this thesis, we propose a fully decentralized replication model for MANETs called CReaM ("Community-Centric and Resource-Aware Replication Model"). It is designed to cause as little additional network traffic as possible. To preserve device resources, a monitoring mechanism is proposed: when the consumption of one resource exceeds a predefined threshold, replication is initiated with the goal of balancing the load caused by requests over other nodes. The data item to replicate is selected depending on the type of resource that triggered the replication process. In case of high CPU consumption, the best data item to replicate is the one that best alleviates the load of the node, i.e. a highly requested data item. Conversely, in case of low battery, rare data items are to be replicated (a data item is considered rare when it is tagged with a hot topic, i.e. a topic with a large community of interested users, but has not yet been disseminated to other nodes). To this end, we introduce a data item classification based on multiple criteria, e.g. data rarity, level of demand, and semantics of the content. To select the replica holder, we propose a lightweight solution for collecting information about the interests of participating users. Users interested in the same topic form a so-called "community of interest". Through tag analysis, a data item is assigned to one or more communities of interest. Based on this analysis of the social usage of the data, replicas are placed close to the centers of the communities of interest, i.e. on the nodes with the highest connectivity to the members of the community. The evaluation shows that CReaM has positive effects on its main objectives: in particular, it imposes a dramatically lower overhead than traditional periodic replication systems (less than 50% on average), while maintaining data availability at a level comparable to that of competing approaches.
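The decision logic sketched in this abstract — a resource threshold triggers replication, the triggering resource determines which data item is chosen, and the replica is placed on the node hosting the largest community of interest for the item's topic — can be illustrated as follows. The thresholds, item fields, and neighbour structure are assumptions made for the example, not CReaM's actual parameters:

```python
# Illustrative sketch of CReaM-style decision logic: which resource crossed its
# threshold determines what gets replicated, and community size determines where.
THRESHOLDS = {"cpu": 0.80, "battery_low": 0.20}

def replication_trigger(resources):
    if resources["cpu"] > THRESHOLDS["cpu"]:
        return "cpu"
    if resources["battery"] < THRESHOLDS["battery_low"]:
        return "battery"
    return None

def choose_item(items, trigger):
    if trigger == "cpu":
        # Offload the hottest item to shed request load from this node.
        return max(items, key=lambda it: it["requests"])
    # Low battery: save rare items (hot topic, not yet disseminated) first.
    rare = [it for it in items if it["hot_topic"] and it["copies"] == 1]
    return rare[0] if rare else None

def choose_holder(neighbours, topic):
    # Place the replica on the node with the largest community for the topic.
    return max(neighbours, key=lambda n: n["communities"].get(topic, 0))

if __name__ == "__main__":
    items = [{"name": "map.dat", "requests": 41, "hot_topic": True, "copies": 1, "topic": "rescue"},
             {"name": "log.txt", "requests": 3, "hot_topic": False, "copies": 2, "topic": "misc"}]
    neighbours = [{"id": "n7", "communities": {"rescue": 5}},
                  {"id": "n2", "communities": {"rescue": 9}}]
    trigger = replication_trigger({"cpu": 0.91, "battery": 0.60})
    item = choose_item(items, trigger)
    print(item["name"], "->", choose_holder(neighbours, item["topic"])["id"])   # map.dat -> n2
```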
APA, Harvard, Vancouver, ISO, and other styles
10

Singh, Sylvester Sanjeev. "Developing service satisfaction strategies using catastrophe model a replication study in New Zealand : a thesis submitted to Auckland University of Technology in partial fulfilment of the requirements for the degree of Master of Business, 2003." Full thesis. Abstract, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
11

Karamanolis, Christos. "Configurable highly available distributed services." Thesis, Imperial College London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.244488.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Clegg, Matthew. "Kernel services for supporting hard real-time active replication /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1997. http://wwwlib.umi.com/cr/ucsd/fullcit?p9820858.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Abouzamazem, Abdallah. "Efficient and scalable replication of services over wide-area networks." Thesis, University of Newcastle upon Tyne, 2013. http://hdl.handle.net/10443/1879.

Full text
Abstract:
Service replication ensures reliability and availability, but accomplishing it requires solving the total-order problem of guaranteeing that all replicas receive service requests in the same order. The problem, however, cannot be solved under a specific combination of three factors, namely when (i) message transmission delays cannot be reliably bounded, as is often the case over wide-area networks such as the Internet, (ii) replicas can fail, e.g. by crashing, the very events that have to be tolerated through replication, and (iii) the solution has to be deterministic, as distributed algorithms generally are. Therefore, total-order protocols are developed by avoiding one or more of these three factors, resorting to realistic assumptions based on the system context. Nevertheless, they tend to be complex in structure and impose a time overhead that can slow down the performance of the replicated services themselves. This thesis develops an efficient total-order protocol by leveraging the emergence of cluster computing. It assumes that a server replica is not a stand-alone computer but part of a cluster, from which it can enlist the cooperation of some of its peers to solve the total-order problem locally. The local solution is then globalised with replicas spread over a wide-area network. This two-staged solution is highly scalable and is experimentally demonstrated to have a smaller performance overhead than a single-stage solution applied directly over a wide-area network. The local solution is derived from an existing multi-coordinator protocol, Mencius, which is known to have the best performance. Through a careful analysis, the derivation modifies some aspects of Mencius for further performance improvements while retaining its best aspects.
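The two-stage idea described above — order requests locally within each cluster, then merge the locally ordered streams into one global total order — can be illustrated with a small sketch. The merge rule used here (local sequence number, then site identifier) is an assumption chosen for simplicity; the thesis's protocol is derived from Mencius and works quite differently in detail:

```python
import itertools

class LocalSequencer:
    """Stage 1: a cluster-local sequencer assigns consecutive sequence numbers."""
    def __init__(self, site_id):
        self.site_id = site_id
        self.counter = itertools.count()

    def order(self, request):
        return (next(self.counter), self.site_id, request)

def global_order(batches):
    """Stage 2: merge per-site batches with a deterministic rule so that every
    replica, applying the same rule, delivers requests in the same total order."""
    merged = [entry for batch in batches for entry in batch]
    merged.sort(key=lambda e: (e[0], e[1]))     # (local sequence number, site id)
    return [e[2] for e in merged]

if __name__ == "__main__":
    site_a, site_b = LocalSequencer("A"), LocalSequencer("B")
    batch_a = [site_a.order("a1"), site_a.order("a2")]
    batch_b = [site_b.order("b1")]
    print(global_order([batch_a, batch_b]))     # ['a1', 'b1', 'a2']
```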
APA, Harvard, Vancouver, ISO, and other styles
14

Domaschka, Jörg [Verfasser]. "A Comprehensive Approach to Transparent and Flexible Replication of Java Services and Applications / Jörg Domaschka." München : Verlag Dr. Hut, 2013. http://d-nb.info/1037291611/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Jones, Lisa Mali. "Service Learning in Business Schools: What the H.E.L.P. Honduras Story Teaches About Building, Sustaining, and Replicating International Initiatives in Graduate Programs." BYU ScholarsArchive, 2001. https://scholarsarchive.byu.edu/etd/4838.

Full text
Abstract:
This document outlines the foundation and first year results of the H.E.L.P. Honduras organization, which was formed as a student-based, student-governed international outreach initiative at the Marriott School of Management at Brigham Young University. Specifically, in its first year the organization focused on providing microcredit and service relief to victims of Hurricane Mitch in Honduras. After studying the case of H.E.L.P. Honduras, readers should conclude that educators interested in sponsoring sustainable student-run service learning organizations at private universities must address three primary issues: the problem of student selection and turnover, the need for administrative and faculty endorsement, and the need for sustainable internally-generated funds. This document outlines how the H.E.L.P. organization has changed in the three years since its inception, and it provides tactical suggestions meant to guide all parties interested in replicating the H.E.L.P. model. It also contains suggestions on how the current teaching and implementation model could more closely match with the basic tenets of service learning. After reading the following information and reviewing related literature, readers should conclude that at private universities, such as Brigham Young University, students and faculty interested in managing student-based initiatives need to take more time to build support across their institution. They also need to improve the process of student selection, find sustainable sources of funds, and tightly ground their work in the basic tenets of service learning.
APA, Harvard, Vancouver, ISO, and other styles
16

Nguyen, Thi Mai Huong. "Une architecture orientée services pour la gestion de données dans les grilles informatiques." Châtenay-Malabry, Ecole centrale de Paris, 2008. http://www.theses.fr/2008ECAP1073.

Full text
Abstract:
In this thesis, we propose an architecture called GRAVY that supports file-system interfaces in which data transfers can be explicitly decoupled from computations and therefore managed as compute tasks, i.e. queued, scheduled, and monitored. GRAVY provides higher-level components with efficient, secure access for managing data on grids. We then focus on replication management in order to improve data accessibility, data-access performance, and the bandwidth consumption of the system. We propose a replication strategy called MaxDAR based on a selective ranking of files, which indicates the degree of importance of each file while taking the limited storage capacity into account. In our approach, replication decisions are driven by optimizing the overall level of data availability in the system according to the selective ranking of files, while reducing their storage costs. Simulation results in OptorSim show that MaxDAR achieves better performance in terms of job execution and storage consumption than the other strategies implemented in OptorSim. Finally, we propose a service-oriented architecture based on WSRF technology that allows data resources to be accessed dynamically and data-transfer tasks to be executed in a decentralized manner across the different sites in order to improve performance.
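The MaxDAR idea of choosing which files to replicate from a selective ranking under a limited storage budget can be sketched as a greedy selection. The field names, the availability-gain formula, and the greedy rule below are assumptions made for illustration; they are not the strategy evaluated in OptorSim:

```python
# Greedy sketch: pick the replicas that give the most ranked-availability gain
# per unit of storage until the storage budget is exhausted.

def availability_gain(f):
    # Adding one replica raises availability from 1-(1-p)^r to 1-(1-p)^(r+1),
    # weighted by the file's selective rank (importance).
    p, r = f["site_avail"], f["replicas"]
    return f["rank"] * ((1 - (1 - p) ** (r + 1)) - (1 - (1 - p) ** r))

def select_replicas(files, storage_budget):
    chosen, used = [], 0
    for f in sorted(files, key=lambda f: availability_gain(f) / f["size"], reverse=True):
        if used + f["size"] <= storage_budget:
            chosen.append(f["name"])
            used += f["size"]
    return chosen

if __name__ == "__main__":
    files = [
        {"name": "cal.dat",  "size": 40, "rank": 0.9, "replicas": 1, "site_avail": 0.95},
        {"name": "raw.bin",  "size": 80, "rank": 0.5, "replicas": 3, "site_avail": 0.95},
        {"name": "meta.xml", "size": 10, "rank": 0.7, "replicas": 1, "site_avail": 0.95},
    ]
    print(select_replicas(files, storage_budget=60))   # ['meta.xml', 'cal.dat']
```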
APA, Harvard, Vancouver, ISO, and other styles
17

Ameling, Michael. "Systemunterstützung für den Abgleich von Geschäftsobjekten zwischen Anwendungsservern über WebServices." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-24568.

Full text
Abstract:
Business applications such as supply chain management (SCM) or customer relationship management (CRM) are replicated in order to achieve high scalability and fast local access. The business objects, which serve as the applications' data containers, have to be synchronized after changes in order to stay consistent across the application servers. This thesis contributes to making the synchronization process more efficient by reducing the number of synchronization messages and the amount of data to be transmitted, while taking into account the additional overhead introduced by the extra processing steps via a suitable cost model.
APA, Harvard, Vancouver, ISO, and other styles
18

Pantzar, Mika. "A replicative perspective on evolutionary dynamics : the organizing process of the US economy elaborated through biological metaphor /." Helsinki : Työväen taloudellinen tutkimuslaitos, 1991. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=002957522&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Carvalho, Roberto Pires de. "Sistemas de arquivos paralelos: alternativas para a redução do gargalo no acesso ao sistema de arquivos." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-23052006-182520/.

Full text
Abstract:
In recent years, the processing power and network speed of low-cost computers have improved far more than the speed of accessing data stored on disks. As a result, many applications have difficulty making full use of their processors, because the processors have to wait for the data to arrive before using them. A popular way to solve this problem is to adopt a parallel file system, which uses the speed of the local network, in addition to the resources of each machine, to overcome the performance bottleneck of an isolated disk. In this study, we analyze several parallel and distributed file systems, detailing the most interesting and important ones. Finally, we show that a parallel file system can be more efficient and advantageous than a conventional local file system even for a single client.
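The core idea of a parallel file system discussed in this abstract — striping a file across several disks or servers so that even a single client can use their aggregate bandwidth — can be shown with a toy sketch. The stripe size and the dictionary-backed "disks" are illustrative assumptions only:

```python
# Minimal striping sketch: a file is split into fixed-size stripes written to,
# and read back from, several disks in round-robin order.

STRIPE_SIZE = 4  # bytes, unrealistically small so the example is easy to follow

def write_striped(data, disks):
    for i in range(0, len(data), STRIPE_SIZE):
        stripe_no = i // STRIPE_SIZE
        disks[stripe_no % len(disks)][stripe_no] = data[i:i + STRIPE_SIZE]

def read_striped(disks, length):
    stripes = []
    for n in range((length + STRIPE_SIZE - 1) // STRIPE_SIZE):
        stripes.append(disks[n % len(disks)][n])
    return b"".join(stripes)

if __name__ == "__main__":
    disks = [{}, {}, {}]                      # three "servers"
    payload = b"parallel file systems in a nutshell"
    write_striped(payload, disks)
    assert read_striped(disks, len(payload)) == payload
    print({f"disk{i}": d for i, d in enumerate(disks)})
```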
APA, Harvard, Vancouver, ISO, and other styles
20

"Service replication strategy in service overlay networks." 2004. http://library.cuhk.edu.hk/record=b5892154.

Full text
Abstract:
Liu Yunkai.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2004.
Includes bibliographical references (leaves 43-45).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Background --- p.4
Chapter 2.1 --- Notations --- p.4
Chapter 2.2 --- Service Overlay Network Architecture --- p.5
Chapter 2.3 --- The SON Cost Model --- p.5
Chapter 2.4 --- Bandwidth Provisioning Problem --- p.7
Chapter 2.5 --- Traffic Variation and QoS Violation Penalty --- p.8
Chapter 3 --- Service Replication Model --- p.12
Chapter 3.1 --- One-to-One Service Model --- p.13
Chapter 3.2 --- Service Delivery Tree Model --- p.16
Chapter 3.2.1 --- Problem Formulation --- p.17
Chapter 3.2.2 --- Distributed Evaluation of SDT --- p.20
Chapter 3.2.3 --- Approximation --- p.22
Chapter 4 --- Service Replication Algorithms --- p.24
Chapter 4.1 --- Centralized Service Replication Algorithm --- p.24
Chapter 4.1.1 --- Preprocessing Phase --- p.24
Chapter 4.1.2 --- Searching Phase --- p.26
Chapter 4.2 --- Distributed Service Replication Algorithm --- p.27
Chapter 4.3 --- Improved Distributed Algorithm --- p.28
Chapter 5 --- Performance Evaluations --- p.32
Chapter 5.1 --- Experiment 1: Algorithm Illustration --- p.32
Chapter 5.2 --- Experiment 2: Performance Comparison --- p.34
Chapter 5.3 --- Experiment 3: Scalability Analysis --- p.36
Chapter 5.3.1 --- Experiment 3A --- p.36
Chapter 5.3.2 --- Experiment 3B --- p.37
Chapter 5.3.3 --- Experiment 3C --- p.38
Chapter 5.4 --- Experiment 4: Multiple replications --- p.39
Chapter 6 --- Related Work --- p.41
Chapter 7 --- Conclusion --- p.42
Bibliography --- p.45
APA, Harvard, Vancouver, ISO, and other styles
21

Dobre, Dan. "Time-Efficient Asynchronous Service Replication." Phd thesis, 2010. https://tuprints.ulb.tu-darmstadt.de/2300/1/thesis.pdf.

Full text
Abstract:
Modern critical computer applications often require continuous and correct operation despite the failure of critical system components. In a distributed system, fault tolerance can be achieved by creating multiple copies of the functionality and placing them at different processes. The core is a distributed protocol run among the processes whose goal is to provide the end user with the illusion of sequentially accessing a single correct copy. Not surprisingly, the efficiency of the distributed protocol used has a severe impact on application performance. This thesis investigates the cost associated with implementing the fundamental abstractions constituting the core of service replication in asynchronous distributed systems, namely (a) consensus and (b) the read/write register. The main question addressed by this thesis is how efficient implementations of these abstractions can be. The focus of the thesis lies on time complexity (or latency) as the main efficiency metric, expressed as the number of communication steps carried out by the algorithm before it terminates. Besides latency, important cost factors are the resilience of an algorithm (i.e. the fraction of failures tolerated) and its message complexity (the number of messages exchanged). Consensus is perhaps the most fundamental problem in distributed computing. In the consensus problem, processes propose values and unanimously agree on one of the proposed values. In a purely asynchronous system, in which there is no upper bound on message transmission delays, consensus is impossible if a single process may crash. In practice, however, systems are not asynchronous: they are timely in the common case and exhibit asynchronous behavior only occasionally. This observation has led to the concept of unreliable failure detectors to capture the synchrony conditions sufficient to solve consensus. This thesis studies the consensus problem in asynchronous systems in which processes may fail by crashing, enriched with unreliable failure detectors. It determines how quickly consensus can be solved in the common case, characterized by stable executions in which all failures have been reliably detected, settling important questions about consensus time complexity. Besides consensus, the read/write register abstraction is essential to sharing information in distributed systems; it is also referred to as distributed storage because of its importance as a building block in practical distributed storage and file systems. We study fault-tolerant read/write register implementations in which the data shared by a set of clients is replicated on a set of storage base objects. We consider robust storage implementations characterized by (a) wait-freedom (i.e. the fact that read/write operations invoked by correct clients always return) and (b) strong consistency guarantees despite a threshold of object failures. We allow for the most general type of object failure, Byzantine, without assuming authenticated data to limit the adversary. In this model, we determine the worst-case time complexity of accessing such a robust storage, closing several fundamental complexity gaps.
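As a point of reference for the read/write register abstraction studied in the dissertation, the sketch below shows a deliberately simplified, crash-fault, single-writer register replicated over base objects, where an operation completes once a majority of objects respond. This only illustrates the abstraction and the role of majority quorums; the dissertation's results concern the much harder Byzantine case without authenticated data:

```python
class BaseObject:
    """A storage base object holding a timestamped value; may crash silently."""
    def __init__(self):
        self.ts, self.value, self.crashed = 0, None, False

    def store(self, ts, value):
        if self.crashed:
            return False
        if ts > self.ts:
            self.ts, self.value = ts, value
        return True

    def load(self):
        return None if self.crashed else (self.ts, self.value)

def write(objects, ts, value):
    # The single writer tags each value with a timestamp; the write completes
    # once a majority of objects acknowledge it.
    acks = sum(1 for o in objects if o.store(ts, value))
    return acks > len(objects) // 2

def read(objects):
    replies = [r for o in objects if (r := o.load()) is not None]
    assert len(replies) > len(objects) // 2, "too many object failures"
    return max(replies)[1]                    # value carrying the highest timestamp

if __name__ == "__main__":
    objs = [BaseObject() for _ in range(5)]
    write(objs, ts=1, value="v1")
    objs[0].crashed = objs[1].crashed = True  # two of five base objects fail
    write(objs, ts=2, value="v2")
    print(read(objs))                         # 'v2'
```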
APA, Harvard, Vancouver, ISO, and other styles
22

Dobre, Dan [Verfasser]. "Time-efficient asynchronous service replication / vorgelegt von Dan Dobre." 2010. http://d-nb.info/100756864X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Ramasamy, Harigovind Venkatraj. "Parsimonious service replication for tolerating malicious attacks in asynchronous environments /." 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3223699.

Full text
Abstract:
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2006.
Source: Dissertation Abstracts International, Volume: 67-07, Section: B, page: 3913. Adviser: William H. Sanders. Includes bibliographical references (leaves 189-193). Available on microfilm from ProQuest Information and Learning.
APA, Harvard, Vancouver, ISO, and other styles
24

Fu, Chun-Pin, and 傅俊賓. "A Dynamical Maintenance Service for File Replication in Data Grids." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/07245355838278877865.

Full text
Abstract:
Master's thesis
Tunghai University
Department of Computer Science and Information Engineering
95
Because scientific experiments and simulations produce large amounts of data, the data grid is an important and useful technique for handling such workloads. Data replication is a technique that many researchers have studied for data grids in recent years: it creates multiple copies of a file and stores them in appropriate locations to shorten the time needed to retrieve the file. In this thesis, we propose a dynamic maintenance service for replication that maintains data in a grid environment, adjusting replicas to the appropriate locations for use. The Bandwidth Hierarchy based Replication (BHR) algorithm is a strategy for maintaining replicas dynamically. We point out a scenario in which the BHR algorithm makes a mistake during operation, and this mistake degrades the performance of the grid environment. We propose a maintenance strategy called Dynamic Maintenance Service (DMS) aimed at overcoming this problem. The contribution of this thesis is that the data grid environment becomes more efficient by using the DMS algorithm, and the experimental results show that DMS is more useful and efficient than other replication strategies.
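The region-aware maintenance idea behind BHR-style strategies — keep a replica inside the high-bandwidth region, and when space is needed prefer to evict files that another node in the same region already holds — can be sketched as follows. The catalogue structure and the eviction rule are assumptions for illustration; the thesis's DMS further refines this behaviour to avoid the problematic scenario it identifies:

```python
# Region-aware replica admission sketch: evict only files that are duplicated
# elsewhere inside the same region, so the region never loses its last copy.

def evict_candidates(local_files, region_catalog, node):
    """Files on 'node' that another node in the same region also holds."""
    return [f for f in local_files
            if any(n != node and f in files for n, files in region_catalog.items())]

def admit_replica(node, new_file, capacity, region_catalog):
    local = region_catalog[node]
    while len(local) >= capacity:
        dups = evict_candidates(local, region_catalog, node)
        if not dups:
            return False          # nothing safe to evict inside the region
        local.remove(dups[0])
    local.append(new_file)
    return True

if __name__ == "__main__":
    region = {"siteA": ["f1", "f2"], "siteB": ["f2", "f3"]}
    print(admit_replica("siteA", "f4", capacity=2, region_catalog=region), region)
    # True {'siteA': ['f1', 'f4'], 'siteB': ['f2', 'f3']}
```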
APA, Harvard, Vancouver, ISO, and other styles
25

LIN, PU-SHUAN, and 林圃玄. "A Nested Invocation Suppression Mechanism for Active Replication Fault-Tolerant SOAP-Based Web Service ( MTWS-RNI SM )." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/21546746920924546320.

Full text
Abstract:
Master's thesis
National Taiwan Ocean University
Department of Information Science
91
SOAP is becoming more popular in the domain of electronic commerce. It builds on XML and the HTTP protocol, which gives it advantages such as firewall transparency and information exchange between heterogeneous systems, but fault tolerance is not included. An active fault-tolerant replication group is one way to add this capability. In this thesis we provide a multi-threaded redundant nested invocation suppression mechanism for protecting SOAP-based Web Services with an active fault-tolerant replication group. Active replication is an approach to building highly available and reliable distributed software applications. The redundant nested invocation (RNI) problem arises when a Web Service in a replicated group issues nested invocations to other Web Services in response to a client invocation, so an automatic suppression mechanism for RNI is desirable. Unfortunately, most modern operating systems use multi-threaded software implementations for higher CPU utilization, which increases the difficulty of implementing an RNI suppression mechanism. Intuitively, a deterministic thread-execution control mechanism is a possible approach, but to preserve the fairness of thread scheduling such implementations usually have to be moved close to the OS kernel; modifying kernel threads is difficult, goes against the design principles of modern operating systems, and destroys the portability of Web applications. In this work, we propose a suppression mechanism for multi-threaded redundant nested invocations of SOAP-based Web Services protected by an active fault-tolerant replication group.
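The suppression idea can be pictured with a small sketch: every replica in the active group executes the client request and reaches the same nested invocation, but only one deterministically designated replica actually issues it, while the others suppress the duplicate. The designation rule (lowest live replica identifier) and the call log below are assumptions for illustration, not the thesis's multi-threaded mechanism:

```python
class Replica:
    def __init__(self, replica_id, group):
        self.id, self.group = replica_id, group
        self.alive = True
        self.sent, self.suppressed = [], []

    def nested_invoke(self, target_service, payload):
        # Deterministic designation: only the lowest-id live replica really sends
        # the nested invocation; the other replicas suppress the duplicate.
        live = sorted(r.id for r in self.group if r.alive)
        if self.id == live[0]:
            self.sent.append((target_service, payload))
        else:
            self.suppressed.append((target_service, payload))

if __name__ == "__main__":
    group = []
    group.extend(Replica(i, group) for i in range(3))
    for r in group:                            # all replicas process the same client request
        r.nested_invoke("InventoryService", {"order": 42})
    print([(r.id, len(r.sent), len(r.suppressed)) for r in group])
    # [(0, 1, 0), (1, 0, 1), (2, 0, 1)] -> one real nested call, two suppressed
```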
APA, Harvard, Vancouver, ISO, and other styles
26

Mohammed, Bashir, Babagana Modu, Kabiru M. Maiyama, Hassan Ugail, Irfan U. Awan, and Mariam Kiran. "Failure Analysis Modelling in an Infrastructure as a Service (Iaas) Environment." 2018. http://hdl.handle.net/10454/16743.

Full text
Abstract:
Failure prediction has long been known to be a challenging problem. With the evolving trend of technology and the growing complexity of high-performance cloud data centre infrastructure, focusing on failure becomes vital, particularly when designing systems for the next generation. Traditional runtime fault-tolerance (FT) techniques such as data replication and periodic check-pointing are not very effective for the current state-of-the-art emerging computing systems. This has necessitated the urgent need for a robust system with an in-depth understanding of system and component failures, as well as the ability to accurately predict potential future system failures. In this paper, we studied in-production fault data recorded over a five-year period at the National Energy Research Scientific Computing Center (NERSC). Using the data collected from the Computer Failure Data Repository (CFDR), we developed an effective failure prediction model focusing on high-performance cloud data centre infrastructure. Using the Auto-Regressive Moving Average (ARMA) method, our model was able to predict potential future failures in the system. Our results also show a failure prediction accuracy of 95%, which is good.
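A small sketch of the modelling approach — fitting an ARMA model to a failure-count series and forecasting the next periods — is shown below, assuming the statsmodels package is available. The synthetic series and the (2, 0, 1) order are assumptions for the example; the study fits its model to the NERSC failure records from the CFDR:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic monthly failure counts standing in for the real CFDR series.
failures = 12 + 3 * np.sin(np.arange(60) / 6.0) + rng.normal(0, 1, 60)

model = ARIMA(failures, order=(2, 0, 1))      # ARMA(2, 1): d = 0, no differencing
fitted = model.fit()
print(fitted.forecast(steps=6))               # predicted failure counts, next 6 periods
```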
APA, Harvard, Vancouver, ISO, and other styles
27

Marcello, Tobias. "Interferons Alpha and Lambda inhibit hepatitis C virus replication with distinct signal transduction and gene regulation kinetics /." 2008. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=016714956&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Nayate, Amol Pramod. "Transparent replication." Thesis, 2006. http://hdl.handle.net/2152/3461.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Kapritsos, Emmanouil. "Replicating multithreaded services." Thesis, 2014. http://hdl.handle.net/2152/28366.

Full text
Abstract:
For the last 40 years, the systems community has invested a lot of effort in designing techniques for building fault tolerant distributed systems and services. This effort has produced a massive list of results: the literature describes how to design replication protocols that tolerate a wide range of failures (from simple crashes to malicious "Byzantine" failures) in a wide range of settings (e.g. synchronous or asynchronous communication, with or without stable storage), optimizing various metrics (e.g. number of messages, latency, throughput). These techniques have their roots in ideas, such as the abstraction of State Machine Replication and the Paxos protocol, that were conceived when computing was very different than it is today: computers had a single core; all processing was done using a single thread of control, handling requests sequentially; and a collection of 20 nodes was considered a large distributed system. In the last decade, however, computing has gone through some major paradigm shifts, with the advent of multicore architectures and large cloud infrastructures. This dissertation explains how these profound changes impact the practical usefulness of traditional fault tolerant techniques and proposes new ways to architect these solutions to fit the new paradigms.
APA, Harvard, Vancouver, ISO, and other styles
30

Mundt, Anja Pamela. "Induction of G2 cell cycle arrest in HIV-1 infected patients mediated by HIV-1 viral protein R and the regulation of HIV-1 replication in macrophages mediated by viral protein R /." 2004. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=013003330&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Shih, Zong-Yi, and 施宗毅. "Study of Fault Tolerant Mechanism of the Web Services with Replication Technology." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/35599384788488329900.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Institute of Information Management
94
Web Services are a newly developing technology, and because they can be composed and distributed, many enterprises regard them as an important building block for future distributed systems. The key advantage of using Web Services is the ability to create applications on the fly through the use of loosely coupled, reusable software components. Businesses can be relieved of the burden of complex, slow, and expensive software integration and focus instead on the value of their offerings and mission-critical tasks. As Web Services become popular in Web applications, their stability and reliability become more important. The fault-tolerant mechanism designed in this research gives Web Services support in the direction of fault tolerance. Regarding fault-tolerance capability, Byzantine faults are not considered; besides host failures, the mechanism can also handle the network-partition problem. Regarding performance, the Quorum-based Protocol does well in response time, but the Dynamic Voting algorithm is better than Quorum-based Protocols in fault-tolerance capability. This research builds a fault-tolerant mechanism for Web Services and shows, through experimental data, that the mechanism only slightly increases the load of the original system. It enables a service provider to set up Web Services with fault-tolerance capability, so that when a host fails or a network partition occurs, another Web Service replica can be assigned effectively.
APA, Harvard, Vancouver, ISO, and other styles
32

"Replicating Hybrid Solutions for Business Customers: A Proposed Framework for Service Infusion Success." Doctoral diss., 2013. http://hdl.handle.net/2286/R.I.20931.

Full text
Abstract:
Identifying factors associated with service infusion success has become an important issue in theory and practice, as manufacturers turn to services to advance performance. The goals of this dissertation are to identify the key factors associated with service infusion success and to develop an integrative framework and associated research propositions to isolate the underlying determinants of successful hybrid solution strategies for business customers. This dissertation is comprised of two phases. The first phase taps into the experience and learning gained by executives from Fortune-100 manufacturing firms who are managing the transition from goods to hybrid offerings for their customers. A discovery-oriented, theory-in-use approach is adopted to glean insights concerning the factors that facilitate and hinder those service transition strategies. Twenty-eight interviews were conducted with key executives, transcripts were analyzed, and key themes were identified, with special attention directed to the particular capabilities that managers consider crucial for successful service-growth strategies. One such capability centers on the ability of a firm to successfully transfer newly developed hybrid solutions from one customer engagement to another. Building on this foundation, phase two involves a case study that provides an in-depth examination of the hybrid offering replication process in a business-to-business firm attempting to replicate four strategic hybrid offerings. Emergent themes, based on 13 manager interviews, reveal factors that promote or impede successful hybrid offering transfer. Among the factors that underlie successful hybrid offering transfers across customer engagements are close customer relationships, a clear value proposition embraced by organizational members, an accurate forecast of market potential, and collaborative working relationships across units. The findings from the field studies provided a catalyst for a deeper examination of existing literature and formed the building blocks for the conceptual model and several key research propositions related to the successful transfer of hybrid offerings. The model isolates five sets of factors that influence the hybrid offering transfer process, including the characteristics of (1) the source project team, (2) the seeking project team, (3) the hybrid offering, (4) the relationship exchange, and (5) the customer. The conceptualization isolates the critical role that the customer assumes in service infusion strategy implementation.
Dissertation/Thesis
Ph.D. Business Administration 2013
APA, Harvard, Vancouver, ISO, and other styles
33

Van, Rensburg Maria Magrietha Janse. "Employee substance abuse in the SAPS : strengthening the collaborative working relationship between first line managers and police social workers by evaluating the Sober Workplace Programme for Managers." Thesis, 2018. http://hdl.handle.net/10500/26465.

Full text
Abstract:
An intoxicated police employee can never keep the community safe and secure, as mandated by law enforcement prescripts. However, limited attention is given to harmful or hazardous substance abuse or the binge drinking habits of police employees. Substance abuse being a ‘culture’ in law enforcement agencies and the maintenance of the blue wall of silence as a protective measure necessitates scientific research to explore how a collaborative working relationship between the occupational social worker and especially First Line Managers (FLMs) can contribute to addressing this phenomenon in a timeous manner. The researcher applied a quantitative research approach and utilised a switching replication quasi-experimental design to determine whether the collaborative working relationship between South African Police Service (SAPS) FLMs and Police Social Workers (PSWs) can be strengthened to the extent that they effectively and efficiently deal with the harmful or hazardous substance abuse or binge drinking habits of SAPS employees by exposing the FLMs to a social work intervention, namely the Sober Workplace Programme for Managers. The pre-, mid-, and posttest measurements are based on knowledge, attitude, and behaviour constructs to determine if the two hypotheses formulated were supported. The study, however, did not indicate that the Sober Workplace Programme for Managers strengthens the collaborative working relationship between the FLMs and PSWs to address the harmful or hazardous substance abuse or binge drinking habits of employees in the workplace. Alternative research and occupational social work strategies are recommended to establish if and how the Sober Workplace Programme for Managers can be implemented to strengthen the collaborative working relationship between the FLMs and PSWs to address the harmful or hazardous substance abuse or binge drinking habits of employees.
Social Work
Ph. D. (Social Work)
APA, Harvard, Vancouver, ISO, and other styles
34

Schulte, Bernd Kubicka Stefan. "Protein transduction domains fused to virus receptors improve cellular virus uptake and enhance oncolysis by tumor-specific replicating vectors /." 2005. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=014977270&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography