Dissertations on the topic "Replication of computing experiment"
Below are the top 50 dissertations relevant to research on the topic "Replication of computing experiment".
Pamplona, Rodrigo Christovam. "Data replication in mobile computing." Thesis, Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-16448.
Júnior, Lourenço Alves Pereira. "Planejamento de experimentos com várias replicações em paralelo em grades computacionais." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-18082010-112815/.
This master's thesis presents a study of Grid Computing and of distributed simulations using the MRIP approach. From this study it was possible to design and implement a prototype tool for managing experiments in Grid environments, called Grid Experiments Manager (GEM). GEM is organized in a modular way, can be used as a standalone program or integrated with other software, and is extensible to various Computational Grid middlewares. Its implementation also made it possible to evaluate the performance of sequential simulations executed on clusters and on a Computational Grid testbed; a benchmark was implemented that allowed the same workload to be repeated on the systems under evaluation. The results showed a large gain in turnaround time: comparing sequential and cluster executions, efficiency was about 197% for short executions and 239% for longer ones; comparing cluster and Grid executions, efficiency was about 98% and 105% for short and long simulations, respectively.
Wu, Huaigu 1975. "Adaptable stateful application server replication." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=115903.
In this thesis we tackle the issue of replicating the application server tier from the ground up and develop a unified solution that provides both fault tolerance and scalability. We first describe a set of execution patterns that capture how requests are typically executed in multi-tier architectures. They consider the flow of execution across the client tier, the application server tier, and the database tier. In particular, the execution patterns describe how requests are associated with transactions, the fundamental execution units at the application server and database tiers. With these execution patterns in mind, we give a formal definition of what it means to provide a correct execution across all tiers, even when failures occur and the application server tier is replicated. Informally, a replicated system is correct if it behaves exactly like a non-replicated system that never fails. From there, we propose a set of replication algorithms for fault tolerance that provide correctness for the execution patterns we have identified. The main principle is to let a primary application server replica execute all client requests and to propagate any state changes performed by a transaction to the backup replicas at transaction commit time. The challenges arise because requests can be associated with transactions in different ways. We then extend our fault-tolerance solution into a unified solution that provides both fault tolerance and load balancing. In this extended solution, each application server replica can execute client requests as a primary while at the same time serving as a backup for other replicas. The framework provides a transparent, truly distributed, and lightweight load-distribution mechanism that takes advantage of the fault-tolerance infrastructure. Our replication tool is implemented as a plug-in for the JBoss application server, and its performance is carefully evaluated against JBoss's own replication solutions.
The evaluation shows that our protocols have very good performance and compare favorably with existing solutions.
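As an editorial illustration of the primary-backup scheme described in the abstract above (a primary executes all client requests and ships the state changes of each transaction to the backups at commit time), here is a minimal Python sketch. All class and method names are invented for this example; they do not come from the thesis or from JBoss.

```python
# Minimal sketch of primary-backup replication with state transfer at
# transaction commit. Names are illustrative, not the thesis implementation.

class Replica:
    def __init__(self, name):
        self.name = name
        self.state = {}          # committed application state

    def apply_changes(self, changes):
        # Backups install the state changes shipped by the primary.
        self.state.update(changes)

class Primary(Replica):
    def __init__(self, name, backups):
        super().__init__(name)
        self.backups = backups

    def execute_transaction(self, requests):
        # Execute all requests of one transaction on a scratch copy...
        pending = {}
        for key, value in requests:
            pending[key] = value
        # ...and propagate the changes to the backups only at commit time,
        # so a failover resumes from a transaction-consistent state.
        for backup in self.backups:
            backup.apply_changes(pending)
        self.state.update(pending)
        return pending

backup = Replica("backup-1")
primary = Primary("primary", [backup])
primary.execute_transaction([("cart:42", ["book"]), ("user:7", "active")])
assert backup.state == primary.state  # backups converge at commit
```

Propagating whole-transaction changes at commit, rather than every intermediate update, is what keeps the backups transaction-consistent in such a scheme.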
Di Maria, Riccardo. "Elastic computing on Cloud resources for the CMS experiment." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8955/.
Yang, Daiyi. "Zoolander: Modeling and managing replication for predictability." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1322595127.
Gilmore, Lance Edwin. "Experimentally Evaluating Statistical Patterns of Offending Typology For Burglary: A Replication Study." Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5371.
Scott, Hanna E. T. "A Balance between Testing and Inspections : An Extended Experiment Replication on Code Verification." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1751.
An experiment replication comparing traditional structural code testing with inspection-meeting preparation using scenario-based code reading. The original experiment was conducted by Per Runeson and Anneliese Andrews at Washington State University in 2003.
Gonçalves, André Miguel Augusto. "Estimating data divergence in cloud computing storage systems." Master's thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/10852.
Many internet services are provided through cloud computing infrastructures composed of multiple data centers. To provide high availability and low latency, data is replicated on machines in different data centers, which introduces the complexity of guaranteeing that clients view data consistently. Data stores often opt for a relaxed approach to replication, guaranteeing only eventual consistency, since this improves the latency of operations. However, it may lead to replicas holding different values for the same data. One solution for controlling the divergence of data in eventually consistent systems is the use of metrics that measure how stale a replica's data is. In the past, several algorithms have been proposed to estimate the value of these metrics in a deterministic way. An alternative is to rely on probabilistic metrics that estimate divergence with a certain degree of certainty; this relaxes the need to contact all replicas while still providing a relatively accurate measurement. In this work we designed and implemented a solution for estimating the divergence of data in eventually consistent data stores that scales to many replicas by allowing client-side caching. Measuring divergence in the presence of a large number of clients calls for new algorithms that provide probabilistic guarantees. Additionally, unlike previous work, we focus on measuring divergence relative to states that can lead to the violation of application invariants.
Partially funded by project PTDC/EIA EIA/108963/2008 and by an ERC Starting Grant, Agreement Number 307732
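The probabilistic approach mentioned in the abstract above can be illustrated with a small sampling sketch: instead of contacting all replicas, poll a random subset and extrapolate. The metric chosen here (the fraction of replicas missing the latest version) and all names are assumptions for illustration, not the dissertation's exact metric.

```python
# Hedged sketch: estimating replica divergence probabilistically by
# sampling a subset of replicas instead of contacting all of them.
import random

def estimate_staleness(replica_versions, latest, sample_size, rng=random):
    """Estimate the fraction of replicas that have not yet seen
    `latest` by polling only `sample_size` randomly chosen replicas."""
    sample = rng.sample(replica_versions, sample_size)
    stale = sum(1 for v in sample if v < latest)
    return stale / sample_size

# 100 replicas, 30 of them still on an old version.
versions = [3] * 70 + [2] * 30
random.seed(1)
est = estimate_staleness(versions, latest=3, sample_size=20)
# `est` approximates the true staleness of 0.30 without contacting
# all 100 replicas; accuracy grows with the sample size.
```

The trade-off is the one the abstract describes: fewer replicas contacted, in exchange for an estimate that holds only with a certain degree of certainty.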
Clay, Lenitra M. "Replication techniques for scalable content distribution in the internet." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/8491.
Karavakis, Edward. "A distributed analysis and monitoring framework for the compact Muon solenoid experiment and a pedestrian simulation." Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/4409.
Soria-Rodriguez, Pedro. "Multicast-Based Interactive-Group Object-Replication For Fault Tolerance." Digital WPI, 1999. https://digitalcommons.wpi.edu/etd-theses/1069.
Sousa, Valter Balegas de. "Key-CRDT stores." Master's thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/7802.
The Internet has opened opportunities to create world-scale services. These systems require high availability and fault tolerance while preserving low latency, and replication is a widely adopted technique for providing these properties. Different replication techniques have been proposed over the years, but to support these properties at world scale it is necessary to trade consistency for availability, fault tolerance, and low latency. In weak consistency models, it is necessary to deal with conflicts arising from concurrent updates; we propose the use of conflict-free replicated data types (CRDTs) to address this issue. Cloud computing systems support world-scale services, often relying on key-value stores for storing data. These systems partition and replicate data over multiple nodes that can be geographically dispersed across the network. For handling conflicts, they either rely on solutions that lose updates (e.g. last-write-wins) or require the application to handle concurrent updates. Additionally, these systems provide little support for transactions, a widely used abstraction for data access. In this dissertation, we present the design and implementation of SwiftCloud, a Key-CRDT store that extends a key-value store by incorporating CRDTs into the system's data model. The system provides automatic conflict resolution relying on the properties of CRDTs. We also present a version of SwiftCloud that supports transactions. Unlike traditional transactional systems, transactions never abort due to write/write conflicts, as the system leverages CRDT properties to merge concurrent transactions. For implementing SwiftCloud, we introduced a set of new techniques, including versioned CRDTs, composition of CRDTs, and alternative serialization methods. The evaluation of the system, with both micro-benchmarks and the TPC-W benchmark, shows that SwiftCloud imposes little overhead over a key-value store. Allowing clients to access a nearby data center with SwiftCloud can reduce latency without requiring any complex reconciliation mechanism, and the experience of adapting an existing application to SwiftCloud has shown that it requires little effort.
Project PTDC/EIA-EIA/108963/2008
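The conflict-free merge property that the abstract above relies on can be shown with a grow-only counter (G-Counter), one of the simplest state-based CRDTs. SwiftCloud's own data types are richer (versioned, composable), so this is a generic textbook sketch, not the dissertation's code.

```python
# A grow-only counter (G-Counter), one of the simplest state-based CRDTs.
# Merge is a join (entry-wise max): commutative, associative, idempotent,
# so concurrent updates never conflict regardless of delivery order.

class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}                 # replica_id -> local increments

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Entry-wise maximum over per-replica counts.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

a, b = GCounter("dc-a"), GCounter("dc-b")
a.increment(2)      # concurrent updates at two data centers
b.increment(3)
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 5   # replicas converge, no update lost
```

Because each replica only increments its own slot and merge takes maxima, no update is ever lost (unlike last-write-wins), which is exactly the property that lets transactions in such a design merge rather than abort.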
Weber, Deisi Luana Diel. "Sourcing decision: a behavioral perspective, a replication of David Hall thesis." Universidade do Vale do Rio dos Sinos, 2015. http://www.repositorio.jesuita.org.br/handle/UNISINOS/5224.
UNISINOS - Universidade do Vale do Rio dos Sinos
This research investigates the make-or-buy decision-making process, seeking to understand which variables most influence the decision to insource some activities, to outsource others, or to combine both in some proportion. The dependent variable is the behavioral decision-making process, measured through the influence of cost, quality, and monitoring; we examine whether differences between these independent variables influence how managers decide between insourcing and outsourcing production. To test this model empirically, an experiment was conducted on the basis of eight scenarios that simulate a purchasing decision, varying supplier cost, quality, and monitoring between high and low, in order to understand the relationship of these constructs with the decision-making process of Brazilian managers. It was performed with a sample of 211 students from the Production Engineering course at Universidade do Vale do Rio dos Sinos (Unisinos), and the data were analyzed using ANOVA. The results show that managers consider cost variation when deciding how much to internalize and how much to outsource. They change their choices when supplier quality is higher than in-house quality, and they evaluate managerial capability to control costs both at suppliers and inside the company. However, they do not change their sourcing decision in response to variation in supplier monitoring, nor when quality monitoring is considered. This issue was already addressed in Hall's study (2012), conducted in the United States; we replicated that study in Brazil to check whether, in a different economic, political, social, and regulatory environment, managers would change their decisions. Nevertheless, after comparing both studies, we found that the same hypotheses were supported in both, meaning that even in another context managers base their sourcing decisions on the same variables.
Shu, Jiang. "An Experiment Management Component for the WBCSim Problem Solving Environment." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/36448.
Master of Science
Shu, Jiang. "Experiment Management for the Problem Solving Environment WBCSim." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/28713.
Ph. D.
Yoshida, Sara J. M. "The replication of depressed, localized skull fractures, an experiment using Sus domesticus as a model for human forensic trauma." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape2/PQDD_0025/MQ51516.pdf.
Diotalevi, Tommaso. "Investigation of petabyte-scale data transfer performances with PhEDEx for the CMS experiment." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/9416/.
Kurt, Mehmet Can. "Fault-tolerant Programming Models and Computing Frameworks." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1437390499.
Howes, William A. "On-Orbit FPGA SEU Mitigation and Measurement Experiments on the Cibola Flight Experiment Satellite." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2474.
Lambert, Thomas. "On the Effect of Replication of Input Files on the Efficiency and the Robustness of a Set of Computations." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0656/document.
The increasing importance of High Performance Computing (HPC) and Big Data applications creates new issues in parallel computing. One of them is communication, the data transferred from one processor to another. Such data movements affect computation time, inducing delays and increasing energy consumption. While replication, of either tasks or files, generates communication, it is also an important tool for improving resiliency and parallelism. In this thesis, we focus on the impact of the replication of input files on the overall amount of communication, concentrating on two practical problems. The first is parallel matrix multiplication, where the goal is to induce as few replications as possible in order to decrease the amount of communication. The second is the scheduling of the "Map" phase in the MapReduce framework, where replication is an input of the problem and the goal is to use it in the best possible way. In addition to the replication issue, this thesis also compares static and dynamic approaches to scheduling: static approaches compute schedules before the computation starts, while dynamic approaches compute them during the computation itself. We design hybrid strategies that take advantage of both. First, we relate communication-avoiding matrix multiplication to a square partitioning problem in which load balancing is given as an input: the goal is to split a square into zones (whose areas depend on the relative speed of the resources) while minimizing the sum of their half-perimeters. We improve the existing results in the literature for this problem with two additional approximation algorithms, and we also propose an alternative model based on a cube partitioning problem.
We prove the NP-completeness of the associated decision problem and design two approximation algorithms. Finally, we implement the algorithms for both problems, relying on the StarPU library, in order to compare the resulting schedules for matrix multiplication. Second, in the Map phase of MapReduce, the input files are replicated and distributed among the processors. For this problem we propose two metrics. In the first, we forbid non-local tasks (tasks processed on a processor that does not hold their input files) and, under this constraint, aim to minimize the makespan. In the second, we allow non-local tasks and aim to minimize their number while also minimizing the makespan. For the theoretical study, we focus on tasks with homogeneous computation times. First, we relate a greedy algorithm for the makespan metric to a "balls-into-bins" process, proving that this algorithm produces solutions with expected overhead (the difference between the number of tasks on the most loaded processor and the number of tasks in a perfect distribution) equal to O(m log m), where m denotes the number of processors. Second, we relate this scheduling problem (with non-local tasks forbidden) to a graph orientation problem and thus prove, using results from the literature, that with high probability there exists a near-perfect assignment (whose overhead is at most 1), and that polynomial-time optimal algorithms exist. For the communication metric, we provide new algorithms based on a graph model close to matching problems in bipartite graphs, and we prove that these algorithms are optimal for both the communication and makespan metrics. Finally, we run simulations based on traces from a MapReduce cluster to test our strategies in realistic settings, showing that the proposed algorithms perform very well when the variance of the computation times of the tasks of a job is low or medium.
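The greedy, locality-preserving assignment discussed in the abstract above can be sketched in a few lines: each task may run only on a processor that holds a replica of its input file, and the greedy rule sends it to the least loaded of those. This is a generic illustration of the balls-into-bins flavor of the problem, with invented parameters, not the thesis's algorithms.

```python
# Greedy locality-preserving assignment: no non-local tasks allowed.
# "Overhead" is the gap between the most loaded processor and a
# perfectly balanced distribution.
import math
import random

def greedy_assign(task_replicas, num_procs):
    load = [0] * num_procs
    for owners in task_replicas:
        # Only processors owning a replica of the input file are eligible.
        target = min(owners, key=lambda p: load[p])
        load[target] += 1
    return load

def overhead(load, num_tasks):
    return max(load) - math.ceil(num_tasks / len(load))

random.seed(0)
m, n, r = 10, 200, 3          # processors, tasks, replication factor
tasks = [random.sample(range(m), r) for _ in range(n)]
load = greedy_assign(tasks, m)
assert sum(load) == n         # every task placed on a replica owner
```

With two or more randomly placed replicas per file, the least-loaded-owner rule typically keeps the overhead small, which is the intuition behind the balls-into-bins analysis the abstract refers to.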
Soares, João Paulo da Conceição. "FEW phone file system." Master's thesis, FCT - UNL, 2009. http://hdl.handle.net/10362/2229.
The evolution of mobile phones has made these devices more than simple mobile communication devices. Current mobile phones include features such as built-in digital cameras, the ability to play and record multimedia content, and the possibility of playing games. Most of these devices support Java applications as well as multiple wireless technologies (e.g. GSM/GPRS, UMTS, Bluetooth, and Wi-Fi). All these features have been made possible by the technological evolution that improved the computational power, storage capacity, and communication capabilities of these devices. This thesis presents a distributed data management system based on optimistic replication, named FEW Phone File System. The system takes advantage of the storage capacity and wireless communication capabilities of current mobile phones by allowing users to carry their personal data "in" their mobile phones and to access it on any workstation as if it were files in the local file system. The FEW Phone File System is based on a hybrid architecture that merges the client/server model with peer-to-peer replication and relies on periodic reconciliation to maintain consistency between replicas. The system's server side runs on the mobile phone and the client on a workstation. Communication between client and server can be carried over one of multiple network technologies, allowing the FEW Phone File System to adapt dynamically to the available connectivity. The system addresses the mobile phone's storage and power limitations by allowing multimedia content to be adapted to the device's specifications, reducing the volume of data transferred to the phone and allowing more user data to be stored. The FEW Phone File System also integrates mechanisms that maintain information about the existence of other copies of the stored files (e.g. on the WWW), avoiding transfers from the mobile device whenever accessing those copies is advantageous. Given the increasing number of on-line storage resources (e.g. CVS/SVN, Picasa), this approach allows those resources to be used by the FEW Phone File System to obtain stored copies of the user's files.
Hirai, Tsuguhito. "Performance Modeling of Large-Scale Parallel-Distributed Processing for Cloud Environment." Kyoto University, 2018. http://hdl.handle.net/2433/232493.
Gomes, Diego da Silva. "JavaRMS : um sistema de gerência de dados para grades baseado num modelo par-a-par." Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/15533.
Large-scale execution environments such as Grids emerged to meet the demands of high-performance computing. As on other execution platforms, their users need to feed input data to their applications and to store the results. Although the Grid term is a metaphor in which computing resources are as easily accessible as those of the electric grid, its data and resource management tools are not mature enough to make this idea a reality. They usually target high-performance resources, where data reliability, availability, and security are assured through human presence; this becomes critical when scientific applications need to process huge amounts of data. This work presents JavaRMS, a Grid data management system. Using a peer-to-peer model, it aggregates low-capacity resources to reduce storage costs. Resource heterogeneity is handled with the virtual node technique, in which peers receive data in proportion to the storage space they provide. It applies fragmentation to make the use of low-capacity resources feasible and to improve the performance of file transfer operations, and it achieves data persistence and availability through replication. To decrease the impact of maintenance operations, JavaRMS copes with resource dynamicity and instability through a state model. The architecture also contains user management services and protects resources through a quota system. All operations are designed to be secure. Finally, it provides the infrastructure needed for the later deployment of search services and interactive user tools. Experiments with the JavaRMS prototype showed that using a peer-to-peer model for resource organization and data location results in good scalability, and that the virtual node technique provides efficient heterogeneity-aware data distribution. Tests with the main file transfer operation showed that the model can significantly improve the performance of data-intensive applications.
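The virtual node technique mentioned in the abstract above can be illustrated with a generic hash-ring sketch: a peer offering k units of storage is given k times as many virtual nodes on the ring, so on average it receives a share of the data proportional to its capacity. The hash choice, ring layout, and peer names are assumptions for illustration, not JavaRMS internals.

```python
# Illustrative weighted data placement via virtual nodes on a hash ring.
import hashlib
from bisect import bisect

def h(s):
    # Deterministic position on the ring derived from a SHA-256 digest.
    return int(hashlib.sha256(s.encode()).hexdigest(), 16)

def build_ring(capacities, vnodes_per_unit=50):
    ring = []                                  # (position, peer)
    for peer, cap in capacities.items():
        for i in range(cap * vnodes_per_unit): # capacity-proportional vnodes
            ring.append((h(f"{peer}#{i}"), peer))
    ring.sort()
    return ring

def owner(ring, key):
    # The key belongs to the first virtual node at or after its position,
    # wrapping around the ring.
    positions = [pos for pos, _ in ring]
    idx = bisect(positions, h(key)) % len(ring)
    return ring[idx][1]

ring = build_ring({"small-peer": 1, "big-peer": 4})
share = sum(owner(ring, f"file-{i}") == "big-peer" for i in range(1000))
# `big-peer`, with 4x the capacity, owns a share of keys close to its
# 4/5 fraction of the virtual nodes.
```

More virtual nodes per capacity unit tighten the match between a peer's capacity fraction and its actual data share, at the cost of a larger ring.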
Fiala, Jan. "DNA výpočty a jejich aplikace." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-412902.
Sousa, Flávio Rubens de Carvalho. "RepliC: Replicação Elástica de Banco de Dados Multi-Inquilino em Nuvem com Qualidade de Serviço." Universidade Federal do Ceará, 2013. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=9121.
Economic factors are driving the growth of infrastructures and facilities that deliver computing as a service, known as Cloud Computing, where companies and individuals can rent computing and storage capacity instead of making the large capital investments required to build and install large-scale computing equipment. In the cloud, the service user has certain guarantees, such as performance and availability. These quality-of-service (QoS) guarantees are defined between the service provider and the user and are expressed through a service-level agreement (SLA), a contract that specifies the quality level to be met and the penalties in case of failure. Many companies depend on an SLA and expect cloud providers to offer SLAs based on performance characteristics; in practice, however, providers base their SLAs only on the availability of the offered services. Database management systems (DBMSs) for cloud computing must handle a large number of applications, or tenants. Multi-tenant approaches have been used to host several tenants within a single DBMS, favoring efficient resource sharing and making it possible to manage many tenants with irregular workload patterns. At the same time, cloud providers must reduce operational costs while guaranteeing quality. In this context, a key feature is database replication, which improves availability, performance and, consequently, quality of service. Data replication techniques have been used to improve availability, performance, and scalability in many environments. However, most database replication strategies have concentrated on scalability and consistency with a static number of replicas, while elasticity for multi-tenant databases has received little attention. These issues matter in cloud environments, since providers must add replicas as the workload grows to avoid SLA violations, and must remove replicas and consolidate tenants when the workload decreases. To address this problem, this work presents RepliC, an approach to database replication in the cloud focused on quality of service, elasticity, and efficient resource usage through multi-tenant techniques. RepliC uses information from the DBMSs and from the provider to provision resources dynamically. To evaluate RepliC, experiments measuring quality of service and elasticity are presented; their results confirm that RepliC guarantees quality with a small amount of SLA violation while using resources efficiently.
Sirvent, Pardell Raül. "GRID superscalar: a programming model for the Grid." Doctoral thesis, Universitat Politècnica de Catalunya, 2009. http://hdl.handle.net/10803/6015.
Повний текст джерела
In recent years, the Grid has emerged as a new platform for distributed computing. Grid technology allows joining different resources from different administrative domains and forming a virtual supercomputer with all of them.
Many research groups have dedicated their efforts to developing a set of basic services to offer as Grid middleware: a layer that enables the use of the Grid. Still, using these services is not an easy task for many end users, even more so if their expertise is unrelated to computer science. This has a negative influence on the adoption of Grid technology by the scientific community, which sees it as a powerful technology but one very difficult to exploit. To ease the use of the Grid, an extra layer is needed that hides all of its complexity and allows users to program or port their applications in an easy way.
There have been many proposals of programming tools for the Grid. In this thesis we give an overview of some of them, and we can see that there exist both Grid-aware and Grid-unaware environments (programmed with or without specifying details of the Grid, respectively). Besides, very few existing tools can exploit the implicit parallelism of the application; in the majority of them, the user must define the parallelism explicitly. Another important feature we consider is whether they are based on widely used programming languages (such as C++ or Java), so that adoption is easier for end users.
In this thesis, our main objective has been to create a programming model for the Grid based on sequential programming and well-known imperative programming languages, able to exploit the implicit parallelism of applications and to speed them up by using Grid resources concurrently. Moreover, because the Grid is distributed, heterogeneous and dynamic in nature, and because the number of resources that form a Grid can be very large, the probability that an error arises during an application's execution is high. Thus, another of our objectives has been to deal automatically with any type of error that may arise during the execution of the application (whether application-related or Grid-related).
GRID superscalar (GRIDSs), the main contribution of this thesis, is a programming model that achieves the objectives mentioned above by providing a very small and simple interface and a runtime that is able to execute the provided code in parallel using the Grid. Our programming interface allows a user to program a Grid-unaware application with well-known and popular imperative languages (such as C/C++, Java, Perl or Shell script) in a sequential fashion, thus taking an important step toward helping end users adopt Grid technology.
We have applied our knowledge of computer architecture and microprocessor design to the GRIDSs runtime. As in a superscalar processor, the GRIDSs runtime system is able to perform a data-dependence analysis between the tasks that form an application, and to apply renaming techniques in order to increase its parallelism. GRIDSs automatically generates, from the user's main code, a graph describing the data dependencies in the application.
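The superscalar-style dependence analysis and renaming described above can be sketched in a few lines. This is an illustrative sketch only, assuming each task declares the files it reads and writes; the `(name, reads, writes)` task format and function names are invented here, not part of GRIDSs, which infers file accesses from the annotated user code.

```python
def find_dependencies(tasks):
    """Return RAW edges (i, j): task j reads a file that task i wrote."""
    edges = []
    for i, (_, _, writes_i) in enumerate(tasks):
        for j in range(i + 1, len(tasks)):
            _, reads_j, _ = tasks[j]
            if writes_i & reads_j:          # read-after-write: true dependency
                edges.append((i, j))
    return edges

def rename_outputs(tasks):
    """Eliminate WAW/WAR hazards by giving each written file a fresh version,
    just as register renaming does in a superscalar processor."""
    version = {}
    renamed = []
    for name, reads, writes in tasks:
        reads = {f + ".v%d" % version.get(f, 0) for f in reads}
        new_writes = set()
        for f in writes:
            version[f] = version.get(f, 0) + 1
            new_writes.add(f + ".v%d" % version[f])
        renamed.append((name, reads, new_writes))
    return renamed

# tasks: (name, files read, files written); t2 rewrites "a" but, after
# renaming, no longer conflicts with t0/t1 and can run in parallel.
tasks = [("t0", set(), {"a"}), ("t1", {"a"}, {"b"}), ("t2", set(), {"a"})]
deps = find_dependencies(tasks)   # only the true dependency t0 -> t1 remains
```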
We present real use cases of the programming model in the fields of computational chemistry and bioinformatics, which demonstrate that our objectives have been achieved.
Finally, we have studied the application of several fault detection and treatment techniques: checkpointing, task retry and task replication. Our proposal is to provide an environment able to deal with all types of failures, transparently to the user whenever possible. The main advantage of implementing these mechanisms at the programming-model level is that application-level knowledge can be exploited to dynamically create a fault-tolerance strategy for each application, avoiding overhead in error-free environments.
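The checkpoint-plus-retry combination can be illustrated with a minimal sketch: completed tasks are recorded, so after a failure only unfinished tasks are re-executed, and a transient error triggers a bounded retry. Everything here (the workflow format, `run_workflow`, the in-memory checkpoint set, the retry limit) is a hypothetical stand-in for the mechanisms the thesis studies.

```python
def run_workflow(tasks, checkpoint, max_retries=3):
    """tasks: list of (name, callable); checkpoint: set of finished names."""
    for name, work in tasks:
        if name in checkpoint:          # skip work already done before a crash
            continue
        for attempt in range(max_retries):
            try:
                work()
                checkpoint.add(name)    # persist progress (here: in memory)
                break
            except RuntimeError:
                if attempt == max_retries - 1:
                    raise               # give up after the last retry

attempts = {"flaky": 0}
def flaky():
    attempts["flaky"] += 1
    if attempts["flaky"] < 2:           # fail once, then succeed
        raise RuntimeError("transient Grid error")

done = set()
run_workflow([("stable", lambda: None), ("flaky", flaky)], done)
```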
Xavier, Rafael Silveira. "Replicadores computacionais: propriedades básicas e modelagens preliminares." Universidade Presbiteriana Mackenzie, 2010. http://tede.mackenzie.br/jspui/handle/tede/1391.
Повний текст джерелаFundo Mackenzie de Pesquisa
Molecular replication was introduced as a possible theory to explain the origin of life. Since its proposal, it has been extensively studied from a biochemical perspective. Based on the literature on molecular replicators, this dissertation organizes and proposes a taxonomy for the main properties of replicators that are relevant for building computational tools to solve complex problems in engineering and computing, and introduces two computational models of these entities in order to observe and analyze their behavior in light of the properties of natural replicators discussed in the dissertation.
Marletto, Chiara. "Issues of control and causation in quantum information theory." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:dba641e6-feb3-44df-968f-1b9a6564e836.
Повний текст джерелаOberst, Oliver [Verfasser], and G. [Akademischer Betreuer] Quast. "Development of a Virtualized Computing Environment for LHC Analyses on Standard Batch Systems and Measurement of the Inclusive Jet Cross-Section with the CMS experiment at 7 TeV / Oliver Oberst. Betreuer: G. Quast." Karlsruhe : KIT-Bibliothek, 2011. http://d-nb.info/1014279895/34.
Повний текст джерелаAhmed-Nacer, Mehdi. "Méthodologie d'évaluation pour les types de données répliqués." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0039/document.
Повний текст джерелаTo provide a high availability from any where, at any time, with low latency, data is optimistically replicated. This model allows any replica to apply updates locally, while the operations are later sent to all the others. In this way, all replicas eventually apply all updates, possibly even in different order. Optimistic replication algorithms are responsible for managing the concurrent modifications and ensure the consistency of the shared object. In this thesis, we present an evaluation methodology for optimistic replication algorithms. The context of our study is collaborative editing. We designed a tool that implements our methodology. This tool integrates a mechanism to generate a corpus and a simulator to simulate sessions of collaborative editing. Through this tool, we made several experiments on two different corpus: synchronous and asynchronous. In synchronous collaboration, we evaluate the performance of optimistic replication algorithms following several criteria such as execution time, memory occupation, message's size, etc. After analysis, some improvements were proposed. In addition, in asynchronous collaboration, when replicas synchronize their modifications, more conflicts can appear in the document. In this case, the system cannot merge the modifications until a user resolves them. In order to reduce the conflicts and the user's effort, we propose an evaluation metric and we evaluate the different algorithms on this metric. Afterward, we analyze the quality of the merge to understand the behavior of the users and the collaboration cases that create conflicts. Then, we propose algorithms for resolving the most important conflicts, therefore reducing the user's effort. Finally, we propose a new architecture for supporting cloud-based collaborative editing system. This architecture is based on two optimistic replication algorithms. 
Unlike current architectures, the proposed one removes the problems of the centralization and consensus between data centers, is simple and accessible for any developers
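The optimistic-replication model this thesis evaluates, where replicas apply updates locally and later exchange them, can be illustrated with a deliberately simple example. The set-union merge below is just one easy way to guarantee convergence (it is commutative, associative and idempotent); it is not one of the collaborative-editing algorithms benchmarked in the thesis.

```python
class Replica:
    def __init__(self):
        self.state = set()

    def local_update(self, operation):  # applied immediately, no coordination
        self.state.add(operation)

    def merge(self, other):             # later, anti-entropy exchange
        self.state |= other.state

r1, r2 = Replica(), Replica()
r1.local_update("insert 'a' at 0")
r2.local_update("insert 'b' at 0")      # concurrent edit on another replica
r1.merge(r2)                            # exchanges happen in any order...
r2.merge(r1)
converged = r1.state == r2.state        # ...yet replicas reach the same state
```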
Tos, Uras. "Réplication de données dans les systèmes de gestion de données à grande échelle." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30066/document.
Повний текст джерелаIn recent years, growing popularity of large-scale applications, e.g. scientific experiments, Internet of things and social networking, led to generation of large volumes of data. The management of this data presents a significant challenge as the data is heterogeneous and distributed on a large scale. In traditional systems including distributed and parallel systems, peer-to-peer systems and grid systems, meeting objectives such as achieving acceptable performance while ensuring good availability of data are major challenges for service providers, especially when the data is distributed around the world. In this context, data replication, as a well-known technique, allows: (i) increased data availability, (ii) reduced data access costs, and (iii) improved fault-tolerance. However, replicating data on all nodes is an unrealistic solution as it generates significant bandwidth consumption in addition to exhausting limited storage space. Defining good replication strategies is a solution to these problems. The data replication strategies that have been proposed for the traditional systems mentioned above are intended to improve performance for the user. They are difficult to adapt to cloud systems. Indeed, cloud providers aim to generate a profit in addition to meeting tenant requirements. Meeting the performance expectations of the tenants without sacrificing the provider's profit, as well as managing resource elasticities with a pay-as-you-go pricing model, are the fundamentals of cloud systems. In this thesis, we propose a data replication strategy that satisfies the requirements of the tenant, such as performance, while guaranteeing the economic profit of the provider. Based on a cost model, we estimate the response time required to execute a distributed database query. Data replication is only considered if, for any query, the estimated response time exceeds a threshold previously set in the contract between the provider and the tenant. 
Then, the planned replication must also be economically beneficial to the provider. In this context, we propose an economic model that takes into account both the expenditures and the revenues of the provider during the execution of any particular database query. Once data replication is decided upon, a heuristic placement approach is used to find placements for the new replicas that reduce access time. In addition, the number of replicas is adjusted dynamically to allow elastic management of resources. The proposed strategy is validated in an experimental evaluation carried out in a simulation environment. Compared with another data replication strategy proposed for cloud systems, the analysis of the obtained results shows that both strategies meet the tenant's performance objective. Nevertheless, with our strategy, a data replica is created only if the replication is profitable for the provider.
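The two-step decision described above, replicate only when the SLA is violated and the replication is profitable for the provider, can be sketched with toy numbers. The linear transfer-plus-compute response-time estimate and all parameter values are illustrative assumptions, not the thesis's cost or economic model.

```python
def estimate_response_time(data_mb, bandwidth_mbps, cpu_time_s):
    """Crude estimate: transfer time plus compute time for one query."""
    return data_mb / bandwidth_mbps + cpu_time_s

def should_replicate(resp_time_s, sla_threshold_s, revenue, expenditure):
    """Replicate only if the SLA is violated AND the provider still profits."""
    sla_violated = resp_time_s > sla_threshold_s
    profitable = revenue > expenditure
    return sla_violated and profitable

rt = estimate_response_time(data_mb=800, bandwidth_mbps=100, cpu_time_s=2.0)
decision = should_replicate(rt, sla_threshold_s=5.0,
                            revenue=12.0, expenditure=7.5)
```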
Miller, Jean Anne. "Naturalism & Objectivity: Methods and Meta-methods." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/28329.
Повний текст джерелаPh. D.
Oliveira, Ricardo Ramos de. "Avaliação da portabilidade entre fornecedores de teste como serviço na computação em nuvem." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-16072018-170853/.
Повний текст джерелаThe automation of software testing involves high costs in large-scale systems, since it requires complex test scenarios and extremely long execution times. Moreover, each of its steps demands computational resources and considerable time for running many test cases, which makes it a bottleneck for Information Technology (IT) companies. The benefits and opportunities offered by the combination of cloud computing and Testing as a Service (TaaS), considered a new business and service model, can reduce the execution time of tests in a cost-effective way and improve Return on Investment (ROI). However, the lock-in problem, i.e., the imprisonment of the user in the platform of a specific vendor or test service caused by the difficult migration from one TaaS provider to another limits the effective use of such new technologies and prevents the widespread adoption of TaaS. As studies conducted are neither rigorous, nor conclusive, and mainly due to the lack of empirical evidence, many issues must be investigated from the perspective of migration among TaaS providers. This research aims at reductions in the impact of the vendor lock-in problem on the automation process of testing in cloud computing, writing, configuration, execution and management of automated test results. The prototype of the Multi- TaaS approach was developed through a Java library as a proof of concept. The Multi-TaaS approach is an abstraction layer and its architecture enables the abstraction and flexibilization of the exchange of TaaS providers in a portable way, once the complexity of the software engineers implementation can be encapsulated. The two main advantages of Multi-TaaS are the decoupling of the automated test from the TaaS platform on which it will be executed and the abstraction of the communication and integration aspects among the proprietary REST APIs of the different TaaS providers. 
The approach also enables the summarization of automated test results independently of the underlying TaaS platform technologies. A comparative evaluation between Multi-TaaS and conventional migration approaches regarding the difficulty, efficiency, effectiveness and effort of migration among TaaS providers was conducted through controlled experiments.The results show the approach facilitates the exchange of test service, improves efficiency and reduces the effort and maintenance costs of migration among TaaS providers. The studies conducted in the controlled experiment are promising and can assist software engineers in decision-making regarding the risks associated with vendor lock-in in TaaS. The Multi-TaaS approach contributes mainly to the portability of automated tests in the cloud and summarization of their results. Finally, this research enables also the widespread adoption of the TaaS service model in cloud computing, consciously, in the future.
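The abstraction-layer idea behind Multi-TaaS can be sketched as an adapter pattern, shown here in Python rather than the Java of the actual prototype. The provider classes, their methods and their result shapes are hypothetical placeholders, not the real Multi-TaaS or vendor APIs; the point is that the caller sees one neutral interface while each adapter hides a provider-specific result format.

```python
class TaaSProvider:                     # neutral interface seen by test code
    def run_suite(self, suite):
        raise NotImplementedError
    def summarize(self, raw):
        raise NotImplementedError

class VendorAAdapter(TaaSProvider):
    def run_suite(self, suite):         # would call vendor A's REST API
        return {"passed": len(suite) - 1, "failed": 1}
    def summarize(self, raw):
        return "%d passed, %d failed" % (raw["passed"], raw["failed"])

class VendorBAdapter(TaaSProvider):
    def run_suite(self, suite):         # vendor B returns a different shape
        return [(t, t != "t3") for t in suite]
    def summarize(self, raw):
        passed = sum(1 for _, ok in raw if ok)
        return "%d passed, %d failed" % (passed, len(raw) - passed)

def run_portably(provider, suite):      # caller never sees vendor details
    return provider.summarize(provider.run_suite(suite))

suite = ["t1", "t2", "t3"]
a = run_portably(VendorAAdapter(), suite)
b = run_portably(VendorBAdapter(), suite)   # same summary, different backend
```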
Karim, Yacin. "Vers une vérification expérimentale de la théorie de la relativité restreinte : réplication des expériences de Charles-Eugène Guye (1907-1921)." Phd thesis, Université Claude Bernard - Lyon I, 2011. http://tel.archives-ouvertes.fr/tel-00839315.
Повний текст джерелаCarvalho, Roberto Pires de. "Sistemas de arquivos paralelos: alternativas para a redução do gargalo no acesso ao sistema de arquivos." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-23052006-182520/.
Повний текст джерелаIn the last years, the evolution of the data processing power and network transmission for low cost computers was much bigger if compared to the increase of the speed of getting the data stored in disks. Therefore, many applications are finding difficulties in reaching the full use of the processors, because they have to wait until the data arrive before using. A popular way to solve this problem is to use a parallel file system, which uses the local network speed to avoid the performance bottleneck found in an isolated disk. In this study, we analyze some parallel and distributed file systems, detailing the most interesting and important ones. Finally, we show the use of a parallel file system can be more efficient than the use of a usual local file system, for just one client.
Koelemeijer, Dorien. "The Design and Evaluation of Ambient Displays in a Hospital Environment." Thesis, Malmö högskola, Fakulteten för kultur och samhälle (KS), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23601.
Повний текст джерелаДиденко, Дмитрий Георгиевич. "Мультиагентная система дискретно-событийного имитационного моделирования OpenGPSS". Doctoral thesis, 2010. https://ela.kpi.ua/handle/123456789/1062.
Повний текст джерелаJane-Ferng, Chiu, and 邱展逢. "Process-Replication Technique in Distributed Computing Systems." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/58116771130746686457.
Повний текст джерела國立臺灣科技大學
工程技術研究所
82
The paper presents a process-replication protocol that aims to provide fault tolerance as well as performance improvement for applications such as long-running and real-time tasks. An identical delivery order of messages is enforced on all replicas of a troupe using multicasts for inter- and intra-troupe communication. A detailed design of the protocol is given in the paper. The protocol is self-contained in the sense that crashes in a troupe are handled internally without affecting the operation of other troupes. The crash-handling procedure is simple, and the associated overhead during failure-free operation is small. The protocol takes advantage of the redundancy of processes to expedite the completion of a distributed task by speeding up the determination of message sequences and the transmission of outgoing data messages, at the expense of small control messages. Simulation is carried out to show the performance improvement.
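The key invariant of such a protocol, that every replica of a troupe delivers messages in one identical order, can be illustrated with a simple sequencer that stamps each message with a global sequence number. Note this is an analogy only: the protocol itself negotiates the order via multicast rather than a central sequencer, and all names below are illustrative.

```python
class Sequencer:
    def __init__(self):
        self.next_seq = 0

    def assign(self, msg):              # stamp each message with a global order
        seq = self.next_seq
        self.next_seq += 1
        return (seq, msg)

class ReplicaProcess:
    def __init__(self):
        self.pending = {}               # seq -> msg, possibly out of order
        self.delivered = []
        self.expect = 0

    def receive(self, seq, msg):        # the network may reorder messages...
        self.pending[seq] = msg
        while self.expect in self.pending:   # ...but delivery follows seq order
            self.delivered.append(self.pending.pop(self.expect))
            self.expect += 1

seqr = Sequencer()
stamped = [seqr.assign(m) for m in ["m0", "m1", "m2"]]
r1, r2 = ReplicaProcess(), ReplicaProcess()
for s in stamped:
    r1.receive(*s)
for s in reversed(stamped):             # second replica sees reversed arrival
    r2.receive(*s)
identical = r1.delivered == r2.delivered == ["m0", "m1", "m2"]
```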
"Transaction replication in mobile environments." Chinese University of Hong Kong, 1996. http://library.cuhk.edu.hk/record=b5888779.
Повний текст джерелаThesis (M.Phil.)--Chinese University of Hong Kong, 1996.
Includes bibliographical references (leaves 99-102).
Abstract --- p.ii
Acknowledgements --- p.iv
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Limitations of the Mobile Computing Environments --- p.2
Chapter 1.2 --- Applications of Transaction Replication in Mobile Environments --- p.5
Chapter 1.3 --- Motivation for Transaction Replication in Mobile Environments --- p.5
Chapter 1.4 --- Major Simulation Results --- p.6
Chapter 1.5 --- Roadmap to the Thesis --- p.7
Chapter 2 --- Previous and Related Research --- p.8
Chapter 2.1 --- File Systems --- p.8
Chapter 2.1.1 --- Management of Replicated Files --- p.8
Chapter 2.1.2 --- Disconnected Operations --- p.10
Chapter 2.2 --- Database Management --- p.12
Chapter 2.2.1 --- Data Replication Schemes --- p.12
Chapter 2.2.2 --- Cache Invalidation and Query Processing --- p.15
Chapter 2.2.3 --- Transaction Management in Mobile Environments --- p.17
Chapter 3 --- System Model and Assumptions --- p.21
Chapter 3.1 --- System Architecture --- p.21
Chapter 3.2 --- Transaction and Data Model --- p.23
Chapter 3.3 --- One-copy Serializability --- p.25
Chapter 3.4 --- Assumptions --- p.27
Chapter 4 --- Transaction Replication in a Mobile Environment --- p.29
Chapter 4.1 --- Read-only Public Transactions --- p.30
Chapter 4.1.1 --- Data Broadcasting --- p.31
Chapter 4.1.2 --- Cache Update --- p.33
Chapter 4.1.3 --- Cache Miss --- p.36
Chapter 4.1.4 --- Execution of Read-only Public Transactions --- p.37
Chapter 4.2 --- R/W Public Transactions --- p.39
Chapter 4.3 --- Correctness Argument --- p.41
Chapter 4.3.1 --- Correctness Proof --- p.43
Chapter 4.4 --- Extension to Support Partition Failures --- p.47
Chapter 5 --- Design and Implementation of the Simulation --- p.49
Chapter 5.1 --- CSIM Language --- p.49
Chapter 5.2 --- Simulation Components --- p.50
Chapter 5.2.1 --- Fixed Network --- p.50
Chapter 5.2.2 --- Mobile Host --- p.50
Chapter 5.2.3 --- Wireless Channel --- p.51
Chapter 5.2.4 --- Database and Transactions --- p.52
Chapter 5.3 --- A Lock-based Scheme --- p.53
Chapter 5.4 --- Graphing --- p.54
Chapter 6 --- Results and Analysis --- p.55
Chapter 6.1 --- Results Dissection --- p.55
Chapter 6.2 --- Performance of the Scheme --- p.56
Chapter 6.2.1 --- Parameters Setting --- p.56
Chapter 6.2.2 --- Experiments and Results --- p.59
Chapter 6.3 --- Comparison with the Lock-based Scheme --- p.78
Chapter 6.3.1 --- Parameters Setting --- p.79
Chapter 6.3.2 --- Experiments and Results --- p.80
Chapter 7 --- Conclusions and Future Work --- p.93
Chapter 7.1 --- Conclusions --- p.93
Chapter 7.2 --- Future Work --- p.94
Chapter A --- Implementation Details --- p.96
Bibliography --- p.99
徐敏原. "Information Criterion for the analysis of Factorial Experiment without Replication." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/59916709629835899999.
Повний текст джерела"Mental contamination: a replication and extension of the "dirty kiss" experiment." Thesis, 2010. http://library.cuhk.edu.hk/record=b6075007.
Повний текст джерелаMental contamination, an important phenomenon in OCD, refers to a sense of dirtiness without any contact with objectively dirty contaminant. However, the concept of mental contamination has not been thoroughly researched and there is an impending need for a psychological model to explain the phenomenon.
Objectives: The overall goal of this study is to enhance our understanding of mental contamination. Based on an experimental paradigm developed by Fairbrother, Newth, and Rachman (2005), three experiments were designed. The first experiment aims at replicating the results of the original study in local Chinese women. The second examines the relationship between contact and mental contamination. The third investigates the presence of mental contamination in persons experiencing betrayal.
Method: Three separate pools of adult female participants were recruited, one for each experiment, and participants were assessed on questionnaires after imagining a non-consensual kiss or a betrayal. In Experiment 1, 72 participants were randomly assigned to either a consensual-kiss or a non-consensual-kiss condition. In Experiment 2, 122 participants were randomly assigned to one of four conditions. In Experiment 3, 64 participants were randomly assigned to either a non-betrayal or a betrayal condition.
Results: In Experiment 1, an imagined non-consensual kiss reproduced feelings of dirtiness, the urge to wash and negative emotions. Experiment 2 showed that participants who imagined either kissing a physically dirty-looking man or being kissed non-consensually experienced stronger feelings of dirtiness, urges to wash and negative emotions. The last experiment showed that an imagined betrayal, a form of psychological violation, also induced feelings of dirtiness, washing urges and negative emotions, as with an imagined non-consensual kiss.
Kwok, Pui Ling Amy.
Adviser: Patrick Leung.
Source: Dissertation Abstracts International, Volume: 73-01, Section: B, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (leaves 166-177).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract and appendixes 1-3, 5-8 also in Chinese.
Barsoum, Ayad Fekry. "Replication, Security, and Integrity of Outsourced Data in Cloud Computing Systems." Thesis, 2013. http://hdl.handle.net/10012/7348.
Повний текст джерелаLi, Szu-Yi, and 李思儀. "A Heuristic Data Replication Algorithm With Scalability Consideration In Cloud Computing Systems." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/07638429847841375827.
Повний текст джерела輔仁大學
資訊工程學系
100
In cloud computing systems, many applications perform intensive disk data accesses. To allow applications to continue executing after data corruption, we propose a heuristic, polynomial-time data replication algorithm for cloud computing systems. The number of nodes in such a system is usually large, and it is unrealistic to expect all nodes to have the same performance; in other words, the system is heterogeneous. In addition, the applications that run in the system have different QoS requirements. Unlike previous replication algorithms for cloud computing systems, the proposed algorithm explicitly considers node heterogeneity and application QoS. Because of the large number of nodes, the algorithm also addresses node scalability, adopting a node-combination technique to deal with it. Finally, we perform simulation experiments to compare the proposed algorithm with previous replication algorithms; the results show that it achieves better performance on various metrics.
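A greedy placement rule gives a flavor of QoS- and heterogeneity-aware replica selection: reject nodes whose latency would break the application's QoS bound, then prefer fast nodes with spare capacity. The node tuples and the scoring rule are invented for illustration; the thesis's heuristic, including its node-combination step for scalability, is more elaborate.

```python
def place_replicas(nodes, n_replicas, qos_latency_ms):
    """nodes: list of (name, latency_ms, free_capacity_gb)."""
    # QoS filter: a node slower than the bound, or full, cannot hold a replica.
    eligible = [n for n in nodes if n[1] <= qos_latency_ms and n[2] > 0]
    # Heterogeneity-aware score: low latency first, then more free capacity.
    eligible.sort(key=lambda n: (n[1], -n[2]))
    return [n[0] for n in eligible[:n_replicas]]

nodes = [("slow", 80, 500),       # violates the 50 ms QoS bound
         ("fast-full", 5, 0),     # fast but no free capacity
         ("fast", 10, 200),
         ("mid", 30, 300)]
chosen = place_replicas(nodes, n_replicas=2, qos_latency_ms=50)
```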
Yen, Chi Ming, and 嚴智民. "An Efficient Replication Algorithm for Cloud Computing Systems with QoS and Heterogeneous Considerations." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/38918666997775786317.
Повний текст джерела輔仁大學
資訊工程學系
99
In cloud computing, storage devices hold a huge amount of data, and ensuring that the data remains available when storage devices fail is an important issue. Data replication is a widely used technique for this purpose. This thesis presents an efficient data replication algorithm for cloud computing systems. The algorithm considers the QoS requirements of applications (e.g. response time) and the heterogeneous characteristics of devices (e.g. different transmission rates, data access times and storage capacities). There are two stages in the proposed algorithm. In the pre-processing stage, data replication requests are modeled as a weighted bipartite graph that captures QoS requirements and device heterogeneity. In the post-processing stage, the weighted bipartite graph is extended into a flow graph; using it, the data replication problem can be solved optimally by transforming it into the well-known minimum cost flow problem. Finally, this thesis presents simulation experiments that demonstrate the effectiveness of the proposed algorithm in terms of replication cost and recovery time.
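The modeling step can be illustrated on a toy instance: replication requests on one side, storage devices on the other, and a cost per (request, device) pair. The thesis solves the resulting flow graph as a minimum cost flow; for a 3x3 example the cheapest one-to-one assignment can simply be brute-forced. The cost values below are invented numbers standing in for transfer and access times.

```python
from itertools import permutations

def min_cost_assignment(cost):
    """cost[i][j]: cost of serving request i from device j (square matrix).
    Brute-forces all assignments; a min-cost-flow solver scales far better."""
    n = len(cost)
    best = None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if best is None or total < best[0]:
            best = (total, perm)
    return best

cost = [[4, 2, 8],      # request 0
        [7, 6, 3],      # request 1
        [6, 9, 1]]      # request 2
total, assignment = min_cost_assignment(cost)
# request 0 -> device 1, request 1 -> device 0, request 2 -> device 2
```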
Ko, Chun-Chuan, and 柯俊全. "Computing the Number of Schemata to Use in a Code-Schema Development Experiment." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/x26365.
Повний текст джерела中原大學
資訊工程研究所
92
One way to summarize the cognitive development theory of Piaget may be as follows. When humans learn or try to solve problems, cognitive structures (called schemas) may form as a result. The development of schemas constantly goes through two processes : assimilation and accommodation. Assimilation occurs when new experiences “match with” existing schemas. Accommodation occurs when new experiences are in conflict with existing schemas, and this essentially means that existing schemas would have to be adjusted and/or modified in order that the new experiences can be successfully “accommodated”. In order for schemas to be developed, we must be able to abstract from our experiences. Though different individuals may have different abilities for making abstractions, the complexity of the real environment also plays a part in this. In fact, the inherent complexity of the real environment can make it very difficult, if not impossible, for an individual to make abstractions. Mainly, this is due to the variety and amount of information that we receive in a fixed time interval. If the varieties are many and the amount is huge, then obviously it can be very difficult for an individual to assess the relatedness of all the given information. And if the individual has problems assessing the relatedness of all the given information, then surely the individual will not be able to abstract schemas from the given information. To help the learner improve his/her cognitive abilities for making abstractions, we constructed a computer-assisted learning system called CSD (for Code Schema Development). Compared with the real environment in which the complexity of the problems-to-solve is potentially huge and uncontrollable, CSD is an artificial problem-solving environment in which the problem complexity is controllable and can be made very small. 
The goal of CSD is to control the complexity of the assigned problems (all programming exercises) so that (1) the learner can successfully develop the intended code schemas, and (2) the learner's cognitive abilities for developing code schemas can be improved. For those abilities to improve, we need to increase the problem complexity as much as possible; on the other hand, the complexity of the problems should not go beyond the learner's current cognitive abilities. The purpose of this research is to find an acceptable way of doing so. Basically, we propose two methods for adjusting the problem complexity of CSD for each learner. One is based on the learner's performance and is called the performance method; the other is based on the learner's learning effort and is called the LE method. Our preliminary findings suggest that the performance method satisfies our criterion of stability. However, more research is needed to fully justify the acceptability of these two methods.
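The performance method can be sketched as a simple feedback rule: raise the complexity of the next exercise when the learner performs well, lower it when the learner struggles, and hold it otherwise. The scoring scale, thresholds and step size below are invented for illustration, not CSD's actual parameters.

```python
def adjust_complexity(current, score, step=1, lo=1, hi=10):
    """score in [0, 1]: fraction of the exercise solved correctly."""
    if score >= 0.8:                    # solid performance: push further
        return min(hi, current + step)
    if score < 0.5:                     # struggling: ease off
        return max(lo, current - step)
    return current                      # middling: hold complexity steady

level = 5
level = adjust_complexity(level, score=0.9)   # strong result raises the level
level = adjust_complexity(level, score=0.3)   # poor result lowers it again
level = adjust_complexity(level, score=0.6)   # middling result holds it
```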
Plavec, Franjo. "Stream Computing on FPGAs." Thesis, 2010. http://hdl.handle.net/1807/24855.
Повний текст джерелаChang, Li-Chieh, and 張立傑. "A Dynamic Data Replication Mechanism Using Blocking Probability And Reference Queue for Achieving Load Balance in Cloud Computing Environment." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/3un4p7.
Повний текст джерела朝陽科技大學
資訊與通訊系
102
With the explosive growth of the Internet, applications of network services keep increasing. To provide convenient and fast network services, cloud providers need more stable, high-capacity systems: cloud computing systems consisting of huge numbers of processors and memories, high-speed networks and various applications are offered to users via the Internet. Cloud computing is a powerful, high-capacity model because it can satisfy various demands and share a large number of resources among users. A cloud system also has the characteristics of scalability, efficiency, fault tolerance and large storage. Therefore, several Internet-related companies, such as Google, IBM (Blue Cloud) and Amazon, invest much money and time in cloud computing systems. To satisfy user demands, virtualization and data replication techniques can be used to reduce the cost of cloud computing services; efficient data replication, in particular, is often applied to reduce the load on nodes and improve the reliability of cloud storage systems. This thesis therefore proposes a three-phase data replication access scheme, called the dynamic data replication algorithm (DDRA), to configure replicas for a cloud storage system. The first two phases, the Anti-Blocking Probability Selection (ABPS) phase and the Reference Queue Balance (RQLB) phase, let users access a replica from suitable service nodes according to the nodes' loads, achieving an initial load balance. In addition, to further improve access performance and node load balance, dynamic replica configuration is used in the final phase, the Update Strategy (US) phase. For these reasons, the proposed method can enhance availability and access efficiency and achieve load balance in our hierarchical cloud computing environment.
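The first two phases as described, filtering out nodes likely to block a request (ABPS) and then balancing on the length of each node's reference queue (RQLB), can be sketched roughly as follows. The blocking-probability formula, threshold and node tuples are illustrative stand-ins for the thesis's model.

```python
def select_node(nodes, block_threshold):
    """nodes: list of (name, active_requests, capacity, ref_queue_len)."""
    def blocking_probability(active, capacity):
        # crude utilization-based proxy; the thesis uses a richer model
        return active / capacity if capacity else 1.0

    # Phase 1 (ABPS): drop nodes that are likely to block the request.
    candidates = [n for n in nodes
                  if blocking_probability(n[1], n[2]) < block_threshold]
    if not candidates:
        return None
    # Phase 2 (RQLB): among the rest, prefer the shortest reference queue.
    return min(candidates, key=lambda n: n[3])[0]

nodes = [("n1", 9, 10, 4),   # nearly saturated -> filtered out in phase 1
         ("n2", 3, 10, 7),
         ("n3", 5, 10, 2)]   # passes phase 1 and has the shortest queue
chosen = select_node(nodes, block_threshold=0.8)
```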
Schürmann, Felix [Verfasser]. "Exploring liquid computing in a hardware adaptation : construction and operation of a neural network experiment / presented by Felix Schürmann." 2005. http://d-nb.info/975516639/34.
Повний текст джерела(11132985), Thamir Qadah. "High-performant, Replicated, Queue-oriented Transaction Processing Systems on Modern Computing Infrastructures." Thesis, 2021.
Знайти повний текст джерелаMagradze, Erekle. "Monitoring and Optimization of ATLAS Tier 2 Center GoeGrid." Doctoral thesis, 2016. http://hdl.handle.net/11858/00-1735-0000-0028-86DB-0.
Повний текст джерела