Dissertations / Theses on the topic 'Replication of computing experiment'




Consult the top 50 dissertations / theses for your research on the topic 'Replication of computing experiment.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Pamplona, Rodrigo Christovam. "Data replication in mobile computing." Thesis, Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-16448.

Full text
Abstract:
With the advances of technology and the popularization of mobile devices, the need to research and discuss subjects related to mobile devices has grown. One of the subjects that needs further analysis is data replication. This study investigates data replication on mobile devices, focusing on power consumption. It presents four different scenarios that propose, describe, apply and evaluate data replication mechanisms, with the purpose of finding the scenario with the lowest energy consumption. To carry out the experiments, the Sun SPOT was chosen as the mobile device; it is programmed entirely in a Java environment. Different software was created for each scenario in order to verify the performance of the mobile devices with regard to energy saving. The results did not meet expectations: while trying to find the best scenario, a hardware limitation was found. Although software can easily be changed to fix errors, hardware cannot be changed as easily, and the hardware limitation found in this study prevented the results from being optimal. The results also imply that new hardware should be used in further experimentation. As this study proved to be limited, it suggests that additional studies should be carried out using the new version of the hardware employed here.
APA, Harvard, Vancouver, ISO, and other styles
2

Júnior, Lourenço Alves Pereira. "Planejamento de experimentos com várias replicações em paralelo em grades computacionais." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-18082010-112815/.

Full text
Abstract:
Este trabalho de mestrado apresenta um estudo de Grades Computacionais e Simulações Distribuídas sobre a técnica MRIP. A partir deste estudo foi possível propor e implementar o protótipo de uma ferramenta para Gerenciamento de Experimento em Ambiente de Grade, denominada Grid Experiments Manager - GEM, organizada de forma modular podendo ser usada como um programa ou integrada com outro software, podendo ser expansível para vários middlewares de Grades Computacionais. Com a implementação também foi possível avaliar o desempenho de simulações sequenciais com aquelas executadas em cluster e em uma Grade Computacional de teste, sendo construído um benchmark que possibilitou repetir a mesma carga de trabalho para os sistemas sobre avaliação. Com os testes foi possível verificar um ganho alto no tempo de execução, quando comparadas as execuções sequenciais e em cluster, obteve-se eficiência em torno de 197% para simulações com tempo de execução baixo e 239% para aquelas com tempo de execução maior; na comparação das execuções em cluster e em grade, obteve-se os valores para eficiência de 98% e 105%, para simulações pequenas e grandes, respectivamente
This master\'s thesis presents a study of Grid Computing and Distributed Simulations using the MRIP approach. From this study was possible to design and implement the prototype of a tool for Management of Experiments in Grid Environment, called Grid Experiments Manager - GEM, which is organized in a modular way and can be used as a program or be integrated with another piece of software, being expansible to varius middlewares of Computational Grids. With its implementation was also possible to evaluate the performance of sequencial simulations executed in clusters and a Computational testbed Grid, also being implemented a benchmark which allowed repeat the same workload at the systems in evaluation. A high gain turnaround of the executions was infered with those results. When compared Sequential and Cluster executions, the eficiency was about of 197% for thin time of execution and 239% for those bigger in execution; when compared Cluster and Grid executions, the eficiency was about of 98% and 105% for thin and bigger simulations, repectivelly
APA, Harvard, Vancouver, ISO, and other styles
3

Wu, Huaigu 1975. "Adaptable stateful application server replication." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=115903.

Full text
Abstract:
In recent years, multi-tier architectures have become the standard computing environment for web and enterprise applications. The application server tier is often the heart of the system, embedding the business logic. Adaptability, in particular the capability to adjust to the load submitted to the system and to handle the failure of individual components, is of utmost importance in order to provide 24/7 access and high performance. Replication is a common means to achieve these reliability and scalability requirements. With replication, the application server tier consists of several server replicas; thus, if one replica fails, others can take over, and the load can be distributed across the available replicas. Although many replication solutions have been proposed so far, most of them have been developed either for fault-tolerance or for scalability. Furthermore, only a few have considered that the application server tier is only one tier in a multi-tier architecture, that this tier maintains state, and that execution in this environment can follow complex patterns. Thus, existing solutions often do not provide correctness beyond some basic application scenarios.
In this thesis we tackle the issue of replication of the application server tier from the ground up and develop a unified solution that provides both fault-tolerance and scalability. We first describe a set of execution patterns that capture how requests are typically executed in multi-tier architectures. They consider the flow of execution across the client tier, application server tier, and database tier. In particular, the execution patterns describe how requests are associated with transactions, the fundamental execution units at the application server and database tiers. With these execution patterns in mind, we provide a formal definition of what it means to provide a correct execution across all tiers, even when failures occur and the application server tier is replicated. Informally, a replicated system is correct if it behaves exactly as a non-replicated system that never fails. From there, we propose a set of replication algorithms for fault-tolerance that provide correctness for the execution patterns we have identified. The main principle is to let a primary application server (AS) replica execute all client requests, and to propagate any state changes performed by a transaction to backup replicas at transaction commit time. The challenges arise because requests can be associated with transactions in different ways. We then extend our fault-tolerance solution into a unified solution that provides both fault-tolerance and load-balancing. In this extended solution, each application server replica is able to execute client requests as a primary and at the same time serves as a backup for other replicas. The framework provides a transparent, truly distributed and lightweight load distribution mechanism that takes advantage of the fault-tolerance infrastructure. Our replication tool is implemented as a plug-in of the JBoss application server and its performance is carefully evaluated against JBoss's own replication solutions. The evaluation shows that our protocols have very good performance and compare favorably with existing solutions.
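A minimal sketch of the primary-backup principle described in this abstract: the primary executes all client requests and ships the transaction's state changes to the backups only at commit time. All class and method names are illustrative assumptions, not the thesis's actual JBoss-based implementation.

```python
# Minimal sketch of primary-backup state propagation at commit time.
# Names (Primary, Backup) are illustrative, not the thesis's JBoss plug-in.

class Backup:
    def __init__(self):
        self.state = {}

    def apply(self, changes):
        # Install the primary's committed writes on the backup.
        self.state.update(changes)


class Primary:
    def __init__(self, backups):
        self.state = {}
        self.backups = backups
        self.pending = {}          # writes of the current transaction

    def execute(self, key, value):
        # The primary executes every client request itself ...
        self.pending[key] = value

    def commit(self):
        # ... and propagates the accumulated state changes to the
        # backups only when the transaction commits.
        self.state.update(self.pending)
        for b in self.backups:
            b.apply(self.pending)
        self.pending = {}


backups = [Backup(), Backup()]
primary = Primary(backups)
primary.execute("cart:42", ["book"])
primary.commit()
assert all(b.state == {"cart:42": ["book"]} for b in backups)
```

On failover, a backup already holds every committed change and can take over as the new primary.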
APA, Harvard, Vancouver, ISO, and other styles
4

Di, Maria Riccardo. "Elastic computing on Cloud resources for the CMS experiment." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8955/.

Full text
Abstract:
Nowadays, data handling and data analysis in High Energy Physics require a vast amount of computational power and storage. In particular, the Worldwide LHC Computing Grid (LCG), an infrastructure and pool of services developed and deployed by a large community of physicists and computer scientists, has proved to be a game changer in the efficiency of data analyses during Run-I at the LHC, playing a crucial role in the Higgs boson discovery. Recently, the Cloud computing paradigm has been emerging and reaching a considerable adoption level among many scientific organizations and beyond. The Cloud makes it possible to access and use large computing resources, not owned by the user, that are shared among many scientific communities. Considering the challenging requirements of LHC physics in Run-II and beyond, the LHC computing community is interested in exploring Clouds and seeing whether they can provide a complementary approach - or even a valid alternative - to the existing technological solutions based on the Grid. In the LHC community, several experiments have been adopting Cloud approaches, and in particular the experience of the CMS experiment is of relevance to this thesis. The LHC Run-II has just started, and Cloud-based solutions are already in production for CMS. However, other approaches to Cloud usage are being considered and are at the prototype level, as is the work done in this thesis. This effort is of paramount importance for equipping CMS with the capability to elastically and flexibly access and utilize the computing resources needed to face the challenges of Run-III and Run-IV. The main purpose of this thesis is to present forefront Cloud approaches that allow the CMS experiment to extend to on-demand resources dynamically allocated as needed. Moreover, direct access to Cloud resources is presented as a suitable use case to meet the CMS experiment's needs. Chapter 1 presents an overview of High Energy Physics at the LHC and of the CMS experience in Run-I, as well as the preparation for Run-II. Chapter 2 describes the current CMS Computing Model, and Chapter 3 presents the Cloud approaches pursued and used within the CMS Collaboration. Chapter 4 and Chapter 5 discuss the original and forefront work done in this thesis to develop and test working prototypes of elastic extensions of CMS computing resources on Clouds, and of HEP Computing "as a Service". The impact of such work on benchmark CMS physics use cases is also demonstrated.
APA, Harvard, Vancouver, ISO, and other styles
5

Yang, Daiyi. "Zoolander: Modeling and managing replication for predictability." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1322595127.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gilmore, Lance Edwin. "Experimentally Evaluating Statistical Patterns of Offending Typology For Burglary: A Replication Study." Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5371.

Full text
Abstract:
This study used a quasi-experiment to evaluate the effect of the SPOT burglary profile on burglary arrest rates. A single police agency split into three districts was used for the quasi-experiment. The SPOT burglary profile was implemented in one district, leaving the other two as control groups. Differences between the districts were controlled for using statistical analysis. Burglary arrest rates were collected each month for all three districts for a period of one year before the implementation and for six months after it. Results show that the district that received the SPOT burglary profile raised its burglary arrest rate by almost 75% in only six months, even after controlling for all relevant variables. This shows that the experimental intervention, the burglary profile, had a significant effect on the intended outcome: burglary arrest rates. The results of this study suggest that the SPOT burglary profile may provide law enforcement agencies with another tool to help increase burglary arrest rates in the future.
APA, Harvard, Vancouver, ISO, and other styles
7

Scott, Hanna E. T. "A Balance between Testing and Inspections : An Extended Experiment Replication on Code Verification." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1751.

Full text
Abstract:
An experiment replication comparing the performance of traditional structural code testing with inspection meeting preparation using scenario-based reading. The original experiment was conducted by Per Runeson and Anneliese Andrews in 2003 at Washington State University.
APA, Harvard, Vancouver, ISO, and other styles
8

Gonçalves, André Miguel Augusto. "Estimating data divergence in cloud computing storage systems." Master's thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/10852.

Full text
Abstract:
Dissertation submitted to obtain the degree of Master in Computer Engineering.
Many internet services are provided through cloud computing infrastructures that are composed of multiple data centers. To provide high availability and low latency, data is replicated on machines in different data centers, which introduces the complexity of guaranteeing that clients view data consistently. Data stores often opt for a relaxed approach to replication, guaranteeing only eventual consistency, since it improves the latency of operations. However, this may lead to replicas having different values for the same data. One solution to control the divergence of data in eventually consistent systems is the use of metrics that measure how stale the data of a replica is. In the past, several algorithms have been proposed to estimate the value of these metrics in a deterministic way. An alternative solution is to rely on probabilistic metrics that estimate divergence with a certain degree of certainty. This relaxes the need to contact all replicas while still providing a relatively accurate measurement. In this work we designed and implemented a solution to estimate the divergence of data in eventually consistent data stores that scales to many replicas by allowing client-side caching. Measuring the divergence when there is a large number of clients calls for the development of new algorithms that provide probabilistic guarantees. Additionally, unlike previous works, we focus on measuring the divergence relative to a state that can lead to the violation of application invariants.
Partially funded by project PTDC/EIA EIA/108963/2008 and by an ERC Starting Grant, Agreement Number 307732
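The probabilistic alternative mentioned in the abstract can be pictured with a small sampling sketch: rather than contacting every replica, contact a random subset and estimate the stale fraction with a confidence margin. The function names, the version-number representation and the normal-approximation interval are assumptions for illustration, not the dissertation's actual algorithms.

```python
# Minimal sketch of probabilistic divergence estimation: sample a few
# replicas and estimate how many are stale, with a rough 95% interval.

import math, random

def estimate_staleness(replicas, latest_version, sample_size, z=1.96):
    sample = random.sample(replicas, sample_size)
    stale = sum(1 for version in sample if version < latest_version)
    p = stale / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)   # normal approximation
    return p, margin

random.seed(0)
replicas = [10] * 800 + [9] * 200          # versions held by 1000 replicas
p, margin = estimate_staleness(replicas, latest_version=10, sample_size=50)
print(f"estimated stale fraction: {p:.2f} +/- {margin:.2f}")
```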
APA, Harvard, Vancouver, ISO, and other styles
9

Clay, Lenitra M. "Replication techniques for scalable content distribution in the internet." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/8491.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Karavakis, Edward. "A distributed analysis and monitoring framework for the compact Muon solenoid experiment and a pedestrian simulation." Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/4409.

Full text
Abstract:
The design of a parallel and distributed computing system is a very complicated task. It requires a detailed understanding of the design issues and of the theoretical and practical aspects of their solutions. Firstly, this thesis discusses in detail the major concepts and components required to make parallel and distributed computing a reality. A multithreaded and distributed framework capable of analysing the simulation data produced by a pedestrian simulation package was developed. Secondly, this thesis discusses the origins and fundamentals of Grid computing and the motivations for its use in High Energy Physics. Access to the data produced by the Large Hadron Collider (LHC) has to be provided for more than five thousand scientists all over the world. Users who run analysis jobs on the Grid do not necessarily have expertise in Grid computing. Simple, user-friendly and reliable monitoring of the analysis jobs is one of the key components of distributed analysis operations; reliable monitoring is one of the crucial components of the Worldwide LHC Computing Grid for providing the functionality and performance that is required by the LHC experiments. The CMS Dashboard Task Monitoring and the CMS Dashboard Job Summary monitoring applications were developed to serve the needs of the CMS community.
APA, Harvard, Vancouver, ISO, and other styles
11

Soria-Rodriguez, Pedro. "Multicast-Based Interactive-Group Object-Replication For Fault Tolerance." Digital WPI, 1999. https://digitalcommons.wpi.edu/etd-theses/1069.

Full text
Abstract:
"Distributed systems are clusters of computers working together on one task. The sharing of information across different architectures, and the timely and efficient use of the network resources for communication among computers are some of the problems involved in the implementation of a distributed system. In the case of a low latency system, the network utilization and the responsiveness of the communication mechanism are even more critical. This thesis introduces a new approach for the distribution of messages to computers in the system, in which, the Common Object Request Broker Architecture (CORBA) is used in conjunction with IP multicast to implement a fault-tolerant, low latency distributed system. Fault tolerance is achieved by replication of the current state of the system across several hosts. An update of the current state is initiated by a client application that contacts one of the state object replicas. The new information needs to be distributed to all the members of the distributed system (the object replicas). This state update is accomplished by using a two-phase commit protocol, which is implemented using a binary tree structure along with IP multicast to reduce the amount of network utilization, distribute the computation load associated with state propagation, and to achieve faster communication among the members of the distributed system. The use of IP multicast enhances the speed of message distribution, while the two-phase commit protocol encapsulates IP multicast to produce a reliable multicast service that is suitable for fault tolerant, distributed low latency applications. The binary tree structure, finally, is essential for the load sharing of the state commit response collection processing. "
APA, Harvard, Vancouver, ISO, and other styles
12

Sousa, Valter Balegas de. "Key-CRDT stores." Master's thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/7802.

Full text
Abstract:
Dissertation submitted to obtain the degree of Master in Computer Engineering.
The Internet has opened opportunities to create world-scale services. These systems require high availability and fault tolerance, while preserving low latency. Replication is a widely adopted technique to provide these properties. Different replication techniques have been proposed through the years, but to support these properties for world-scale services it is necessary to trade consistency for availability, fault-tolerance and low latency. In weak consistency models, it is necessary to deal with possible conflicts arising from concurrent updates. We propose the use of conflict-free replicated data types (CRDTs) to address this issue. Cloud computing systems support world-scale services, often relying on key-value stores for storing data. These systems partition and replicate data over multiple nodes, which can be geographically dispersed over the network. For handling conflicts, these systems either rely on solutions that lose updates (e.g. last-write-wins) or require applications to handle concurrent updates. Additionally, these systems provide little support for transactions, a widely used abstraction for data access. In this dissertation, we present the design and implementation of SwiftCloud, a Key-CRDT store that extends a key-value store by incorporating CRDTs in the system's data model. The system provides automatic conflict resolution relying on the properties of CRDTs. We also present a version of SwiftCloud that supports transactions. Unlike traditional transactional systems, transactions never abort due to write/write conflicts, as the system leverages CRDT properties to merge concurrent transactions. In implementing SwiftCloud, we have introduced a set of new techniques, including versioned CRDTs, composition of CRDTs and alternative serialization methods. The evaluation of the system, with both micro-benchmarks and the TPC-W benchmark, shows that SwiftCloud imposes little overhead over a key-value store. Allowing clients to access a datacenter close to them with SwiftCloud can reduce latency without requiring any complex reconciliation mechanism. The experience of using SwiftCloud has shown that adapting an existing application to use SwiftCloud requires little effort.
Project PTDC/EIA-EIA/108963/2008
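A minimal sketch of why CRDTs give automatic conflict resolution, using the classic state-based grow-only counter: concurrent increments on different replicas merge without losing updates, unlike a last-write-wins register. This is a textbook example, not SwiftCloud's richer versioned CRDTs.

```python
# Minimal sketch of a state-based grow-only counter (G-Counter) CRDT.

class GCounter:
    def __init__(self, replica_id, n_replicas):
        self.id = replica_id
        self.counts = [0] * n_replicas

    def increment(self):
        self.counts[self.id] += 1

    def value(self):
        return sum(self.counts)

    def merge(self, other):
        # Element-wise max is commutative, associative and idempotent,
        # so replicas converge regardless of merge order.
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]


a, b = GCounter(0, 2), GCounter(1, 2)
a.increment(); b.increment(); b.increment()   # concurrent updates
a.merge(b); b.merge(a)
assert a.value() == b.value() == 3            # no increment was lost
```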
APA, Harvard, Vancouver, ISO, and other styles
13

Weber, Deisi Luana Diel. "Sourcing decision: a behavioral perspective, a replication of David Hall's thesis." Universidade do Vale do Rio dos Sinos, 2015. http://www.repositorio.jesuita.org.br/handle/UNISINOS/5224.

Full text
Abstract:
UNISINOS - Universidade do Vale do Rio dos Sinos
This research investigates the decision-making process regarding make or buy, trying to understand which variables most influence the decision to insource some activities, to outsource others, or to estimate a percentage that combines both. The dependent variable in our research is the behavioral decision-making process, measuring the influence of cost, quality, and monitoring. We try to understand whether differences in these independent variables influence how managers make their decisions in the context of insourcing or outsourcing production. In order to test this model empirically, an experimental study was conducted on the basis of eight different scenarios, which simulate a purchasing decision situation varying the variables cost, quality, and monitoring of suppliers between high and low, to understand the relationship of these constructs with the decision-making process of Brazilian managers. It was performed with a sample of 211 students from the Production Engineering course at Universidade do Vale do Rio dos Sinos (Unisinos). The data was analyzed using the statistical technique ANOVA. The results demonstrate that managers consider cost variation when deciding how much to internalize and how much to outsource. They change their choices when quality is higher at their suppliers than inside the company. They also evaluate the capability to control costs at their suppliers and in their own processes inside the company. However, they do not change their sourcing decision due to variation in supplier monitoring, nor when quality monitoring is considered. This issue was already addressed in Hall's study (2012), conducted in the United States. Thus, we decided to replicate his study in Brazil in order to check whether, in a different environment, with a different economic, political, social, and regulatory situation, managers would change their decisions. Nevertheless, after comparing both studies, we found that the same hypotheses were supported in both, which means that even in another context the same variables underpin managers' sourcing decisions.
APA, Harvard, Vancouver, ISO, and other styles
14

Shu, Jiang. "An Experiment Management Component for the WBCSim Problem Solving Environment." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/36448.

Full text
Abstract:
This thesis describes the computing environment WBCSim and its experiment management component. WBCSim is a web-based simulation system used to increase the productivity of wood scientists conducting research on wood-based composite and material manufacturing processes. The experiment management component integrates a web-based graphical front end, server scripts, and a database management system to allow scientists to easily save, retrieve, and perform customized operations on experimental data. A detailed description of the system architecture and the experiment management component is presented, along with a typical scenario of usage.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
15

Shu, Jiang. "Experiment Management for the Problem Solving Environment WBCSim." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/28713.

Full text
Abstract:
A problem solving environment (PSE) is a computational system that provides a complete and convenient set of high level tools for solving problems from a specific domain. This thesis takes an in-depth look at the experiment management aspect of PSEs, which can be divided into three levels: 1) data management, 2) change management, and 3) execution management. At the data management level, anything related to an experiment (computer simulation) should be stored and documented. A database management system can be used to store the simulation runs for a PSE. Then various high level interfaces can be provided to allow users to save, retrieve, search, and compare these simulation runs. At the change management level, a scientist should only focus on how to solve a problem in the experiment domain. Aside from running experiments, a scientist may only consider how to define a new model, how to modify an existing model, and how to interpret an experiment result. By using XML to describe a simulation model and unify various implementation layers, changing an existing model in a PSE can be intuitive and fast. At the execution management level, how an experiment is executed is the main concern. By providing a computational steering capability, a scientist can pause, examine, and compare the intermediate results from a simulation. Contrasted with the traditional way of running a lengthy simulation to see the result at the end, computational steering can leverage the user's expert knowledge on the fly (during the simulation run) and provide new insights and new product design opportunities. This thesis illustrates these concepts and implementation by using WBCSim as an example. WBCSim is a PSE that increases the productivity of wood scientists conducting research on wood-based composite materials and manufacturing processes. It integrates Fortran 90 simulation codes with a Web based graphical front end, an optimization tool, and various visualization tools. The WBCSim project was begun in 1997 with support from United States Department of Agriculture, Department of Energy, and Virginia Tech. It has since been used by students in several wood science classes, by graduate students and faculty, and by researchers at several forest products companies. WBCSim also serves as a test bed for the design, construction, and evaluation of useful, production quality PSEs.
Ph. D.
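A minimal sketch of the data management level described above: each simulation run is stored with its parameters and results so runs can later be saved, retrieved and compared. The schema and field names are assumptions for illustration, not WBCSim's actual database design.

```python
# Minimal sketch of storing and retrieving simulation runs in a database.

import sqlite3, json

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE runs (
    id INTEGER PRIMARY KEY,
    model TEXT, parameters TEXT, results TEXT)""")

def save_run(model, parameters, results):
    db.execute("INSERT INTO runs (model, parameters, results) VALUES (?, ?, ?)",
               (model, json.dumps(parameters), json.dumps(results)))

def find_runs(model):
    rows = db.execute("SELECT parameters, results FROM runs WHERE model = ?",
                      (model,)).fetchall()
    return [(json.loads(p), json.loads(r)) for p, r in rows]

save_run("hot-pressing", {"temperature": 180}, {"density": 0.74})
save_run("hot-pressing", {"temperature": 200}, {"density": 0.78})
print(find_runs("hot-pressing"))   # retrieve and compare the stored runs
```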
APA, Harvard, Vancouver, ISO, and other styles
16

Yoshida, Sara J. M. "The replication of depressed, localized skull fractures, an experiment using Sus domesticus as a model for human forensic trauma." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape2/PQDD_0025/MQ51516.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Diotalevi, Tommaso. "Investigation of petabyte-scale data transfer performances with PhEDEx for the CMS experiment." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/9416/.

Full text
Abstract:
PhEDEx, the CMS transfer management system, moved about 150 PB during the first LHC Run and currently moves about 2.5 PB of data per week over the Worldwide LHC Computing Grid (WLCG). It was designed to complete each transfer requested by users at the expense of the waiting time necessary for its completion. For this reason, after several years of operations, data on transfer latencies has been collected and stored in log files containing useful information for analysis. Starting from the analysis of several typical CMS transfer workflows, these latencies have been categorized, with a focus on the different factors that contribute to the transfer completion time. The analysis presented in this thesis provides the information needed to equip PhEDEx in the future with a set of new tools to proactively identify and fix latency issues and to minimize their impact on end users.
APA, Harvard, Vancouver, ISO, and other styles
18

Kurt, Mehmet Can. "Fault-tolerant Programming Models and Computing Frameworks." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1437390499.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Howes, William A. "On-Orbit FPGA SEU Mitigation and Measurement Experiments on the Cibola Flight Experiment Satellite." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2474.

Full text
Abstract:
This work presents on-orbit experiments conducted to validate SEU mitigation and detection techniques on FPGA devices and to measure SEU rates in FPGAs and SDRAM. These experiments were designed for the Cibola Flight Experiment Satellite (CFESat), which is an operational technology pathfinder satellite built around 9 Xilinx Virtex FPGAs and developed at Los Alamos National Laboratory. The on-orbit validation experiments described in this work have operated for over four thousand FPGA device days and have validated a variety of SEU mitigation and detection techniques including triple modular redundancy, duplication with compare, reduced precision redundancy, and SDRAM and FPGA block memory scrubbing. Regional SEU rates and the change in CFE's SEU rate over time show the measurable, expected effects of the South Atlantic Anomaly and the cycle of solar activity on CFE's SEU rates. The results of the on-orbit experiments developed for this work demonstrate that FPGA devices can be used to provide reliable, high-performance processing to space applications when proper SEU mitigation strategies are applied to the designs implemented on the FPGAs.
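A minimal, software-level sketch of two of the mitigation techniques named in the abstract, triple modular redundancy and reduced precision redundancy. On CFESat these are hardware structures inside the FPGAs; the function names and tolerance value here are illustrative assumptions.

```python
# Minimal sketch of TMR voting and reduced precision redundancy checking.

def tmr_vote(a, b, c):
    # With three copies of a computation, a single upset copy is
    # outvoted by the other two.
    return a if a == b or a == c else b


def rpr_check(full_precision, reduced_precision, tolerance):
    # Reduced precision redundancy: a cheap low-precision copy flags
    # the full-precision result as suspect if they diverge too much.
    return abs(full_precision - reduced_precision) <= tolerance


assert tmr_vote(7, 7, 99) == 7          # one corrupted replica is masked
assert rpr_check(3.1416, 3.14, 0.01)    # results agree within tolerance
```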
APA, Harvard, Vancouver, ISO, and other styles
20

Lambert, Thomas. "On the Effect of Replication of Input Files on the Efficiency and the Robustness of a Set of Computations." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0656/document.

Full text
Abstract:
The increasing importance of High Performance Computing (HPC) and Big Data applications creates new issues in parallel computing. One of them is communication, the data transferred from one processor to another. Such data movements have an impact on computational time, inducing delays and increasing energy consumption. While replication, of either tasks or files, generates communication, it is also an important tool to improve resiliency and parallelism. In this thesis, we focus on the impact of the replication of input files on the overall amount of communication. For this purpose, we concentrate on two practical problems. The first one is parallel matrix multiplication, where the goal is to induce as few replications as possible in order to decrease the amount of communication. The second one is the scheduling of the "Map" phase in the MapReduce framework, where replication is an input of the problem and the goal is to use it in the best possible way. In addition to the replication issue, this thesis also considers the comparison between static and dynamic approaches for scheduling: static approaches compute schedules before the computation starts, while dynamic approaches compute them during the computation itself. We design hybrid strategies in order to take advantage of the strengths of both.
First, we relate communication-avoiding matrix multiplication to a square partitioning problem, where load balancing is given as an input. The goal is to split a square into zones (whose areas depend on the relative speed of resources) while minimizing the sum of their half-perimeters. We improve the existing results in the literature for this problem with two additional approximation algorithms. We also propose an alternative model based on a cube partitioning problem, prove the NP-completeness of the associated decision problem, and design two approximation algorithms. Finally, we implement the algorithms for both problems, relying on the StarPU library, in order to compare the resulting schedules for matrix multiplication. The experimental results show an improvement in computation time and a significant decrease in data transfers when a static allocation strategy is coupled with a work-stealing technique.
Second, in the Map phase of MapReduce scheduling, the input files are replicated and distributed among the processors. For this problem we propose two metrics. In the first one, we forbid non-local tasks (a task processed on a processor that does not own its input files) and, under this constraint, we aim at minimizing the makespan. In the second one, we allow non-local tasks and aim at minimizing their number while minimizing the makespan. For the theoretical study, we focus on tasks with homogeneous computation times. We first relate a greedy algorithm for the makespan metric to a balls-into-bins process with choices, proving that this algorithm produces solutions with expected overhead (the difference between the number of tasks on the most loaded processor and the number of tasks in a perfect distribution) in O(m log m), where m denotes the number of processors. We then relate this scheduling problem (with non-local tasks forbidden) to a graph orientation problem and thereby prove, using results from the literature, that with high probability there exists a near-perfect assignment (whose overhead is at most 1), and that polynomial-time optimal algorithms exist. For the communication metric, we provide new algorithms based on a graph model close to matching problems in bipartite graphs, and we prove that these algorithms are optimal for both the communication and makespan metrics. Finally, we provide simulations based on traces from a MapReduce cluster to test our strategies with realistic settings, showing that the proposed algorithms perform very well in the case of low or medium variance of the computation times of the different tasks of a job.
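A minimal sketch of the greedy, locality-aware Map scheduling that the abstract relates to a balls-into-bins process with choices: each task is assigned to the least-loaded processor among those holding a replica of its input file. The random replica placement and the three-fold replication are assumptions for illustration; the thesis also develops optimal graph-orientation and matching-based algorithms.

```python
# Minimal sketch of greedy, locality-aware scheduling of Map tasks.

import random

def greedy_schedule(tasks, n_procs, replication=3):
    load = [0] * n_procs
    placement = {}
    for task in tasks:
        # Processors owning a replica of this task's input file.
        owners = random.sample(range(n_procs), replication)
        chosen = min(owners, key=lambda p: load[p])   # the "choice"
        load[chosen] += 1
        placement[task] = chosen
    overhead = max(load) - len(tasks) // n_procs      # vs. perfect balance
    return placement, overhead

random.seed(1)
_, overhead = greedy_schedule(range(10_000), n_procs=100)
print("overhead over a perfect distribution:", overhead)
```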
APA, Harvard, Vancouver, ISO, and other styles
21

Soares, João Paulo da Conceição. "FEW phone file system." Master's thesis, FCT - UNL, 2009. http://hdl.handle.net/10362/2229.

Full text
Abstract:
Work presented within the scope of the Master's programme in Computer Engineering, as a partial requirement for obtaining the degree of Master in Computer Engineering.
The evolution of mobile phones has made these devices more than just simple mobile communication devices. Current mobile phones include features such as built-in digital cameras, the ability to play and record multimedia content, and the possibility of playing games. Most of these devices support applications developed in Java, as well as multiple wireless technologies (e.g. GSM/GPRS, UMTS, Bluetooth, and Wi-Fi). All these features have been made possible by the technological evolution that improved the computational power, storage capacity, and communication capabilities of these devices. This thesis presents a distributed data management system, based on optimistic replication, named FEW Phone File System. This system takes advantage of the storage capacity and wireless communication capabilities of current mobile phones by allowing users to carry their personal data "in" their mobile phones and to access it on any workstation, as if it were files in the local file system. The FEW Phone File System is based on a hybrid architecture that merges the client/server model with peer-to-peer replication and relies on periodic reconciliation to maintain consistency between replicas. The system's server side runs on the mobile phone, and the client on a workstation. The communication between the client and the server can be supported by one of multiple network technologies, allowing the FEW Phone File System to dynamically adapt to the available network connectivity. The presented system addresses the mobile phone's storage and power limitations by allowing multimedia content to be adapted to the device's specifications, thus reducing the volume of data transferred to the mobile phone and allowing more of the user's data to be stored. The FEW Phone File System also integrates mechanisms that maintain information about the existence of other copies of the stored files (e.g. on the WWW), avoiding transfers from the mobile device whenever accessing those copies is advantageous. Given the increasing number of on-line storage resources (e.g. CVS/SVN, Picasa), this approach allows those resources to be used by the FEW Phone File System to obtain stored copies of the user's files.
APA, Harvard, Vancouver, ISO, and other styles
22

Hirai, Tsuguhito. "Performance Modeling of Large-Scale Parallel-Distributed Processing for Cloud Environment." Kyoto University, 2018. http://hdl.handle.net/2433/232493.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Gomes, Diego da Silva. "JavaRMS : um sistema de gerência de dados para grades baseado num modelo par-a-par." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/15533.

Full text
Abstract:
Large-scale execution environments such as Grids emerged to meet high-performance computing demands. As in other execution platforms, their users need to get input data to their applications and to store their results. Although the Grid term is a metaphor in which computing resources are as easily accessible as those from the electric grid, its data and resource management tools are not yet mature enough to make this idea a reality. They usually target high-performance resources, where data reliability, availability and security are assured through human presence. This becomes critical when scientific applications need to process huge amounts of data. This work presents JavaRMS, a Grid data management system. By using a peer-to-peer model, it aggregates the low-capacity resources available in the Grid environment, reducing the cost of the solution. Resource heterogeneity is handled with the virtual node technique, in which peers receive data proportionally to the storage space they provide. Fragmentation is applied to make the use of low-capacity resources feasible and to improve the performance of operations involving file transfers, and replication is used to provide data persistence and to improve availability. In order to decrease the impact of maintenance operations, JavaRMS deals with resource dynamicity and instability through a state model. The architecture also includes user management services and protects resources against fraud through a quota system. All operations are designed to be secure. Finally, it provides the infrastructure needed for the future deployment of search services and user interaction tools. Experiments with the JavaRMS prototype showed that using a peer-to-peer model for resource organization and data location results in good scalability, and that the virtual node technique distributes data among machines in a balanced way, according to the offered storage capacity. Tests with the main file transfer operation proved that the model can significantly improve the performance of applications that need to process large volumes of data.
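A minimal sketch of the virtual-node technique described in the abstract: each peer is given a number of positions on a hash ring proportional to the storage it offers, so data fragments are spread according to capacity. The ring layout, hash function and capacity-to-virtual-node ratio are assumptions for illustration, not JavaRMS internals.

```python
# Minimal sketch of capacity-proportional virtual nodes on a hash ring.

import hashlib
from bisect import bisect

def h(key):
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

def build_ring(peers, gb_per_vnode=10):
    # peers: {peer_name: storage_in_gb}; more storage -> more virtual nodes
    ring = []
    for peer, capacity in peers.items():
        for i in range(max(1, capacity // gb_per_vnode)):
            ring.append((h(f"{peer}#{i}"), peer))
    return sorted(ring)

def locate(ring, key):
    # The fragment is stored on the peer owning the next point on the ring.
    points = [p for p, _ in ring]
    return ring[bisect(points, h(key)) % len(ring)][1]

ring = build_ring({"big-server": 100, "desktop": 20, "laptop": 10})
# "big-server" owns ~10x more virtual nodes than "laptop", so it
# receives roughly 10x more file fragments.
print(locate(ring, "experiment-42/results.dat"))
```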
APA, Harvard, Vancouver, ISO, and other styles
24

Fiala, Jan. "DNA výpočty a jejich aplikace." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-412902.

Full text
Abstract:
This thesis focuses on the design and implementation of an application that simulates the principles of DNA computing to solve selected problems. DNA computing represents an unconventional computing paradigm that is totally different from the concept of electronic computers. Its main idea is to interpret DNA as a medium for performing computation. Despite the fact that DNA reactions are slower than operations performed on computers, they may provide some promising features in the future. DNA operations are based on two important aspects: massive parallelism and the principle of complementarity. There are many important problems for which no algorithm is able to solve the problem in polynomial time using conventional computers. Therefore, solutions to such problems are sought by exploring the entire state space. In this case the massive parallelism of DNA operations becomes very important in order to reduce the complexity of finding a solution.
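A minimal sketch of the complementarity principle the abstract relies on: every strand has a Watson-Crick complement, and hybridisation between complementary strands is what lets DNA operations filter a massively parallel pool of candidate solutions. The helper names are illustrative; this only shows the encoding, not a full simulator.

```python
# Minimal sketch of Watson-Crick complementarity.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    # Reverse complement, as strands anneal in antiparallel orientation.
    return "".join(COMPLEMENT[base] for base in reversed(strand))

def anneals(s1, s2):
    # Two strands hybridise (stick together) iff they are complementary.
    return s2 == complement(s1)

assert complement("ACGT") == "ACGT"      # ACGT is its own reverse complement
assert anneals("AACGT", "ACGTT")
```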
APA, Harvard, Vancouver, ISO, and other styles
25

Sousa, Flávio Rubens de Carvalho. "RepliC: Replicação Elástica de Banco de Dados Multi-Inquilino em Nuvem com Qualidade de Serviço." Universidade Federal do Ceará, 2013. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=9121.

Full text
Abstract:
Economic factors are driving the growth of infrastructures and facilities for providing computing as a service, known as Cloud Computing, where companies and individuals can rent computing and storage capacity rather than making the large capital investments needed to build and install large-scale computing equipment. In the cloud, the service user has certain guarantees, such as performance and availability. These quality-of-service (QoS) guarantees are defined between the service provider and the user and are expressed through a service level agreement (SLA). This agreement consists of contracts that specify a quality level that must be met and penalties in case of failure. Many companies depend on an SLA and expect cloud providers to offer SLAs based on performance characteristics. In general, however, providers base their SLAs only on the availability of the services offered. Database management systems (DBMSs) for cloud computing must handle a large number of applications, or tenants. Multi-tenant approaches have been used to host several tenants within a single DBMS, favouring effective resource sharing and making it possible to manage a large number of tenants with irregular workload patterns. At the same time, cloud providers must reduce operational costs while guaranteeing quality. In this context, a key feature is database replication, which improves availability, performance and, consequently, quality of service. Data replication techniques have been used to improve availability, performance and scalability in many environments. However, most database replication strategies have focused on scalability and consistency aspects with a static number of replicas, and aspects related to elasticity for multi-tenant databases have received little attention. These issues matter in cloud environments, because providers need to add replicas according to the workload in order to avoid SLA violations, and to remove replicas and consolidate tenants when the workload decreases. To address this problem, this work presents RepliC, an approach to database replication in the cloud focused on quality of service, elasticity and efficient resource usage through multi-tenant techniques. RepliC uses information from the DBMSs and from the provider to provision resources dynamically. To evaluate RepliC, experiments that measure quality of service and elasticity are presented. The results of these experiments confirm that RepliC guarantees quality with a small amount of SLA violation while using resources efficiently.
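A minimal sketch of the elasticity rule the abstract describes: add a replica when the observed quality of service approaches an SLA violation, remove one when the workload drops. The thresholds, latency metric and function names are assumptions for illustration, not RepliC's actual provisioning policy.

```python
# Minimal sketch of an SLA-driven elastic replication decision rule.

def adjust_replicas(current, observed_latency_ms, sla_latency_ms,
                    min_replicas=1, max_replicas=10):
    if observed_latency_ms > 0.9 * sla_latency_ms:     # close to violating
        return min(current + 1, max_replicas)
    if observed_latency_ms < 0.5 * sla_latency_ms:     # over-provisioned
        return max(current - 1, min_replicas)
    return current

replicas = 2
for latency in (40, 95, 120, 30):      # measured latencies (ms), SLA = 100 ms
    replicas = adjust_replicas(replicas, latency, sla_latency_ms=100)
print("replicas after the workload trace:", replicas)
```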
APA, Harvard, Vancouver, ISO, and other styles
26

Sirvent, Pardell Raül. "GRID superscalar: a programming model for the Grid." Doctoral thesis, Universitat Politècnica de Catalunya, 2009. http://hdl.handle.net/10803/6015.

Full text
Abstract:
Durant els darrers anys el Grid ha sorgit com una nova plataforma per la computació distribuïda. La tecnologia Gris permet unir diferents recursos de diferents dominis administratius i formar un superordinador virtual amb tots ells. Molts grups de recerca han dedicat els seus esforços a desenvolupar un conjunt de serveis bàsics per oferir un middleware de Grid: una capa que permet l'ús del Grid. De tota manera, utilitzar aquests serveis no és una tasca fácil per molts usuaris finals, cosa que empitjora si l'expertesa d'aquests usuaris no està relacionada amb la informàtica.
Això té una influència negativa a l'hora de que la comunitat científica adopti la tecnologia Grid. Es veu com una tecnologia potent però molt difícil de fer servir. Per facilitar l'ús del Grid és necessària una capa extra que amagui la complexitat d'aquest i permeti als usuaris programar o portar les seves aplicacions de manera senzilla.
Existeixen moltes propostes d'eines de programació pel Grid. En aquesta tesi fem un resum d'algunes d'elles, i podem veure que existeixen eines conscients i no-conscients del Grid (es programen especificant o no els detalls del Grid, respectivament). A més, molt poques d'aquestes eines poden explotar el paral·lelisme implícit de l'aplicació, i en la majoria d'elles, l'usuari ha de definir aquest paral·lelisme de manera explícita. Una altra característica que considerem important és si es basen en llenguatges de programació molt populars (com C++ o Java), cosa que facilita l'adopció per part dels usuaris finals.
En aquesta tesi, el nostre objectiu principal ha estat crear un model de programació pel Grid basat en la programació seqüencial i els llenguatges més coneguts de la programació imperativa, capaç d'explotar el paral·lelisme implícit de les aplicacions i d'accelerar-les fent servir els recursos del Grid de manera concurrent. A més, com el Grid és de naturalesa distribuïda, heterogènia i dinàmica i degut també a que el nombre de recursos que pot formar un Grid pot ser molt gran, la probabilitat de que es produeixi una errada durant l'execució d'una aplicació és elevada. Per tant, un altre dels nostres objectius ha estat tractar qualsevol tipus d'error que pugui sorgir durant l'execució d'una aplicació de manera automàtica (ja siguin errors relacionats amb l'aplicació o amb el Grid). GRID superscalar (GRIDSs), la principal contribució d'aquesta tesi, és un model de programació que assoleix els
objectius mencionats proporcionant una interfície molt petita i simple i un entorn d'execució que és capaç d'executar en paral·lel el codi proporcionat fent servir el Grid. La nostra interfície de programació permet a un usuari programar una aplicació no-conscient del Grid, amb llenguatges imperatius coneguts i populars (com C/C++, Java, Perl o Shell script) i de manera seqüencial, per tant dóna un pas important per ajudar als usuaris a adoptar la tecnologia Grid.
Hem aplicat el nostre coneixement de l'arquitectura de computadors i el disseny de microprocessadors a l'entorn d'execució de GRIDSs. Tal com es fa a un processador superescalar, l'entorn d'execució de GRIDSs és capaç de realitzar un anàlisi de dependències entre les tasques que formen l'aplicació, i d'aplicar tècniques de renombrament per incrementar el seu paral·lelisme. GRIDSs genera automàticament a partir del codi principal de l'usuari un graf que descriu les dependències de dades en l'aplicació. També presentem casos d'ús reals del model de programació en els camps de la química computacional i la bioinformàtica, que demostren que els nostres objectius han estat assolits.
Finalment, hem estudiat l'aplicació de diferents tècniques per detectar i tractar fallades: checkpoint, reintent i replicació de tasques. La nostra proposta és proporcionar un entorn capaç de tractar qualsevol tipus d'errors, de manera transparent a l'usuari sempre que sigui possible. El principal avantatge d'implementar aquests mecanismos al nivell del model de programació és que el coneixement a nivell de l'aplicació pot ser explotat per crear dinàmicament una estratègia de tolerància a fallades per cada aplicació, i evitar introduir sobrecàrrega en entorns lliures d'errors.
During last years, the Grid has emerged as a new platform for distributed computing. The Grid technology allows joining different resources from different administrative domains and forming a virtual supercomputer with all of them.
Many research groups have dedicated their efforts to developing a set of basic services to offer a Grid middleware: a layer that enables the use of the Grid. However, using these services is not an easy task for many end users, especially if their expertise is not related to computer science. This has a negative influence on the adoption of Grid technology by the scientific community, which sees it as a powerful technology but one that is very difficult to exploit. To ease the use of the Grid, an extra layer is needed that hides all of its complexity and allows users to program or port their applications in an easy way.
There have been many proposals of programming tools for the Grid. In this thesis we give an overview of some of them, showing that there exist both Grid-aware and Grid-unaware environments (programmed with or without specifying details of the Grid, respectively). Moreover, very few existing tools can exploit the implicit parallelism of the application; in the majority of them, the user must define the parallelism explicitly. Another important feature we consider is whether they are based on widely used programming languages (such as C++ or Java), which eases adoption by end users.
In this thesis, our main objective has been to create a programming model for the Grid based on sequential programming and well-known imperative programming languages, able to exploit the implicit parallelism of applications and to speed them up by using the Grid resources concurrently. Moreover, because the Grid has a distributed, heterogeneous and dynamic nature and also because the number of resources that form a Grid can be very big, the probability that an error arises during an application's execution is big. Thus, another of our objectives has been to automatically deal with any type of errors which may arise during the execution of the application (application related or Grid related).
GRID superscalar (GRIDSs), the main contribution of this thesis, is a programming model that achieves these mentioned objectives by providing a very small and simple interface and a runtime that is able to execute in parallel the code provided using the Grid. Our programming interface allows a user to program a Grid-unaware application with already known and popular imperative languages (such as C/C++, Java, Perl or Shell script) and in a sequential fashion, therefore giving an important step to assist end users in the adoption of the Grid technology.
We have applied our knowledge from computer architecture and microprocessor design to the GRIDSs runtime. As it is done in a superscalar processor, the GRIDSs runtime system is able to perform a data dependence analysis between the tasks that form an application, and to apply renaming techniques in order to increase its parallelism. GRIDSs generates automatically from user's main code a graph describing the data dependencies in the application.
We present real use cases of the programming model in the fields of computational chemistry and bioinformatics, which demonstrate that our objectives have been achieved.
Finally, we have studied the application of several fault detection and treatment techniques: checkpointing, task retry and task replication. Our proposal is to provide an environment able to deal with all types of failures, transparently for the user whenever possible. The main advantage in implementing these mechanisms at the programming model level is that application-level knowledge can be exploited in order to dynamically create a fault tolerance strategy for each application, and avoiding to introduce overhead in error-free environments.
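The runtime behaviour described above — building a task graph from the data each task reads and writes, much as a superscalar processor tracks dependences between instructions — can be illustrated with a small sketch. The task list and file names below are made-up assumptions, not GRIDSs code, and renaming is only hinted at in a comment.

    # Illustrative data-dependence analysis between tasks, in the spirit of what
    # the GRIDSs runtime does automatically from the user's sequential main code.
    def build_dependence_graph(tasks):
        """tasks: list of (name, inputs, outputs); returns edges (producer, consumer)."""
        edges = []
        last_writer = {}                       # file -> task that last wrote it
        for name, inputs, outputs in tasks:    # tasks in sequential program order
            for f in inputs:
                if f in last_writer:           # read-after-write: true dependence
                    edges.append((last_writer[f], name))
            for f in outputs:
                last_writer[f] = name          # (renaming would relax WAR/WAW cases)
        return edges

    workflow = [
        ("simulate_A", ["params.cfg"], ["a.out"]),
        ("simulate_B", ["params.cfg"], ["b.out"]),
        ("merge",      ["a.out", "b.out"], ["merged.out"]),
    ]
    print(build_dependence_graph(workflow))
    # [('simulate_A', 'merge'), ('simulate_B', 'merge')] -> A and B can run in parallel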
APA, Harvard, Vancouver, ISO, and other styles
27

Xavier, Rafael Silveira. "Replicadores computacionais: propriedades básicas e modelagens preliminares." Universidade Presbiteriana Mackenzie, 2010. http://tede.mackenzie.br/jspui/handle/tede/1391.

Full text
Abstract:
Fundo Mackenzie de Pesquisa
Molecular replication was introduced as a possible theory to explain the origin of life. Since their proposal they have been extensively studied from a biochemical perspective. This work proposes a taxonomy for the main properties of replicators that are important for building computational tools to solve complex problems as well as introduces two computational models for these entities in order to observe and analyze the behavior of these models in light of the natural properties of replicators introduced.
A replicação molecular foi introduzida como uma possível teoria para explicar a origem da vida. Desde sua proposição, ela vem sendo estudada extensivamente a partir de uma perspectiva bioquímica. Baseado na literatura de replicadores moleculares, esta dissertação organiza e propõe uma taxonomia para as principais propriedades dos replicadores que sejam interessantes para a construção de ferramentas computacionais voltadas para a solução de problemas complexos na engenharia e na computação, bem como introduz dois modelos computacionais baseado nessas entidades a fim de observar e analisar o comportamento destes modelos sob a luz das propriedades dos replicadores naturais discutidas nesta dissertação.
APA, Harvard, Vancouver, ISO, and other styles
28

Marletto, Chiara. "Issues of control and causation in quantum information theory." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:dba641e6-feb3-44df-968f-1b9a6564e836.

Full text
Abstract:
Issues of control and causation are central to the Quantum Theory of Computation. Yet there is no place for them in fundamental laws of Physics when expressed in the prevailing conception, i.e., in terms of initial conditions and laws of motion. This thesis aims at arguing that Constructor Theory, recently proposed by David Deutsch to generalise the quantum theory of computation, is a candidate to provide a theory of control and causation within Physics. To this end, I shall present a physical theory of information that is formulated solely in constructor-theoretic terms, i.e., in terms of which transformations of physical systems are possible and which are impossible. This theory solves the circularity at the foundations of existing information theory; it provides a unifying relation between classical and quantum information, revealing the single property underlying the most distinctive phenomena associated with the latter: the unpredictability of the outcomes of some deterministic processes, the lack of distinguishability of some states, the irreducible perturbation caused by measurement and the existence of locally inaccessible information in composite systems (entanglement). This thesis also aims to investigate the restrictions that quantum theory imposes on copying-like tasks. To this end, I will propose a unifying, picture-independent formulation of the no-cloning theorem. I will also discuss a protocol to accomplish the closely related task of transferring perfectly a quantum state along a spin chain, in the presence of systematic errors. Furthermore, I will address the problem of whether self-replication (as it occurs in living organisms) is compatible with Quantum Mechanics. Some physicists, notably Wigner, have argued that this logic is in fact forbidden by Quantum Mechanics, thus claiming that the latter is not a universal theory. I shall prove that those claims are invalid and that the logic of self-replication is, of course, compatible with Quantum Mechanics.
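For reference, the unitarity argument behind the no-cloning theorem mentioned above can be stated in a few lines; this is the standard textbook argument, not the picture-independent formulation developed in the thesis.

    % Assume a single unitary U cloned two arbitrary pure states:
    U\bigl(|\psi\rangle \otimes |0\rangle\bigr) = |\psi\rangle \otimes |\psi\rangle ,
    \qquad
    U\bigl(|\phi\rangle \otimes |0\rangle\bigr) = |\phi\rangle \otimes |\phi\rangle .
    % Unitaries preserve inner products, hence
    \langle\psi|\phi\rangle = \langle\psi|\phi\rangle^{2} ,
    % so \langle\psi|\phi\rangle must equal 0 or 1: only orthogonal or identical
    % states can be copied by a fixed U, and no universal cloner exists.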
APA, Harvard, Vancouver, ISO, and other styles
29

Oberst, Oliver [Verfasser], and G. [Akademischer Betreuer] Quast. "Development of a Virtualized Computing Environment for LHC Analyses on Standard Batch Systems and Measurement of the Inclusive Jet Cross-Section with the CMS experiment at 7 TeV / Oliver Oberst. Betreuer: G. Quast." Karlsruhe : KIT-Bibliothek, 2011. http://d-nb.info/1014279895/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Ahmed-Nacer, Mehdi. "Méthodologie d'évaluation pour les types de données répliqués." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0039/document.

Full text
Abstract:
Pour fournir une disponibilité permanente des données et réduire la latence réseau, les systèmes de partage de données se basent sur la réplication optimiste. Dans ce paradigme, il existe plusieurs copies de l'objet partagé dite répliques stockées sur des sites. Ces répliques peuvent être modifiées librement et à tout moment. Les modifications sont exécutées en local puis propagées aux autres sites pour y être appliquées. Les algorithmes de réplication optimiste sont chargés de gérer les modifications parallèles. L'objectif de cette thèse est de proposer une méthodologie d'évaluation pour les algorithmes de réplication optimiste. Le contexte de notre étude est l'édition collaborative. Nous allons concevoir pour cela un outil d'évaluation qui intègre un mécanisme de génération de corpus et un simulateur d'édition collaborative. À travers cet outil, nous allons dérouler plusieurs expériences sur deux types de corpus: synchrone et asynchrone. Dans le cas d'une édition collaborative synchrone, nous évaluerons les performances des différents algorithmes de réplication sur différents critères tels que le temps d'exécution, l'occupation mémoire, la taille des messages, etc. Nous proposerons ensuite quelques améliorations. En plus, dans le cas d'une édition collaborative asynchrone, lorsque deux répliques se synchronisent, les conflits sont plus nombreux à apparaître. Le système peut bloquer la fusion des modifications jusqu'à ce que l'utilisateur résolut les conflits. Pour réduire le nombre de ces conflits et l'effort des utilisateurs, nous proposerons une métrique d'évaluation et nous évaluerons les différents algorithmes sur cette métrique. Nous analyserons le résultat pour comprendre le comportement des utilisateurs et nous proposerons ensuite des algorithmes pour résoudre les conflits les plus important et réduire ainsi l'effort des développeurs. Enfin, nous proposerons une nouvelle architecture hybride basée sur deux types d'algorithmes de réplication. Contrairement aux architectures actuelles, l'architecture proposéeest simple, limite les ressources sur les dispositifs clients et ne nécessite pas de consensus entre les centres de données
To provide a high availability from any where, at any time, with low latency, data is optimistically replicated. This model allows any replica to apply updates locally, while the operations are later sent to all the others. In this way, all replicas eventually apply all updates, possibly even in different order. Optimistic replication algorithms are responsible for managing the concurrent modifications and ensure the consistency of the shared object. In this thesis, we present an evaluation methodology for optimistic replication algorithms. The context of our study is collaborative editing. We designed a tool that implements our methodology. This tool integrates a mechanism to generate a corpus and a simulator to simulate sessions of collaborative editing. Through this tool, we made several experiments on two different corpus: synchronous and asynchronous. In synchronous collaboration, we evaluate the performance of optimistic replication algorithms following several criteria such as execution time, memory occupation, message's size, etc. After analysis, some improvements were proposed. In addition, in asynchronous collaboration, when replicas synchronize their modifications, more conflicts can appear in the document. In this case, the system cannot merge the modifications until a user resolves them. In order to reduce the conflicts and the user's effort, we propose an evaluation metric and we evaluate the different algorithms on this metric. Afterward, we analyze the quality of the merge to understand the behavior of the users and the collaboration cases that create conflicts. Then, we propose algorithms for resolving the most important conflicts, therefore reducing the user's effort. Finally, we propose a new architecture for supporting cloud-based collaborative editing system. This architecture is based on two optimistic replication algorithms. Unlike current architectures, the proposed one removes the problems of the centralization and consensus between data centers, is simple and accessible for any developers
APA, Harvard, Vancouver, ISO, and other styles
31

Tos, Uras. "Réplication de données dans les systèmes de gestion de données à grande échelle." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30066/document.

Full text
Abstract:
Ces dernières années, la popularité croissante des applications, e.g. les expériences scientifiques, Internet des objets et les réseaux sociaux, a conduit à la génération de gros volumes de données. La gestion de telles données qui de plus, sont hétérogènes et distribuées à grande échelle, constitue un défi important. Dans les systèmes traditionnels tels que les systèmes distribués et parallèles, les systèmes pair-à-pair et les systèmes de grille, répondre à des objectifs tels que l'obtention de performances acceptables tout en garantissant une bonne disponibilité de données constituent des objectifs majeurs pour l'utilisateur, en particulier lorsque ces données sont réparties à travers le monde. Dans ce contexte, la réplication de données, une technique très connue, permet notamment: (i) d'augmenter la disponibilité de données, (ii) de réduire les coûts d'accès aux données et (iii) d'assurer une meilleure tolérance aux pannes. Néanmoins, répliquer les données sur tous les nœuds est une solution non réaliste vu qu'elle génère une consommation importante de la bande passante en plus de l'espace limité de stockage. Définir des stratégies de réplication constitue la solution à apporter à ces problématiques. Les stratégies de réplication de données qui ont été proposées pour les systèmes traditionnels cités précédemment ont pour objectif l'amélioration des performances pour l'utilisateur. Elles sont difficiles à adapter dans les systèmes de cloud. En effet, le fournisseur de cloud a pour but de générer un profit en plus de répondre aux exigences des locataires. Satisfaire les attentes de ces locataire en matière de performances sans sacrifier le profit du fournisseur d'un coté et la gestion élastiques des ressources avec une tarification suivant le modèle 'pay-as-you-go' d'un autre coté, constituent des principes fondamentaux dans les systèmes cloud. Dans cette thèse, nous proposons une stratégie de réplication de données pour satisfaire les exigences du locataire, e.g. les performances, tout en garantissant le profit économique du fournisseur. En se basant sur un modèle de coût, nous estimons le temps de réponse nécessaire pour l'exécution d'une requête distribuée. La réplication de données n'est envisagée que si le temps de réponse estimé dépasse un seuil fixé auparavant dans le contrat établi entre le fournisseur et le client. Ensuite, cette réplication doit être profitable du point de vue économique pour le fournisseur. Dans ce contexte, nous proposons un modèle économique prenant en compte aussi bien les dépenses et les revenus du fournisseur lors de l'exécution de cette requête. Nous proposons une heuristique pour le placement des répliques afin de réduire les temps d'accès à ces nouvelles répliques. De plus, un ajustement du nombre de répliques est adopté afin de permettre une gestion élastique des ressources. Nous validons la stratégie proposée par une évaluation basée sur une simulation. Nous comparons les performances de notre stratégie à celles d'une autre stratégie de réplication proposée dans les clouds. L'analyse des résultats obtenus a montré que les deux stratégies comparées répondent à l'objectif de performances pour le locataire. Néanmoins, une réplique de données n'est crée, avec notre stratégie, que si cette réplication est profitable pour le fournisseur
In recent years, the growing popularity of large-scale applications, e.g. scientific experiments, the Internet of Things and social networking, has led to the generation of large volumes of data. The management of this data presents a significant challenge, as the data is heterogeneous and distributed on a large scale. In traditional systems, including distributed and parallel systems, peer-to-peer systems and grid systems, meeting objectives such as achieving acceptable performance while ensuring good availability of data is a major challenge for service providers, especially when the data is distributed around the world. In this context, data replication, as a well-known technique, allows: (i) increased data availability, (ii) reduced data access costs, and (iii) improved fault-tolerance. However, replicating data on all nodes is an unrealistic solution, as it generates significant bandwidth consumption in addition to exhausting limited storage space. Defining good replication strategies is a solution to these problems. The data replication strategies that have been proposed for the traditional systems mentioned above are intended to improve performance for the user, and they are difficult to adapt to cloud systems. Indeed, cloud providers aim to generate a profit in addition to meeting tenant requirements. Meeting the performance expectations of the tenants without sacrificing the provider's profit, as well as managing resource elasticity with a pay-as-you-go pricing model, are the fundamentals of cloud systems. In this thesis, we propose a data replication strategy that satisfies the requirements of the tenant, such as performance, while guaranteeing the economic profit of the provider. Based on a cost model, we estimate the response time required to execute a distributed database query. Data replication is only considered if, for any query, the estimated response time exceeds a threshold previously set in the contract between the provider and the tenant. The planned replication must then also be economically beneficial to the provider. In this context, we propose an economic model that takes into account both the expenditures and the revenues of the provider during the execution of any particular database query. Once data replication is decided upon, a heuristic placement approach is used to find the placement of the new replicas in order to reduce access time. In addition, a dynamic adjustment of the number of replicas is adopted to allow elastic management of resources. The proposed strategy is validated in an experimental evaluation carried out in a simulation environment. The analysis of the results obtained, compared with another data replication strategy proposed for cloud systems, shows that the two compared strategies meet the performance objective for the tenant. Nevertheless, with our strategy, a data replica is created only if this replication is profitable for the provider.
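A minimal sketch of the two-part decision rule described above: replicate only if the estimated response time exceeds the SLA threshold, and only if the replication remains profitable for the provider. The numbers and function signature are placeholder assumptions, not the thesis's cost or economic model.

    # Hypothetical replicate-or-not decision: tenant performance first, provider
    # profit second. All figures are placeholders for the thesis's cost model.
    def should_replicate(estimated_response_s: float,
                         sla_threshold_s: float,
                         expected_revenue: float,
                         replication_cost: float) -> bool:
        if estimated_response_s <= sla_threshold_s:
            return False                                    # SLA met, nothing to do
        return expected_revenue - replication_cost > 0.0    # replicate only if profitable

    print(should_replicate(estimated_response_s=3.2, sla_threshold_s=2.0,
                           expected_revenue=5.0, replication_cost=1.5))   # True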
APA, Harvard, Vancouver, ISO, and other styles
32

Miller, Jean Anne. "Naturalism & Objectivity: Methods and Meta-methods." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/28329.

Full text
Abstract:
The error statistical account provides a basic account of evidence and inference. Formally, the approach is a re-interpretation of standard frequentist (Fisherian, Neyman-Pearson) statistics. Informally, it gives an account of inductive inference based on arguing from error, an analog of frequentist statistics, which keeps the concept of error probabilities central to the evaluation of inferences and evidence. Error statistical work at present tends to remain distinct from other approaches of naturalism and social epistemology in philosophy of science and, more generally, Science and Technology Studies (STS). My goal is to employ the error statistical program in order to address a number of problems with approaches in philosophy of science, which fall under two broad headings: (1) naturalistic philosophy of science and (2) social epistemology. The naturalistic approaches that I am interested in looking at seek to provide us with an account of scientific and meta-scientific methodologies that will avoid extreme skepticism, relativism and subjectivity and claim to teach us something about scientific inferences and evidence produced by experiments (broadly construed). I argue that these accounts fail to identify a satisfactory program for achieving those goals; moreover, to the extent that they succeed, it is by latching on to the more general principles and arguments from error statistics. In sum, I will apply the basic ideas from error statistics and use them to examine (and improve upon) an area to which they have not yet been applied, namely in assessing and pushing forward these interdisciplinary pursuits involving naturalistic philosophies of science that appeal to cognitive science, psychology, the scientific record and a variety of social epistemologies.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
33

Oliveira, Ricardo Ramos de. "Avaliação da portabilidade entre fornecedores de teste como serviço na computação em nuvem." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-16072018-170853/.

Full text
Abstract:
O processo de automatização de teste de software possui alto custo envolvido em sistemas de larga escala, pois exigem cenários de teste complexos e tempos de execução extremamente longos. Além disso, cada etapa do processo de teste requer recursos computacionais e um tempo considerável para a execução de muitos casos de teste, tornando-se um gargalo para as empresas de Tecnologia da Informação (TI). Neste contexto, os benefícios e oportunidades oferecidos pela combinação da computação em nuvem com o Teste como Serviço (Testing as a Service, TaaS), que é considerado um novo modelo de negócio e de serviço atraente e promissor, podem proporcionar um impacto positivo na redução do tempo de execução dos testes de maneira custo-efetiva e aumentar o retorno sobre o investimento ou Return on investment (ROI). Todavia, existe o problema de vendor lock-in, que é o aprisionamento do usuário à plataforma de um fornecedor específico ou serviço de teste, ocasionado pela dificuldade de migrar de um fornecedor TaaS para outro, limitando a utilização dessas novas tecnologias de maneira efetiva e eficiente, impedindo assim, a ampla adoção do TaaS. Como os estudos existentes não são rigorosos ou conclusivos e, principalmente, devido à falta de evidência empírica na área de serviço de teste, muitas questões devem ser investigadas na perspectiva da migração entre os provedores de TaaS. O objetivo deste trabalho é reduzir o impacto ocasionado pelo problema de vendor lock-in no processo de automatização de testes na computação em nuvem, na escrita, configuração, execução e gerenciamento dos resultados de testes automatizados. Neste contexto, foi desenvolvido o protótipo da abordagem intitulada Multi-TaaS por meio de uma biblioteca Java como prova de conceito. A abordagem Multi-TaaS é uma camada de abstração e a sua arquitetura permite abstrair e flexibilizar a troca de fornecedores de TaaS de forma portável, pois permite encapsular toda a complexidade da implementação do engenheiro de software ao desacoplar o teste automatizado de qual plataforma TaaS ele será executado, bem como abstrair os aspectos da comunicação e integração entre as APIs REST proprietárias dos diferentes fornecedores de TaaS. Além disso, a abordagem Multi-TaaS possibilita também sumarizar os resultados dos testes automatizados de forma independente das tecnologias da plataforma TaaS subjacente. Foram realizadas avaliações comparativas da eficiência, efetividade, dificuldade e do esforço de migração entre as abordagens Multi-TaaS e abordagem convencional, por meio de experimentos controlados. Os resultados deste trabalho indicam que a nova abordagem permite facilitar a troca do serviço de teste, melhorar a eficiência e, principalmente, reduzir o esforço e os custos de manutenção na migração entre fornecedores de TaaS. Os estudos realizados no experimento controlado são promissores e podem auxiliar os engenheiros de software na tomada de decisão quanto aos riscos associados ao vendor lock-in no TaaS. Por fim, a abordagem Multi-TaaS contribui, principalmente, para a portabilidade dos testes automatizados na nuvem e da sumarização dos resultados dos testes e, consequentemente, possibilita que o modelo de serviço TaaS na computação em nuvem seja amplamente adotado, de forma consciente, no futuro.
The automation of software testing involves high costs in large-scale systems, since it requires complex test scenarios and extremely long execution times. Moreover, each of its steps demands computational resources and considerable time for running many test cases, which makes it a bottleneck for Information Technology (IT) companies. The benefits and opportunities offered by the combination of cloud computing and Testing as a Service (TaaS), considered a new business and service model, can reduce the execution time of tests in a cost-effective way and improve Return on Investment (ROI). However, the lock-in problem, i.e., the imprisonment of the user in the platform of a specific vendor or test service caused by the difficult migration from one TaaS provider to another limits the effective use of such new technologies and prevents the widespread adoption of TaaS. As studies conducted are neither rigorous, nor conclusive, and mainly due to the lack of empirical evidence, many issues must be investigated from the perspective of migration among TaaS providers. This research aims at reductions in the impact of the vendor lock-in problem on the automation process of testing in cloud computing, writing, configuration, execution and management of automated test results. The prototype of the Multi- TaaS approach was developed through a Java library as a proof of concept. The Multi-TaaS approach is an abstraction layer and its architecture enables the abstraction and flexibilization of the exchange of TaaS providers in a portable way, once the complexity of the software engineers implementation can be encapsulated. The two main advantages of Multi-TaaS are the decoupling of the automated test from the TaaS platform on which it will be executed and the abstraction of the communication and integration aspects among the proprietary REST APIs of the different TaaS providers. The approach also enables the summarization of automated test results independently of the underlying TaaS platform technologies. A comparative evaluation between Multi-TaaS and conventional migration approaches regarding the difficulty, efficiency, effectiveness and effort of migration among TaaS providers was conducted through controlled experiments.The results show the approach facilitates the exchange of test service, improves efficiency and reduces the effort and maintenance costs of migration among TaaS providers. The studies conducted in the controlled experiment are promising and can assist software engineers in decision-making regarding the risks associated with vendor lock-in in TaaS. The Multi-TaaS approach contributes mainly to the portability of automated tests in the cloud and summarization of their results. Finally, this research enables also the widespread adoption of the TaaS service model in cloud computing, consciously, in the future.
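A sketch of the abstraction-layer idea described above: automated tests are written against a provider-neutral interface, and per-vendor adapters encapsulate each proprietary REST API, so switching TaaS providers does not touch the test code. The class and method names are illustrative assumptions; the actual Multi-TaaS prototype is a Java library with its own API.

    # Illustrative provider-neutral test-service interface with per-vendor adapters;
    # names are assumptions, not the Multi-TaaS (Java) prototype API.
    from abc import ABC, abstractmethod

    class TaaSProvider(ABC):
        @abstractmethod
        def run_suite(self, suite_id: str) -> str: ...       # returns an execution id
        @abstractmethod
        def summarize(self, execution_id: str) -> dict: ...  # provider-independent summary

    class ProviderA(TaaSProvider):                            # hypothetical vendor A
        def run_suite(self, suite_id):  return f"A-{suite_id}"
        def summarize(self, execution_id): return {"passed": 10, "failed": 0}

    class ProviderB(TaaSProvider):                            # hypothetical vendor B
        def run_suite(self, suite_id):  return f"B-{suite_id}"
        def summarize(self, execution_id): return {"passed": 9, "failed": 1}

    def run_everywhere(suite_id, providers):
        """Test code never touches a vendor API directly, so migration is cheap."""
        return {type(p).__name__: p.summarize(p.run_suite(suite_id)) for p in providers}

    print(run_everywhere("smoke-tests", [ProviderA(), ProviderB()]))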
APA, Harvard, Vancouver, ISO, and other styles
34

Karim, Yacin. "Vers une vérification expérimentale de la théorie de la relativité restreinte : réplication des expériences de Charles-Eugène Guye (1907-1921)." Phd thesis, Université Claude Bernard - Lyon I, 2011. http://tel.archives-ouvertes.fr/tel-00839315.

Full text
Abstract:
In this thesis we focus on a rather poorly documented aspect of the history of the theory of special relativity: the search for an experimental verification of its predictions on the variation of inertia with velocity. We complement earlier historical studies of the experiments of Kaufmann (1906) and Bucherer (1908), and show that the verification of the Lorentz-Einstein formula still constituted an experimental challenge after 1911. We study in particular the research directed by Charles-Eugène Guye in collaboration with his students Simon Ratnowsky (1907-1910) and Charles Lavanchy (1913-1915). We show that the second phase of this work was very widely regarded in the 1920s as the most precise experimental verification of the Lorentz-Einstein formula. We use the replication method, applied to the Guye and Lavanchy experiment. Their thorough mastery of cathode-ray emission, combined with a specific method of investigation, allowed them to overcome all the difficulties then identified as detrimental to the success of this type of experiment.
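For context, the Lorentz-Einstein prediction at stake in these experiments is the standard relativistic variation of (transverse) mass with velocity, which deflection experiments on cathode rays probe through the charge-to-mass ratio. This is the textbook formula, not material drawn from the thesis itself.

    m(v) \;=\; \frac{m_0}{\sqrt{1 - v^{2}/c^{2}}}
    \qquad\Longrightarrow\qquad
    \frac{e}{m(v)} \;=\; \frac{e}{m_0}\,\sqrt{1 - v^{2}/c^{2}} .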
APA, Harvard, Vancouver, ISO, and other styles
35

Carvalho, Roberto Pires de. "Sistemas de arquivos paralelos: alternativas para a redução do gargalo no acesso ao sistema de arquivos." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-23052006-182520/.

Full text
Abstract:
Nos últimos anos, a evolução dos processadores e redes para computadores de baixo custo foi muito maior se comparada com o aumento do desempenho dos discos de armazenamento de dados. Com isso, muitas aplicações estão encontrando dificuldades em atingir o pleno uso dos processadores, pois estes têm de esperar até que os dados cheguem para serem utilizados. Uma forma popular para resolver esse tipo de empecílio é a adoção de sistemas de arquivos paralelos, que utilizam a velocidade da rede local, além dos recursos de cada máquina, para suprir a deficiência de desempenho no uso isolado de cada disco. Neste estudo, analisamos alguns sistemas de arquivos paralelos e distribuídos, detalhando aqueles mais interessantes e importantes. Por fim, mostramos que o uso de um sistema de arquivos paralelo pode ser mais eficiente e vantajoso que o uso de um sistema de arquivos usual, para apenas um cliente.
In the last years, the evolution of the data processing power and network transmission for low cost computers was much bigger if compared to the increase of the speed of getting the data stored in disks. Therefore, many applications are finding difficulties in reaching the full use of the processors, because they have to wait until the data arrive before using. A popular way to solve this problem is to use a parallel file system, which uses the local network speed to avoid the performance bottleneck found in an isolated disk. In this study, we analyze some parallel and distributed file systems, detailing the most interesting and important ones. Finally, we show the use of a parallel file system can be more efficient than the use of a usual local file system, for just one client.
APA, Harvard, Vancouver, ISO, and other styles
36

Koelemeijer, Dorien. "The Design and Evaluation of Ambient Displays in a Hospital Environment." Thesis, Malmö högskola, Fakulteten för kultur och samhälle (KS), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23601.

Full text
Abstract:
Hospital environments are ranked as one of the most stressful contemporary work environments for their employees, and this especially concerns nurses (Nejati et al. 2016). One of the core problems comprises the notion that the current technology adopted in hospitals does not support the mobile nature of medical work and the complex work environment, in which people and information are distributed (Bardram 2003). The employment of inadequate technology and the strenuous access to information results in a decrease in efficiency regarding the fulfilment of medical tasks, and puts a strain on the attention of the medical personnel. This thesis proposes a solution to the aforementioned problems through the design of ambient displays, that inform the medical personnel with the health statuses of patients whilst requiring minimal allocation of attention. The ambient displays concede a hierarchy of information, where the most essential information encompasses an overview of patients’ vital signs. Data regarding the vital signs are measured by biometric sensors and are embodied by shape-changing interfaces, of which the ambient displays consist. User-authentication permits the medical personnel to access a deeper layer within the hierarchy of information, entailing clinical data such as patient EMRs, after gesture-based interaction with the ambient display. The additional clinical information is retrieved on the user’s PDA, and can subsequently be viewed in more detail, or modified at any place within the hospital.In this thesis, prototypes of shape-changing interfaces were designed and evaluated in a hospital environment. The evaluation was focused on the interaction design and user-experience of the shape-changing interface, the capabilities of the ambient displays to inform users through peripheral awareness, as well as the remote communication between patient and healthcare professional through biometric data. The evaluations indicated that the required attention allocated for the acquisition of information from the shape-changing interface was minimal. The interaction with the ambient display, as well as with the PDA when accessing additional clinical data, was deemed intuitive, yet comprised a short learning curve. Furthermore, the evaluations in situ pointed out that for optimised communication through the ambient displays, an overview of the health statuses of approximately eight patients should be displayed, and placed in the corridors of the hospital ward.
APA, Harvard, Vancouver, ISO, and other styles
37

Диденко, Дмитрий Георгиевич. "Мультиагентная система дискретно-событийного имитационного моделирования OpenGPSS." Doctoral thesis, 2010. https://ela.kpi.ua/handle/123456789/1062.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Jane-Ferng, Chiu, and 邱展逢. "Process-Replication Technique in Distributed Computing Systems." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/58116771130746686457.

Full text
Abstract:
碩士
國立臺灣科技大學
工程技術研究所
82
The paper presents a process-replication protocol which aims at providing fault tolerance as well as performance improvement to applications such as long-running and real-time tasks. An identical delivery order of messages is enforced on all replicas of a troupe using multicasts for inter- and intra-troupe communication. The detailed design of the protocol is given in the paper. The protocol is self-contained in the sense that crashes in a troupe are handled internally without affecting the operation of other troupes. The crash-handling procedure is simple, and the associated overhead during failure-free operation is small. The protocol takes advantage of the redundancy of processes to expedite the completion of a distributed task by speeding up the determination of message sequences and the transmission of outgoing data messages, at the expense of small control messages. Simulation is carried out to show the performance improvement.
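The property the protocol enforces — every replica of a troupe delivers messages in the same order — can be illustrated with a small sketch. For brevity the sketch uses a single sequencer to stamp messages; the thesis's protocol is decentralised and achieves ordering with multicasts, so this is only an analogy for the delivery guarantee, not the protocol itself.

    # Sketch of totally ordered delivery to all replicas of a troupe (illustrative only).
    import itertools

    class Sequencer:
        def __init__(self):
            self._seq = itertools.count()
        def stamp(self, msg):
            return (next(self._seq), msg)          # attach a global sequence number

    class Replica:
        def __init__(self, name):
            self.name, self.inbox, self.delivered = name, [], []
        def receive(self, stamped):
            self.inbox.append(stamped)
            self.inbox.sort()                       # hold-back queue, sorted by sequence
            while self.inbox and self.inbox[0][0] == len(self.delivered):
                self.delivered.append(self.inbox.pop(0)[1])

    seq = Sequencer()
    troupe = [Replica("r1"), Replica("r2"), Replica("r3")]
    for m in ["request-1", "request-2", "request-3"]:
        stamped = seq.stamp(m)
        for r in troupe:                            # multicast to every replica
            r.receive(stamped)
    assert all(r.delivered == troupe[0].delivered for r in troupe)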
APA, Harvard, Vancouver, ISO, and other styles
39

"Transaction replication in mobile environments." Chinese University of Hong Kong, 1996. http://library.cuhk.edu.hk/record=b5888779.

Full text
Abstract:
by Lau Wai Kwong.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1996.
Includes bibliographical references (leaves 99-102).
Abstract --- p.ii
Acknowledgements --- p.iv
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Limitations of the Mobile Computing Environments --- p.2
Chapter 1.2 --- Applications of Transaction Replication in Mobile Environments --- p.5
Chapter 1.3 --- Motivation for Transaction Replication in Mobile Environments --- p.5
Chapter 1.4 --- Major Simulation Results --- p.6
Chapter 1.5 --- Roadmap to the Thesis --- p.7
Chapter 2 --- Previous and Related Research --- p.8
Chapter 2.1 --- File Systems --- p.8
Chapter 2.1.1 --- Management of Replicated Files --- p.8
Chapter 2.1.2 --- Disconnected Operations --- p.10
Chapter 2.2 --- Database Management --- p.12
Chapter 2.2.1 --- Data Replication Schemes --- p.12
Chapter 2.2.2 --- Cache Invalidation and Query Processing --- p.15
Chapter 2.2.3 --- Transaction Management in Mobile Environments --- p.17
Chapter 3 --- System Model and Assumptions --- p.21
Chapter 3.1 --- System Architecture --- p.21
Chapter 3.2 --- Transaction and Data Model --- p.23
Chapter 3.3 --- One-copy Serializability --- p.25
Chapter 3.4 --- Assumptions --- p.27
Chapter 4 --- Transaction Replication in a Mobile Environment --- p.29
Chapter 4.1 --- Read-only Public Transactions --- p.30
Chapter 4.1.1 --- Data Broadcasting --- p.31
Chapter 4.1.2 --- Cache Update --- p.33
Chapter 4.1.3 --- Cache Miss --- p.36
Chapter 4.1.4 --- Execution of Read-only Public Transactions --- p.37
Chapter 4.2 --- R/W Public Transactions --- p.39
Chapter 4.3 --- Correctness Argument --- p.41
Chapter 4.3.1 --- Correctness Proof --- p.43
Chapter 4.4 --- Extension to Support Partition Failures --- p.47
Chapter 5 --- Design and Implementation of the Simulation --- p.49
Chapter 5.1 --- CSIM Language --- p.49
Chapter 5.2 --- Simulation Components --- p.50
Chapter 5.2.1 --- Fixed Network --- p.50
Chapter 5.2.2 --- Mobile Host --- p.50
Chapter 5.2.3 --- Wireless Channel --- p.51
Chapter 5.2.4 --- Database and Transactions --- p.52
Chapter 5.3 --- A Lock-based Scheme --- p.53
Chapter 5.4 --- Graphing --- p.54
Chapter 6 --- Results and Analysis --- p.55
Chapter 6.1 --- Results Dissection --- p.55
Chapter 6.2 --- Performance of the Scheme --- p.56
Chapter 6.2.1 --- Parameters Setting --- p.56
Chapter 6.2.2 --- Experiments and Results --- p.59
Chapter 6.3 --- Comparison with the Lock-based Scheme --- p.78
Chapter 6.3.1 --- Parameters Setting --- p.79
Chapter 6.3.2 --- Experiments and Results --- p.80
Chapter 7 --- Conclusions and Future Work --- p.93
Chapter 7.1 --- Conclusions --- p.93
Chapter 7.2 --- Future Work --- p.94
Chapter A --- Implementation Details --- p.96
Bibliography --- p.99
APA, Harvard, Vancouver, ISO, and other styles
40

徐敏原. "Information Criterion for the analysis of Factorial Experiment without Replication." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/59916709629835899999.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

"Mental contamination: a replication and extension of the "dirty kiss" experiment." Thesis, 2010. http://library.cuhk.edu.hk/record=b6075007.

Full text
Abstract:
Discussion: This study aims at expanding the understanding of mental contamination. First, the dirty kiss experiment is independently replicated in a Chinese population. Second, contact contamination and mental contamination are found to be separable and do not interact with each other. This underscores the independence of the two forms of contamination. Third, betrayal is shown to evoke mental contamination. Discussion has been made on the potential link between psychological violation, morality and mental contamination.
Mental contamination, an important phenomenon in OCD, refers to a sense of dirtiness without any contact with objectively dirty contaminant. However, the concept of mental contamination has not been thoroughly researched and there is an impending need for a psychological model to explain the phenomenon.
Method: Participants were assessed on questionnaires after imagining a non-consensual kiss or betrayal.
Objectives: The overall goal of this study is to enhance our understanding about mental contamination. Based on an experimental paradigm developed by Fairbrother, Newth, and Rachman (2005), three experiments are designed. The first experiment aims at replicating the results of the original study in local Chinese women. The second experiment examines the relationship between contact and mental contamination. The third experiment investigates the presence of mental contamination in persons experiencing betrayal.
Results: In Experiment 1, with an imagined non-consensual kiss, feeling of dirtiness, urge to wash and negative emotions were reproduced. In Experiment 2, it illustrated that either kissing a physically dirty looking man or being kissed non-consensually would experience stronger feeling of dirtiness, urge to wash and negative emotions. The last experiment showed that an imagined betrayal, a form of psychological violation, also induced a feeling of dirtiness, washing urge and negative emotions as with an imagined non-consensual kiss.
Three different pools of adult female participants were recruited for each experiment. In Experiment 1, 72 participants were recruited and randomly assigned to either a consensual kiss or a non-consensual kiss condition. In Experiment 2, 122 participants were recruited and randomly assigned to one of the four conditions. In Experiment 3, a total of 64 participants were recruited and randomly assigned to either non-betrayal or betrayal condition.
Kwok, Pui Ling Amy.
Adviser: Patrick Leung.
Source: Dissertation Abstracts International, Volume: 73-01, Section: B, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (leaves 166-177).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract and appendixes 1-3, 5-8 also in Chinese.
APA, Harvard, Vancouver, ISO, and other styles
42

Barsoum, Ayad Fekry. "Replication, Security, and Integrity of Outsourced Data in Cloud Computing Systems." Thesis, 2013. http://hdl.handle.net/10012/7348.

Full text
Abstract:
In the current era of digital world, the amount of sensitive data produced by many organizations is outpacing their storage ability. The management of such huge amount of data is quite expensive due to the requirements of high storage capacity and qualified personnel. Storage-as-a-Service (SaaS) offered by cloud service providers (CSPs) is a paid facility that enables organizations to outsource their data to be stored on remote servers. Thus, SaaS reduces the maintenance cost and mitigates the burden of large local data storage at the organization's end. For an increased level of scalability, availability and durability, some customers may want their data to be replicated on multiple servers across multiple data centers. The more copies the CSP is asked to store, the more fees the customers are charged. Therefore, customers need to have a strong guarantee that the CSP is storing all data copies that are agreed upon in the service contract, and these copies remain intact. In this thesis we address the problem of creating multiple copies of a data file and verifying those copies stored on untrusted cloud servers. We propose a pairing-based provable multi-copy data possession (PB-PMDP) scheme, which provides an evidence that all outsourced copies are actually stored and remain intact. Moreover, it allows authorized users (i.e., those who have the right to access the owner's file) to seamlessly access the file copies stored by the CSP, and supports public verifiability. We then direct our study to the dynamic behavior of outsourced data, where the data owner is capable of not only archiving and accessing the data copies stored by the CSP, but also updating and scaling (using block operations: modification, insertion, deletion, and append) these copies on the remote servers. We propose a new map-based provable multi-copy dynamic data possession (MB-PMDDP) scheme that verifies the intactness and consistency of outsourced dynamic multiple data copies. To the best of our knowledge, the proposed scheme is the first to verify the integrity of multiple copies of dynamic data over untrusted cloud servers. As a complementary line of research, we consider protecting the CSP from a dishonest owner, who attempts to get illegal compensations by falsely claiming data corruption over cloud servers. We propose a new cloud-based storage scheme that allows the data owner to benefit from the facilities offered by the CSP and enables mutual trust between them. In addition, the proposed scheme ensures that authorized users receive the latest version of the outsourced data, and enables the owner to grant or revoke access to the data stored by cloud servers.
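A toy illustration of the challenge-response idea behind provable data possession that the abstract builds on: the owner keeps only a key, challenges randomly chosen blocks of each stored copy, and checks tags bound to the copy number and block index. This is an HMAC-based sketch for intuition only; it is not the pairing-based PB-PMDP scheme, which additionally supports public verifiability and dynamic data.

    # Toy challenge-response check that a given copy of an outsourced file is intact.
    import hmac, hashlib, os, random

    def tag_blocks(key, copy_index, blocks):
        """Owner-side: one MAC per block, bound to the copy number and block index."""
        return [hmac.new(key, b"%d|%d|" % (copy_index, i) + blk, hashlib.sha256).digest()
                for i, blk in enumerate(blocks)]

    def prove(blocks, challenged):                  # server-side: return challenged blocks
        return {i: blocks[i] for i in challenged}

    def verify(key, copy_index, proof, tags):       # owner-side: recompute and compare MACs
        return all(hmac.compare_digest(
                       hmac.new(key, b"%d|%d|" % (copy_index, i) + blk, hashlib.sha256).digest(),
                       tags[i])
                   for i, blk in proof.items())

    key = os.urandom(32)
    blocks = [os.urandom(64) for _ in range(16)]    # copy 0 of the outsourced file
    tags = tag_blocks(key, 0, blocks)
    challenge = random.sample(range(16), 4)
    print(verify(key, 0, prove(blocks, challenge), tags))   # True while the copy is intact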
APA, Harvard, Vancouver, ISO, and other styles
43

Li, Szu-Yi, and 李思儀. "A Heuristic Data Replication Algorithm With Scalability Consideration In Cloud Computing Systems." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/07638429847841375827.

Full text
Abstract:
碩士
輔仁大學
資訊工程學系
100
In cloud computing systems, many applications perform intensive disk data accesses. To keep applications running after data corruption, we propose a heuristic, polynomial-time data replication algorithm for cloud computing systems. In such systems, the number of nodes is usually large, and it is impossible for all nodes to have the same performance; that is, the system is inherently heterogeneous. In addition, the applications running in the system have different QoS requirements. Unlike previous replication algorithms used in cloud computing systems, the proposed algorithm explicitly considers node heterogeneity and application QoS. Because the number of nodes is usually large, the algorithm also addresses node scalability, adopting a node combination technique to deal with this issue. Finally, we perform simulation experiments to compare the proposed replication algorithm with previous replication algorithms. The simulation results show that the proposed algorithm performs better across various metrics.
APA, Harvard, Vancouver, ISO, and other styles
44

Yen, Chi Ming, and 嚴智民. "An Efficient Replication Algorithm for Cloud Computing Systems with QoS and Heterogeneous Considerations." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/38918666997775786317.

Full text
Abstract:
碩士
輔仁大學
資訊工程學系
99
In cloud computing, a huge amount of data resides on storage devices, and ensuring that this data remains available when storage devices fail is an important issue. Data replication is a widely used technique for this purpose. This thesis presents an efficient data replication algorithm for cloud computing systems. The algorithm considers the QoS requirements of applications (e.g. response time) and the heterogeneous characteristics of devices (e.g. different transmission rates, data access times, and storage capacities). The proposed algorithm has two stages. In the pre-processing stage, data replication requests are modeled as a weighted bipartite graph that captures the QoS requirements and device heterogeneity. In the post-processing stage, the weighted bipartite graph is extended into a flow graph; using this flow graph, the data replication problem can be solved optimally by transforming it into the well-known minimum cost flow problem. Finally, simulation experiments demonstrate the effectiveness of the proposed algorithm in terms of replication cost and recovery time.
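The reduction described above can be sketched with an off-the-shelf min-cost-flow solver: replication requests on one side, candidate storage nodes on the other, edge costs reflecting QoS-weighted access cost, and node capacities bounding how many replicas a device may host. The instance below is a made-up toy, not the thesis's cost model.

    # Toy instance of the flow formulation: place 3 replica requests on heterogeneous
    # nodes at minimum cost. Costs and capacities are invented numbers.
    import networkx as nx

    G = nx.DiGraph()
    requests = ["req1", "req2", "req3"]
    nodes = {"node_fast": {"cap": 1, "cost": 1},     # fast but small device
             "node_slow": {"cap": 3, "cost": 4}}     # slower but larger device

    G.add_node("src", demand=-len(requests))
    G.add_node("sink", demand=len(requests))
    for r in requests:
        G.add_edge("src", r, capacity=1, weight=0)
        for n, attrs in nodes.items():               # add an edge only if the node meets the QoS
            G.add_edge(r, n, capacity=1, weight=attrs["cost"])
    for n, attrs in nodes.items():
        G.add_edge(n, "sink", capacity=attrs["cap"], weight=0)

    flow = nx.min_cost_flow(G)                       # optimal replica placement
    placement = {r: n for r in requests for n, f in flow[r].items() if f}
    print(placement)   # e.g. {'req1': 'node_fast', 'req2': 'node_slow', 'req3': 'node_slow'}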
APA, Harvard, Vancouver, ISO, and other styles
45

Ko, Chun-Chuan, and 柯俊全. "Computing the Number of Schemata to Use in a Code-Schema Development Experiment." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/x26365.

Full text
Abstract:
碩士
中原大學
資訊工程研究所
92
One way to summarize the cognitive development theory of Piaget may be as follows. When humans learn or try to solve problems, cognitive structures (called schemas) may form as a result. The development of schemas constantly goes through two processes : assimilation and accommodation. Assimilation occurs when new experiences “match with” existing schemas. Accommodation occurs when new experiences are in conflict with existing schemas, and this essentially means that existing schemas would have to be adjusted and/or modified in order that the new experiences can be successfully “accommodated”. In order for schemas to be developed, we must be able to abstract from our experiences. Though different individuals may have different abilities for making abstractions, the complexity of the real environment also plays a part in this. In fact, the inherent complexity of the real environment can make it very difficult, if not impossible, for an individual to make abstractions. Mainly, this is due to the variety and amount of information that we receive in a fixed time interval. If the varieties are many and the amount is huge, then obviously it can be very difficult for an individual to assess the relatedness of all the given information. And if the individual has problems assessing the relatedness of all the given information, then surely the individual will not be able to abstract schemas from the given information. To help the learner improve his/her cognitive abilities for making abstractions, we constructed a computer-assisted learning system called CSD (for Code Schema Development). Compared with the real environment in which the complexity of the problems-to-solve is potentially huge and uncontrollable, CSD is an artificial problem-solving environment in which the problem complexity is controllable and can be made very small. The goal of CSD is to control the complexity of the assigned problems (all programming exercises) so that (1) the learner can successfully develop the intended code schemas, and (2) the learner’s cognitive abilities for developing code schemas can be improved. In order for the learner’s cognitive abilities for developing code schemas to be improved, we need to increase the problem complexity as much as possible. But on the other hand, the complexity of the problems should not go beyond the learner’s current cognitive abilities. The purpose of this research is to find an acceptable way of doing so. Basically, we propose two methods for adjusting, for each learner, the problem complexity of CSD. One is based on the learner’s performance, and is called the performance method. The other is based on the learner’s learning effort, and is called the LE method. Our preliminary findings suggest that the performance method seem to satisfy our criteria of stability. However, more research is needed in order to fully justify the acceptability of these two methods.
APA, Harvard, Vancouver, ISO, and other styles
46

Plavec, Franjo. "Stream Computing on FPGAs." Thesis, 2010. http://hdl.handle.net/1807/24855.

Full text
Abstract:
Field Programmable Gate Arrays (FPGAs) are programmable logic devices used for the implementation of a wide range of digital systems. In recent years, there has been an increasing interest in design methodologies that allow high-level design descriptions to be automatically implemented in FPGAs. This thesis describes the design and implementation of a novel compilation flow that implements circuits in FPGAs from a streaming programming language. The streaming language supported is called FPGA Brook, and is based on the existing Brook and GPU Brook languages, which target streaming multiprocessors and graphics processing units (GPUs), respectively. A streaming language is suitable for targeting FPGAs because it allows system designers to express applications in a way that exposes parallelism, which can then be exploited through parallel hardware implementation. FPGA Brook supports replication, which allows the system designer to trade-off area for performance, by specifying the parts of an application that should be implemented as multiple hardware units operating in parallel, to achieve desired application throughput. Hardware units are interconnected through FIFO buffers, which effectively utilize the small memory modules available in FPGAs. The FPGA Brook design flow uses a source-to-source compiler, and combines it with a commercial behavioural synthesis tool to generate hardware. The source-to-source compiler was developed as a part of this thesis and includes novel algorithms for implementation of complex reductions in FPGAs. The design flow is fully automated and presents a user-interface similar to traditional software compilers. A suite of benchmark applications was developed in FPGA Brook and implemented using our design flow. Experimental results show that applications implemented using our flow achieve much higher throughput than the Nios II soft processor implemented in the same FPGA device. Comparison to the commercial C2H compiler from Altera shows that while simple applications can be effectively implemented using the C2H compiler, complex applications achieve significantly better throughput when implemented by our system. Performance of many applications implemented using our design flow would scale further if a larger FPGA device were used. The thesis demonstrates that using an automated design flow to implement streaming applications in FPGAs is a promising methodology.
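The replication idea in FPGA Brook — duplicating a kernel and connecting the copies with FIFO buffers to raise throughput — can be modelled in software with queues and worker threads. The sketch below is only a behavioural analogy under that assumption; the actual flow compiles Brook kernels into hardware units, not Python threads.

    # Software analogy of a replicated stream kernel: a splitter feeds N copies of
    # the same kernel through FIFOs, and a joiner reorders the results.
    import threading, queue

    def kernel(item):                 # the per-element computation being replicated
        return item * item

    def worker(fifo_in, fifo_out):
        while True:
            i, item = fifo_in.get()
            if item is None:          # sentinel: stop this replica
                break
            fifo_out.put((i, kernel(item)))

    REPLICAS = 4
    fifo_in, fifo_out = queue.Queue(maxsize=8), queue.Queue()
    threads = [threading.Thread(target=worker, args=(fifo_in, fifo_out)) for _ in range(REPLICAS)]
    for t in threads: t.start()

    stream = list(range(16))
    for i, x in enumerate(stream):    # splitter
        fifo_in.put((i, x))
    for _ in threads:
        fifo_in.put((-1, None))       # one sentinel per replica
    results = [fifo_out.get() for _ in stream]
    print([v for _, v in sorted(results)])   # joiner: restore stream order
    for t in threads: t.join()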
APA, Harvard, Vancouver, ISO, and other styles
47

Chang, Li-Chieh, and 張立傑. "A Dynamic Data Replication Mechanism Using Blocking Probability And Reference Queue for Achieving Load Balance in Cloud Computing Environment." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/3un4p7.

Full text
Abstract:
Master's thesis
Chaoyang University of Technology
Department of Information and Communication
102
With the explosive development of the Internet, applications of network services continue to grow. To provide convenient and fast network services, cloud providers need more stable, higher-capacity systems; cloud computing systems, consisting of large numbers of processors and memories, high-speed networks, and various applications, are offered to users over the Internet. Cloud computing is a powerful, high-capacity paradigm because it can satisfy diverse demands and share resources among many users. Cloud systems are also characterized by scalability, efficiency, fault tolerance, and large storage capacity. Consequently, several Internet companies, such as Google, IBM (Blue Cloud), and Amazon, have invested heavily in cloud computing systems. To satisfy user demands, virtualization and data replication can be used to reduce the cost of cloud computing services; in particular, efficient data replication is often applied to reduce node load and improve the reliability of cloud storage systems. This thesis therefore proposes a three-phase data replication access scheme for configuring replicas in a cloud storage system, called the dynamic data replication algorithm (DDRA). The first two phases, the Anti-Blocking Probability Selection (ABPS) phase and the Reference Queue Load Balance (RQLB) phase, direct users to a replica on a suitable service node according to node loads so as to achieve an initial load balance. To further improve access performance and node load balance, dynamic replica configuration is applied in the final phase, the Update Strategy (US) phase. For these reasons, the proposed method can enhance availability and access efficiency and achieve load balance in the proposed hierarchical cloud computing environment.
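The abstract does not spell out the ABPS and RQLB rules, but a minimal Python sketch of the kind of replica selection they describe might look as follows; the threshold value, field names, and tie-breaking rule are assumptions made for illustration, not the thesis's algorithm.

```python
# Hypothetical sketch of the first two DDRA phases described above (ABPS + RQLB).
# Phase 1: drop nodes whose estimated blocking probability is too high.
# Phase 2: among the remaining nodes, choose the shortest reference queue.
from dataclasses import dataclass

@dataclass
class ReplicaNode:
    name: str
    blocking_probability: float  # estimated probability a request is blocked
    reference_queue_len: int     # pending references (requests) on this node

def select_replica(nodes, p_max=0.2):
    """Return the node a user should access under the assumed ABPS/RQLB rules."""
    candidates = [n for n in nodes if n.blocking_probability <= p_max] or nodes
    return min(candidates, key=lambda n: (n.reference_queue_len, n.blocking_probability))

nodes = [ReplicaNode("n1", 0.05, 12), ReplicaNode("n2", 0.30, 3), ReplicaNode("n3", 0.10, 4)]
chosen = select_replica(nodes)
chosen.reference_queue_len += 1   # record the new reference on the chosen node
print(chosen.name)                # -> n3 under these example numbers
```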
APA, Harvard, Vancouver, ISO, and other styles
48

Schürmann, Felix [Verfasser]. "Exploring liquid computing in a hardware adaptation : construction and operation of a neural network experiment / presented by Felix Schürmann." 2005. http://d-nb.info/975516639/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

(11132985), Thamir Qadah. "High-performant, Replicated, Queue-oriented Transaction Processing Systems on Modern Computing Infrastructures." Thesis, 2021.

Find full text
Abstract:
With the shifting landscape of computing hardware architectures and the emergence of new computing environments (e.g., large main-memory systems, hundreds of CPUs, distributed and virtualized cloud-based resources), state-of-the-art designs of transaction processing systems that rely on conventional wisdom suffer from lost performance optimization opportunities. This dissertation challenges conventional wisdom to rethink the design and implementation of transaction processing systems for modern computing environments.

We start by tackling the vertical hardware scaling challenge and propose a deterministic approach to transaction processing on emerging multi-socket, many-core, shared-memory architectures to harness their unprecedented available parallelism. Our proposed priority-based, queue-oriented transaction processing architecture eliminates the transaction contention footprint and uses speculative execution to improve the throughput of centralized deterministic transaction processing systems. We build QueCC and demonstrate up to two orders of magnitude better performance than the state-of-the-art.

We further tackle the horizontal scaling challenge and propose a distributed queue-oriented transaction processing engine that relies on queue-oriented communication to eliminate the traditional overhead of commitment protocols for multi-partition transactions. We build Q-Store, and demonstrate up to 22x improvement in system throughput over the state-of-the-art deterministic transaction processing systems.

Finally, we propose a generalized framework for designing distributed and replicated deterministic transaction processing systems. We introduce the concept of speculative replication to hide the latency overhead of replication. We prototype the speculative replication protocol in QR-Store and perform an extensive experimental evaluation using standard benchmarks. We show that QR-Store can achieve a throughput of 1.9 million replicated transactions per second in under 200 milliseconds, with a replication overhead of 8%-25% compared to non-replicated configurations.
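As a rough illustration of the queue-oriented, deterministic execution model described above, the following Python sketch plans each transaction's operations into per-partition priority queues and then drains each queue in priority order; all class and method names are invented for illustration and are not the dissertation's code.

```python
# Illustrative sketch of a queue-oriented, deterministic execution model.
# A planning phase places each transaction's operations into per-partition
# queues tagged with the transaction's priority; an execution phase drains
# each partition's queue in priority order, so operations on the same
# partition never race and the outcome is deterministic.
import heapq
from collections import defaultdict

class QueueOrientedEngine:
    def __init__(self):
        self.partition_queues = defaultdict(list)  # partition -> heap of operations
        self.store = {}

    def plan(self, priority, txn_id, ops):
        """Planning phase: split a transaction into per-partition operations."""
        for partition, key, value in ops:
            heapq.heappush(self.partition_queues[partition], (priority, txn_id, key, value))

    def execute(self):
        """Execution phase: drain each partition queue in priority order."""
        for partition, q in self.partition_queues.items():
            while q:
                _, txn_id, key, value = heapq.heappop(q)
                self.store[(partition, key)] = value   # apply a write operation

engine = QueueOrientedEngine()
engine.plan(priority=1, txn_id="t1", ops=[(0, "x", 10), (1, "y", 20)])
engine.plan(priority=0, txn_id="t2", ops=[(0, "x", 5)])
engine.execute()
print(engine.store)   # t2 (priority 0) runs before t1 on partition 0, so x ends at 10
```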
APA, Harvard, Vancouver, ISO, and other styles
50

Magradze, Erekle. "Monitoring and Optimization of ATLAS Tier 2 Center GoeGrid." Doctoral thesis, 2016. http://hdl.handle.net/11858/00-1735-0000-0028-86DB-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles