Dissertations / Theses on the topic 'Shared systems'

To see the other types of publications on this topic, follow the link: Shared systems.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Shared systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Gutekunst, Thomas F. "Shared window systems /." [S.l.] : [s.n.], 1995. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=11120.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jacob, Jeremy. "On shared systems." Thesis, University of Oxford, 1987. http://ora.ox.ac.uk/objects/uuid:a17b30b9-eef5-4db2-8420-6df3cf3f8175.

Full text
Abstract:
Most computing systems are shared between users of various kinds. This thesis treats such systems as mathematical objects, and investigates two of their properties: refinement and security. The first is the analysis of the conditions under which one shared system can be replaced by another, the second the determination of a measure of the information flow through a shared system. Under the heading of refinement we show what it means for one shared system to be a suitable replacement for another, both in an environment of co-operating users and in an environment of independent users. Both refinement relations are investigated, and a large example is given to demonstrate the relation for co-operating users. We show how to represent the security of a shared system as an 'inference function', and define several security properties in terms of such functions. A partial order is defined on systems, with the meaning 'at least as secure as'. We generalise inference functions to produce 'security specifications' which can be used to capture the desired degree of security in any shared system. We define what it means for a shared system to meet a security specification and indicate how implementations may be derived from their specifications in some cases. A summary of related work is given.
APA, Harvard, Vancouver, ISO, and other styles
3

Prescher, Daniel. "Shared Service Center : Att införa ett Shared Service Center." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-22481.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chemla, Daniel. "Algorithms for optimizing shared mobility systems." PhD thesis, Université Paris-Est, 2012. http://pastel.archives-ouvertes.fr/pastel-00839521.

Full text
Abstract:
Bike sharing systems have enjoyed growing success all over the world. Several attempts have been made since the 1960s, and the latest developments in ICT have made these systems efficient: people can obtain real-time information about the position of the vehicles. More than 200 cities have already introduced such a system, and the trend continues with the launch of the NYC system in spring 2013. A new avatar of these means of transportation arrived with the introduction of Autolib in Paris at the end of 2011. The objective of this thesis is to propose algorithms that may help to improve the efficiency of these systems. Operating them raises several issues, one of which is the regulation problem: regulation should ensure that the right number of vehicles is present at every station at any time, in order to fulfill the demand for both vehicles and parking racks. This regulation is often carried out by trucks travelling through the city. The issue is crucial, since empty and full stations increase users' dissatisfaction, and finding the optimal strategy for regulating a network appears to be a difficult question. This thesis is divided into two parts. The first one deals with the "static" case, in which the users' impact on the network is neglected; this corresponds to night time or to periods when the system is closed. The operator faces a given distribution of the vehicles and wants it to match a target distribution that is known a priori. The one-truck and multiple-truck balancing problems are addressed, and for each an algorithm is proposed and tested on several instances. To deal with the "dynamic" case, in which users interact with the system, a simulator has been developed. It is used to compare several strategies and to monitor redistribution by trucks. Strategies that rely on incentive policies instead of trucks are also tested: regularly updated prices attached to the stations deter users from parking their vehicle at specified stations. Finally, the question of finding the best initial inventory is also addressed; it corresponds to the case in which no trucks are used during the day. Two local searches are presented, both aiming to minimize the total time lost by users in the system. The results obtained can be used as inputs for the target distributions of the first part. During my thesis, I participated in two EURO/ROADEF challenges, the 2010 edition proposed by EDF and the 2012 edition proposed by Google. In both cases, my team reached the final phase. In 2010, our method ranked fourth among all participants and led to the publication of an article; in 2012, we ranked eighteenth. Both works are included in the appendix.
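To make the static rebalancing problem concrete, the following Python fragment sketches a naive greedy heuristic for the one-truck case. It is purely illustrative: the station data, truck capacity and nearest-station routing rule are assumptions made for the example, not the algorithm developed in the thesis.

```python
# Minimal greedy sketch of single-truck static rebalancing (illustrative only).
# Each station has a current and a target inventory; the truck repeatedly moves
# to the nearest station where it can usefully load or unload bikes.

def greedy_rebalance(stations, dist, truck_capacity, start):
    """stations: dict name -> (current, target); dist: dict (a, b) -> travel time."""
    load, pos, route = 0, start, [start]

    def imbalance(s):  # positive = surplus to pick up, negative = deficit to fill
        cur, tgt = stations[s]
        return cur - tgt

    while any(imbalance(s) != 0 for s in stations):
        # stations where an action is currently possible
        candidates = [s for s in stations
                      if (imbalance(s) > 0 and load < truck_capacity)
                      or (imbalance(s) < 0 and load > 0)]
        if not candidates:
            break  # truck empty and no surplus left (or full and no deficit): stop
        nxt = min(candidates, key=lambda s: dist.get((pos, s), float("inf")))
        cur, tgt = stations[nxt]
        if cur > tgt:                      # pick up surplus bikes
            moved = min(cur - tgt, truck_capacity - load)
            stations[nxt] = (cur - moved, tgt)
            load += moved
        else:                              # drop off bikes at a deficit station
            moved = min(tgt - cur, load)
            stations[nxt] = (cur + moved, tgt)
            load -= moved
        pos = nxt
        route.append(nxt)
    return route
```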
APA, Harvard, Vancouver, ISO, and other styles
5

Beal, Jacob Stuart Michael. "Generating communication systems through shared context." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chemla, Daniel. "Algorithms for optimizing shared mobility systems." Thesis, Paris Est, 2012. http://www.theses.fr/2012PEST1066/document.

Full text
Abstract:
Bike sharing systems have enjoyed growing success all over the world. Several attempts have been made since the 1960s, and the latest developments in ICT have made these systems efficient: people can obtain real-time information about the position of the vehicles. More than 200 cities have already introduced such a system, and the trend continues with the launch of the NYC system in spring 2013. A new avatar of these means of transportation arrived with the introduction of Autolib in Paris at the end of 2011. The objective of this thesis is to propose algorithms that may help to improve the efficiency of these systems. Operating them raises several issues, one of which is the regulation problem: regulation should ensure that the right number of vehicles is present at every station at any time, in order to fulfill the demand for both vehicles and parking racks. This regulation is often carried out by trucks travelling through the city. The issue is crucial, since empty and full stations increase users' dissatisfaction, and finding the optimal strategy for regulating a network appears to be a difficult question. This thesis is divided into two parts. The first one deals with the "static" case, in which the users' impact on the network is neglected; this corresponds to night time or to periods when the system is closed. The operator faces a given distribution of the vehicles and wants it to match a target distribution that is known a priori. The one-truck and multiple-truck balancing problems are addressed, and for each an algorithm is proposed and tested on several instances. To deal with the "dynamic" case, in which users interact with the system, a simulator has been developed. It is used to compare several strategies and to monitor redistribution by trucks. Strategies that rely on incentive policies instead of trucks are also tested: regularly updated prices attached to the stations deter users from parking their vehicle at specified stations. Finally, the question of finding the best initial inventory is also addressed; it corresponds to the case in which no trucks are used during the day. Two local searches are presented, both aiming to minimize the total time lost by users in the system. The results obtained can be used as inputs for the target distributions of the first part. During my thesis, I participated in two EURO/ROADEF challenges, the 2010 edition proposed by EDF and the 2012 edition proposed by Google. In both cases, my team reached the final phase. In 2010, our method ranked fourth among all participants and led to the publication of an article; in 2012, we ranked eighteenth. Both works are included in the appendix.
APA, Harvard, Vancouver, ISO, and other styles
7

Jiang, Jingjing. "Shared-control for systems with constraints." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/33381.

Full text
Abstract:
In this thesis we solve the shared-control problem for three classes of systems: a class of linear mechanical systems, mobile robots and rear-wheel-drive cars, via full state feedback or output feedback, while ensuring that all the state constraints on the closed-loop systems are satisfied. To design the feedback controller for a system with state constraints we first remove all the constraints by changing coordinates through a logarithmic function. Then the back-stepping method is used to design the controller and a Lyapunov-like analysis is used to prove stability properties of the closed-loop system. The shared-control algorithm is based on a hysteresis switch which reduces oscillations when changing the control authority from the human operator to the feedback controller or vice versa. Unlike other shared-control methods, formal properties of the closed-loop systems under shared control have been rigorously established. We start the design of the full state-feedback shared controller with the assumption that the admissible Cartesian configuration set Pa of the system is a time-invariant convex set defined by a group of linear inequalities. The results are then extended to the design of shared controllers via output feedback. In the cases in which only output feedback is available, we can solve the problem either by developing an observer or by 'remodeling' the system. Through system remodeling we are able to deal with any shape of the admissible configuration set Pa, even time-varying ones. Simulation results help to illustrate how the shared controller works and show its effectiveness. The state of the closed-loop system with the shared control never violates the constraints. Experiments done on a mobile robot also demonstrate that the shared-control algorithm works well in practice and meets all safety requirements. In addition, the experimental results match the simulation ones, indicating that the modeling approximations are reasonable and suitable.
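As a purely illustrative aside (not code from the thesis), the constraint-removal idea can be sketched in a few lines: a state x confined to an open interval (a, b) is mapped through a logarithm to an unconstrained variable z, so any controller designed in the z coordinate respects the constraint automatically when mapped back.

```python
import math

# Illustrative sketch: map a state constrained to the open interval (a, b)
# to an unconstrained coordinate z, and back. Any finite z maps to an x
# strictly inside (a, b), so a controller designed in z-coordinates can never
# violate the constraint. The specific transform used in the thesis may
# differ; this shows only the general idea.

def to_unconstrained(x, a, b):
    return math.log((x - a) / (b - x))

def to_constrained(z, a, b):
    return (a + b * math.exp(z)) / (1.0 + math.exp(z))

# round-trip check with invented bounds
x = 0.3
z = to_unconstrained(x, a=0.0, b=1.0)
assert abs(to_constrained(z, 0.0, 1.0) - x) < 1e-9
```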
APA, Harvard, Vancouver, ISO, and other styles
8

Wen, Yuzhong. "Replication of Concurrent Applications in a Shared Memory Multikernel." Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/71813.

Full text
Abstract:
State Machine Replication (SMR) has become the de facto methodology for building replication-based fault-tolerant systems. Current SMR systems usually involve multiple machines, each acting as a replica of the others. However, using multiple machines increases infrastructure cost, in both hardware and power consumption. For tolerating non-critical CPU and memory failures that do not crash the entire machine, there is no need for extra machines; intra-machine replication is a good fit for this scenario. However, current intra-machine replication approaches do not provide strong isolation among the replicas, which allows faults to propagate from one replica to another. In order to provide an intra-machine replication technique with strong isolation, this thesis presents an SMR system on a multikernel OS. We implemented a replication system that is capable of replicating concurrent applications on different kernel instances of a multikernel OS. Modern concurrent applications can be deployed on our system with minimal code modification. Additionally, our system provides two different replication modes between which the user can switch freely according to the application type. Evaluating multiple real-world applications, we show that they can be deployed on our system with 0 to 60 lines of changes to the source code. From the performance perspective, our system introduces only 0.23% to 63.39% overhead compared to non-replicated execution.
Master of Science
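For readers unfamiliar with SMR, the following minimal Python sketch shows the textbook idea the abstract builds on, namely that replicas applying the same deterministic operations in the same order reach the same state; it is not the multikernel system implemented in the thesis.

```python
# Minimal sketch of the state machine replication (SMR) idea: replicas that
# apply the same deterministic operations in the same agreed order reach the
# same state, so one replica can mask the failure of another. This is only
# the textbook concept, not the system described in the thesis.

class Replica:
    def __init__(self):
        self.state = {}

    def apply(self, op):
        kind, key, value = op
        if kind == "put":
            self.state[key] = value
        elif kind == "delete":
            self.state.pop(key, None)

log = [("put", "x", 1), ("put", "y", 2), ("delete", "x", None)]
replicas = [Replica(), Replica()]
for op in log:                 # the same ordered log is applied everywhere
    for r in replicas:
        r.apply(op)
assert replicas[0].state == replicas[1].state == {"y": 2}
```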
APA, Harvard, Vancouver, ISO, and other styles
9

Hysing, Andreas Dreyer. "Parallel Seismic Inversion for Shared Memory Systems." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-11795.

Full text
Abstract:
This thesis will explore how a scientific application for seismic inversion can take advantage of multi-core programming on the x86 architecture. The thesis will focus on the most effective domain divisions, communication patterns and multithreaded scalability. A performance comparison with the original codes will be included, as well as an evaluation of the development effort required for implementing such techniques.
APA, Harvard, Vancouver, ISO, and other styles
10

Bruce, Craig Steven. "Performance optimization for distributed-shared-data systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ32819.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Coe, Paul. "Simulation models of shared-memory multiprocessor systems." Thesis, University of Edinburgh, 2000. http://hdl.handle.net/1842/13437.

Full text
Abstract:
Multiprocessors have often been thought of as the solution to today's ever-increasing computing needs; but they are expensive, complex and difficult to design. This thesis focusses on the development of multiprocessor simulations that would aid the design and evaluation of such systems. The thesis starts by outlining the various possibilities for multiprocessor design and discusses some of the more common problems that must be addressed. A selection of simulation environments and models that have been developed to study complex computer systems are then described. The major problem with these simulation systems is that they generally focus on a small area of multiprocessor systems design in order to produce fast simulations that generate results quickly; consequently they provide very little flexibility and room for exploration. The aim of this project was to design and implement a flexible multiprocessor model within the HASE simulation environment, enabling the designer to explore a large design space with a minimum of effort, focussing more on flexibility and less on simulation speed. A parameterised simulation model has been developed that presents the designer with many design options with which to experiment. The parameters allow simple alternatives to be explored, for example, different component speeds or bus widths, as well as more complicated features, for example, coherence protocols, synchronisation primitives and architecture configurations. The model was designed in a modular manner that allows new parameter values to be incorporated, as well as new implementations of the various entities. To support this new model, the HASE system was extended to provide better support for multiprocessor modelling.
APA, Harvard, Vancouver, ISO, and other styles
12

Molka, Daniel. "Performance Analysis of Complex Shared Memory Systems." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-221729.

Full text
Abstract:
Systems for high performance computing are getting increasingly complex. On the one hand, the number of processors is increasing. On the other hand, the individual processors are getting more and more powerful. In recent years, the latter is to a large extent achieved by increasing the number of cores per processor. Unfortunately, scientific applications often fail to fully utilize the available computational performance. Therefore, performance analysis tools that help to localize and fix performance problems are indispensable. Large scale systems for high performance computing typically consist of multiple compute nodes that are connected via network. Performance analysis tools that analyze performance problems that arise from using multiple nodes are readily available. However, the increasing number of cores per processor that can be observed within the last decade represents a major change in the node architecture. Therefore, this work concentrates on the analysis of the node performance. The goal of this thesis is to improve the understanding of the achieved application performance on existing hardware. It can be observed that the scaling of parallel applications on multi-core processors differs significantly from the scaling on multiple processors. Therefore, the properties of shared resources in contemporary multi-core processors as well as remote accesses in multi-processor systems are investigated and their respective impact on the application performance is analyzed. As a first step, a comprehensive suite of highly optimized micro-benchmarks is developed. These benchmarks are able to determine the performance of memory accesses depending on the location and coherence state of the data. They are used to perform an in-depth analysis of the characteristics of memory accesses in contemporary multi-processor systems, which identifies potential bottlenecks. However, in order to localize performance problems, it also has to be determined to what extent the application performance is limited by certain resources. Therefore, a methodology to derive metrics for the utilization of individual components in the memory hierarchy as well as waiting times caused by memory accesses is developed in the second step. The approach is based on hardware performance counters, which record the number of certain hardware events. The developed micro-benchmarks are used to selectively stress individual components, which can be used to identify the events that provide a reasonable assessment for the utilization of the respective component and the amount of time that is spent waiting for memory accesses to complete. Finally, the knowledge gained from this process is used to implement a visualization of memory related performance issues in existing performance analysis tools. The results of the micro-benchmarks reveal that the increasing number of cores per processor and the usage of multiple processors per node leads to complex systems with vastly different performance characteristics of memory accesses depending on the location of the accessed data. Furthermore, it can be observed that the aggregated throughput of shared resources in multi-core processors does not necessarily scale linearly with the number of cores that access them concurrently, which limits the scalability of parallel applications. It is shown that the proposed methodology for the identification of meaningful hardware performance counters yields useful metrics for the localization of memory related performance limitations.
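As a purely illustrative example of the kind of counter-derived metric discussed above, a component's utilization can be estimated by relating the number of observed events to the maximum number the component could have serviced in the measured interval. The event counts and peak rate below are invented placeholders, not the counters identified in the thesis.

```python
# Illustrative derivation of a utilization metric from hardware counter values.
# Event counts and the peak rate are invented; the thesis uses micro-benchmarks
# to determine which real counter events track each component.

def utilization(event_count, peak_events_per_cycle, cycles):
    """Fraction of the component's peak throughput actually used."""
    return event_count / (peak_events_per_cycle * cycles)

# e.g. a memory controller that can retire 1 request per cycle,
# with 3.2e9 observed requests over 8e9 cycles -> 40 % utilized
print(f"{utilization(3.2e9, 1.0, 8e9):.0%}")
```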
APA, Harvard, Vancouver, ISO, and other styles
13

Zhang, Ping, and Andrew Dillon. "HCI and MIS: shared concerns (Editorial)." Elsevier, 2003. http://hdl.handle.net/10150/106118.

Full text
Abstract:
The fields of HCI and MIS share many concerns but have traditionally not shared literatures, theories and results. This special issue is a first attempt at bridging the disciplinary divide. In this paper, the history of both fields is briefly outlined and reasons for the independence of each are examined. The criteria for paper inclusion are outlined and each paper is briefly introduced.
APA, Harvard, Vancouver, ISO, and other styles
14

Prytz, Erik. "Performance and Shared Understanding in Mixed C2-Systems." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-56430.

Full text
Abstract:
OBJECTIVE: This thesis had two purposes. The main one was to examine how mixed conditions affect a Command & Control (C2) system, particularly in terms of shared understanding, situation awareness (SA), performance and workload. Mixed conditions refer here to when subsystems of a larger C2-system differ in terms of capabilities, particularly those capabilities influencing the understanding of a situation, e.g. sensors or communication, which could affect the C2-capabilities when working toward a common goal. The second purpose of this thesis was to investigate a newly developed tool for measuring shared understanding, Shared Priorities, in terms of validity and usefulness. METHOD: A number of hypotheses were constructed and investigated by a controlled experiment using a microworld, C3Fire, where two-man teams fought a simulated forest fire. The independent variable manipulated was the type of support system used. One condition used one computer interface per participant, the second was the mixed condition, where one participant used the computer interface and one used a paper map, and the last condition had both participants using a paper map. Different questionnaires developed to measure SA, workload, etc. were used to measure the dependent variables. RESULTS: The statistical analysis performed on the collected data showed that performance and SA were comparatively better when both participants used the computer interface than in the mixed condition, which in turn was better than when both participants used a paper map. For workload and teamwork, no differences between the mixed condition and the dual map condition were found. As for the Shared Priorities measurement, no differences were found between any of the conditions. CONCLUSION: A C2-system in which some additional capabilities are introduced for some but not all subsystems may not benefit in some regards, e.g. workload and teamwork, but could improve in others, e.g. SA and performance. A Structural Equation Model (SEM) shows that the theoretical constructs of SA, workload, teamwork and performance are related and affect each other, so that the workload of the system negatively affects the teamwork and SA, while the teamwork may affect SA positively and a high SA enables high performance.
APA, Harvard, Vancouver, ISO, and other styles
15

Gong, Yan. "Locking granularity in shared-nothing parallel database systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ36879.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Ruppert, Eric. "The consensus power of shared-memory distributed systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0028/NQ49848.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Wahbe, Andrew A. "Linearizable shared objects for asynchronous message passing systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0024/MQ50378.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Hsieh, Wilson Cheng-Yi. "Dynamic computation migration in distributed shared memory systems." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36635.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Vita.
Includes bibliographical references (p. 123-131).
by Wilson Cheng-Yi Hsieh.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
19

Kouzoupis, Antonios. "High performance shared state schedulers." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-196145.

Full text
Abstract:
Large organizations and research institutes store huge volumes of data nowadays. In order to gain any valuable insights, distributed processing frameworks over a cluster of computers are needed. Apache Hadoop is the prominent framework for distributed storage and data processing. At SICS Swedish ICT we are building Hops, a new distribution of Apache Hadoop relying on a distributed, highly available MySQL Cluster NDB to improve performance. Hops-YARN is the resource management framework of Hops, which introduces distributed resource management and load-balances the tracking of resources in a cluster. In Hops-YARN we make heavy use of the back-end database, storing all the ResourceManager metadata and incoming RPCs to provide high fault tolerance and very short recovery time; the NDB cluster's Event API is also used so that the ResourceManager can communicate with the distributed ResourceTrackers. This project aims at optimizing the mechanisms used for persisting metadata in NDB, both in terms of transactional commit time and in terms of pre-processing the metadata, while guaranteeing the consistency of the ResourceManager's state. Under no condition should the in-memory RM state diverge from the state stored in NDB. With these goals in mind, several solutions were examined that improved the performance of the system, making Hops-YARN comparable to Apache YARN with the extra benefits of high fault tolerance and short recovery time. The solutions proposed in this thesis project improve the pure commit time of a transaction to the MySQL Cluster as well as the pre-processing and parallelism of our Transaction Manager. The results indicate that the performance of Hops increased dramatically, utilizing more resources on a cluster with thousands of machines. Increasing cluster utilization by a few per cent can save organizations a large amount of money.
APA, Harvard, Vancouver, ISO, and other styles
20

Ramesh, Bharath. "Samhita: Virtual Shared Memory for Non-Cache-Coherent Systems." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/23687.

Full text
Abstract:
Among the key challenges of computing today are the emergence of many-core architectures and the resulting need to effectively exploit explicit parallelism. Indeed, programmers are striving to exploit parallelism across virtually all platforms and application domains. The shared memory programming model effectively addresses the parallelism needs of mainstream computing (e.g., portable devices, laptops, desktop, servers), giving rise to a growing ecosystem of shared memory parallel techniques, tools, and design practices. However, to meet the extreme demands for processing and memory of critical problem domains, including scientific computation and data intensive computing, computing researchers continue to innovate in the high-end distributed memory architecture space to create cost-effective and scalable solutions. The emerging distributed memory architectures are both highly parallel and increasingly heterogeneous. As a result, they do not present the programmer with a cache-coherent view of shared memory, either across the entire system or even at the level of an individual node. Furthermore, it remains an open research question which programming model is best for the heterogeneous platforms that feature multiple traditional processors along with accelerators or co-processors. Hence, we have two contradicting trends. On the one hand, programming convenience and the presence of shared memory call for a shared memory programming model across the entire heterogeneous system. On the other hand, increasingly parallel and heterogeneous nodes lacking cache-coherent shared memory call for a message passing model. In this dissertation, we present the architecture of Samhita, a distributed shared memory (DSM) system that addresses the challenge of providing shared memory for non-cache-coherent systems. We define regional consistency (RegC), the memory consistency model implemented by Samhita. We present performance results for Samhita on several computational kernels and benchmarks, on both cluster supercomputers and heterogeneous systems. The results demonstrate the promising potential of Samhita and the RegC model, and include the largest scale evaluation by a significant margin for any DSM system reported to date.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
21

Krishnaswamy, Vijaykumar. "Shared state management for time-sensitive distributed applications." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/8197.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Argile, Andrew Duncan Stuart. "Distributed processing in decision support systems." Thesis, Nottingham Trent University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.259647.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Costa, Prats Juan José. "Efficient openMP over sequentially consistent distributed shared memory systems." Doctoral thesis, Universitat Politècnica de Catalunya, 2011. http://hdl.handle.net/10803/81012.

Full text
Abstract:
Nowadays clusters are one of the most used platforms in High Performance Computing, and most programmers use the Message Passing Interface (MPI) library to program their applications on these distributed platforms to get maximum performance, although it is a complex task. On the other hand, OpenMP has been established as the de facto standard for programming applications on shared memory platforms because it is easy to use and obtains good performance without too much effort. So, could it be possible to join both worlds? Could programmers use the ease of OpenMP on distributed platforms? Many researchers think so. One of the ideas developed for this purpose is distributed shared memory (DSM), a software layer on top of a distributed platform that gives the applications an abstract shared memory view. Even though it seems a good solution, it also has some drawbacks: the memory coherence between the nodes in the platform is difficult to maintain (complex management, scalability issues, high overhead, among others), and the latency of remote-memory accesses can be orders of magnitude greater than on a shared bus due to the interconnection network. This research therefore improves the performance of OpenMP applications executed on distributed memory platforms using a DSM with sequential consistency, thoroughly evaluating the results from the NAS parallel benchmarks. The vast majority of designed DSMs use a relaxed consistency model because it avoids some major problems in the area. In contrast, we use a sequential consistency model because we think that exposing these potential problems, which are otherwise hidden, may allow solutions to be found and then applied to both models. The main idea behind this work is that both runtimes, OpenMP and the DSM layer, should cooperate to achieve good performance; otherwise they interfere with each other, degrading the final performance of applications. We develop three different contributions to improve the performance of these applications: (a) a technique to avoid false sharing at runtime, (b) a technique to mimic the MPI behaviour, where produced data is forwarded to its consumers and, finally, (c) a mechanism to avoid the network congestion due to the DSM coherence messages. The NAS Parallel Benchmarks are used to test the contributions. The results of this work show that the severity of the false-sharing problem depends on each application. Another result is that moving the data flow out of the critical path and using techniques that forward data as early as possible, similar to MPI, benefits the final application performance. Additionally, this data movement is usually concentrated at single points and affects the application performance due to the limited bandwidth of the network. Therefore it is necessary to provide mechanisms that allow the distribution of this data through the computation time, using an otherwise idle network. Finally, the results show that the proposed contributions improve the performance of OpenMP applications in this kind of environment.
APA, Harvard, Vancouver, ISO, and other styles
24

Wu, Yunhui. "Agent behavior in peer-to-peer shared ride systems /." Connect to thesis, 2007. http://eprints.unimelb.edu.au/archive/00003214.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Mohindra, Ajay. "Issues in the design of distributed shared memory systems." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/9123.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Zhang, Guowei Ph D. Massachusetts Institute of Technology. "Architectural support to exploit commutativity in shared-memory systems." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106073.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 57-64).
Parallel systems are limited by the high costs of communication and synchronization. Exploiting commutativity has historically been a fruitful avenue to reduce traffic and serialization. This is because commutative operations produce the same final result regardless of the order they are performed in, and therefore can be processed concurrently and without communication. Unfortunately, software techniques that exploit commutativity, such as privatization and semantic locking, incur high runtime overheads. These overheads offset the benefit and thereby limit the applicability of software techniques. To avoid high overheads, it would be ideal to exploit commutativity in hardware. In fact, hardware already provides much of the functionality that is required to support commutativity. For instance, private caches can buffer and coalesce multiple updates. However, current memory hierarchies can understand only reads and writes, which prevents hardware from recognizing and accelerating commutative operations. The key insight this thesis develops is that, with minor hardware modifications and minimal extra complexity, cache coherence protocols, the key component of communication and synchronization in shared-memory systems, can be extended to allow local and concurrent commutative operations. This thesis presents two techniques that leverage this insight to exploit commutativity in hardware. First, Coup provides architectural support for a limited number of single-instruction commutative updates, such as addition and bitwise logical operations. Coup allows multiple private caches to simultaneously hold update-only permission to the same cache line. Caches with update-only permission can locally buffer and coalesce updates to the line, but cannot satisfy read requests. Upon a read request, Coup reduces the partial updates buffered in private caches to produce the final value. Second, CommTM is a commutativity-aware hardware transactional memory (HTM) that supports an even broader range of multi-instruction, semantically commutative operations, such as set insertions and ordered puts. CommTM extends the coherence protocol with a reducible state tagged with a user-defined label. Multiple caches can hold a given line in the reducible state with the same label, and transactions can implement arbitrary user-defined commutative operations through labeled loads and stores. These commutative operations proceed concurrently, without triggering conflicts or incurring any communication. A non-commutative operation (e.g., a conventional load or store) triggers a user-defined reduction that merges the different cache lines and may abort transactions with outstanding reducible updates. Coup and CommTM reduce communication and synchronization in many challenging parallel workloads. At 128 cores, Coup accelerates state-of-the-art implementations of update-heavy algorithms by up to 2.4x, and CommTM outperforms a conventional eager-lazy HTM by up to 3.4x and reduces or eliminates wasted work due to transactional aborts.
by Guowei Zhang.
S.M.
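The benefit of buffering commutative updates can be illustrated with a small software analogy (it is not the hardware protocol itself): per-thread partial counters are updated locally without any sharing and are only combined when a precise value is read.

```python
from threading import Thread

# Software analogy for the idea behind Coup: commutative updates (here,
# additions) are buffered in per-thread partial values and only combined
# ("reduced") when the exact value is needed. The real mechanism operates in
# the cache coherence protocol; this sketch only illustrates why commutativity
# removes the need for communication on every update.

NUM_THREADS = 4
partials = [0] * NUM_THREADS          # one private partial sum per thread

def worker(tid, n):
    local = 0
    for _ in range(n):
        local += 1                    # commutative update, no sharing
    partials[tid] = local

threads = [Thread(target=worker, args=(t, 100_000)) for t in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sum(partials)                 # reduction happens only at the read
assert total == NUM_THREADS * 100_000
```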
APA, Harvard, Vancouver, ISO, and other styles
27

Garside, Jamie. "Real-time prefetching on shared-memory multi-core systems." Thesis, University of York, 2015. http://etheses.whiterose.ac.uk/10711/.

Full text
Abstract:
In recent years, there has been a growing trend towards using multi-core processors in real-time systems to cope with the rising computation requirements of real-time tasks. Coupled with this, the rising memory requirements of these tasks push demand beyond what can be provided by small, private on-chip caches, requiring the use of larger, slower off-chip memories such as DRAM. Due to the cost, power requirements and complexity of these memories, they are typically shared between all of the tasks within the system. In order for the execution time of these tasks to be bounded, the response time of the memory and the interference from other tasks also need to be bounded. While there is a great amount of current research on bounding this interference, one popular method is to effectively partition the available memory bandwidth between the processors in the system. Of course, as the number of processors increases, so does the worst-case blocking, and worst-case blocking times quickly grow with the number of processors. It is difficult to further optimise the arbitration scheme; instead, this scaling problem needs to be approached from another angle. Prefetching has previously been shown to improve the execution time of tasks by speculatively issuing memory accesses ahead of time for items which may be useful in the near future, although these prefetchers are typically not used in real-time systems due to their unpredictable nature. Instead, this work presents a framework by which a prefetcher can be safely used alongside a composable memory arbiter, a predictable prefetching scheme, and finally a method by which this predictable prefetcher can be used to improve the worst-case execution time of a running task.
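The linear growth of worst-case blocking mentioned above can be illustrated with a back-of-the-envelope calculation. Under a simple round-robin (TDM) arbiter that grants each of N processors one memory slot in turn, a request issued just after its slot has passed must wait for every other processor's slot; the slot length used below is an invented figure for illustration only.

```python
# Illustrative worst-case blocking under a simple round-robin / TDM memory
# arbiter: a request issued just after its processor's slot has passed must
# wait for every other processor's slot before being served.

SLOT_NS = 50  # assumed length of one memory access slot, in nanoseconds

def worst_case_blocking(num_cores, slot_ns=SLOT_NS):
    # the requester may have to wait for all other cores' slots
    return (num_cores - 1) * slot_ns

for cores in (2, 4, 8, 16):
    print(f"{cores:2d} cores -> worst-case blocking {worst_case_blocking(cores)} ns")
```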
APA, Harvard, Vancouver, ISO, and other styles
28

Kaiser, Alexander. "Monotonicity in shared-memory program verification." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:1d16b4b5-524a-40db-b7bf-062374f8679c.

Full text
Abstract:
Predicate abstraction is a key enabling technology for applying model checkers to programs written in mainstream languages. It has been used very successfully for debugging sequential system-level C code. Although model checking was originally designed for analysing concurrent systems, there is little evidence of fruitful applications of predicate abstraction to shared-variable concurrent software. The goal of the present thesis is to close this gap. We propose an algorithmic solution implementing predicate abstraction that targets safety properties in non-recursive programs executed by an unbounded number of threads, which communicate via shared memory or higher-level mechanisms, such as mutexes and broadcasts. As system-level code makes frequent use of such primitives, their correct usage is critical to ensure reliability. Monotonicity - the property that thread actions remain executable when other threads are added to the current global state - is a natural and common feature of human-written concurrent software. It is also useful: if every thread’s memory is finite, monotonicity often guarantees the decidability of safety properties even when the number of running threads is unspecified. In this thesis, we show that the process of obtaining finite-data thread abstractions for model checking is not always compatible with monotonicity. Predicate-abstracting certain mainstream asynchronous software such as the ticket busy-wait lock algorithm results in non-monotone multi-threaded Boolean programs, despite the monotonicity of the input program: the monotonicity is lost in the abstraction. As a result, the unbounded thread Boolean programs do not give rise to well quasi-ordered systems [1], for which sound and complete safety checking algorithms are available. In fact, safety checking turns out to be undecidable for the obtained class of abstract programs, despite the finiteness of the individual threads’ state spaces. Our solution is to restore the monotonicity in the abstract program, using an inexpensive closure operator that precisely preserves all safety properties from the (non-monotone) abstract program without the closure. As a second contribution, we present a novel, sound and complete, yet empirically much improved algorithm for verifying abstractions, applicable to general well quasi-ordered systems. Our approach is to gradually widen the set of safety queries during the search by program states that involve fewer threads and are thus easier to decide, and are likely to finalise the decision on earlier queries. To counter the negative impact of "bad guesses", i.e. program states that turn out feasible, the search is supported by a parallel engine that generates such states; these are never selected for widening. We present an implementation of our techniques and extensive experiments on multi-threaded C programs, including device driver code from FreeBSD and Solaris. The experiments demonstrate that by exploiting monotonicity, model checking techniques - enabled by predicate abstraction - scale to realistic programs even of a few thousands of multi-threaded C code lines.
APA, Harvard, Vancouver, ISO, and other styles
29

Fitzsimons, Declan. "A psychodynamic perspective on the implementation of shared leaderships." Thesis, Cranfield University, 2013. http://dspace.lib.cranfield.ac.uk/handle/1826/7921.

Full text
Abstract:
A key debate within leadership research is whether leadership can be conceptualized as a specialized role occupied by individuals or as a shared influence process amongst all members of a group (Yukl, 2006). Since the mid-1990s some leadership scholars, as a counterpoint to the dominance of the former and using terms such as shared and distributed leadership, have attempted to elaborate new ‘post-heroic’ leadership models (Badaracco, 2001) of the latter, in which leadership is something that involves all group members. These new forms of leadership are often positioned as something that organizations can implement as part of an adaptive response to a rapidly changing world. Despite a 50-year tradition of construing leadership as a group level construct, little attention has been paid in these emerging debates to the systems psychodynamic perspective. From this perspective there are grounds for suspecting that attempts to implement shared leadership may compound rather than ameliorate issues related to adaptive challenges (Huffington, James and Armstrong, 2004). This thesis engages with the shared and distributed leadership literatures and examines how a systems psychodynamic perspective can contribute not only to debates within these literatures but to the wider controversies in the leadership literature. This thesis reports on the findings of a single, 18-month, longitudinal case study of a senior team whose managing director attempted to implement shared leadership. Using a clinical fieldwork methodology (Schein, 1987) in the systems psychodynamic tradition (Miller, 1993b; Miller and Rice, 1967), this study advances a number of contributions to theory. These include: findings that challenge existing approaches to conceptualizing leadership – shared or otherwise; the elucidation of complex unconscious team processes that are mobilized as a senior team undertakes adaptive work; and thirdly, a more sophisticated and theoretically robust conceptualization of leadership as a group level phenomenon.
APA, Harvard, Vancouver, ISO, and other styles
30

Gull, Aarron. "Cherub : a hardware distributed single shared address space memory architecture." Thesis, City University London, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.356981.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Kong, Jiantao. "Trusted data path protecting shared data in virtualized distributed systems." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33820.

Full text
Abstract:
When sharing data across multiple sites, service applications should not be trusted automatically. Services that are suspected of faulty, erroneous, or malicious behaviors, or that run on systems that may be compromised, should not be able to gain access to protected data or entrusted with the same data access rights as others. This thesis proposes a context flow model that controls the information flow in a distributed system. Each service application along with its surrounding context in a distributed system is treated as a controllable principal. This thesis defines a trust-based access control model that controls the information exchange between these principals. An online monitoring framework is used to evaluate the trustworthiness of the service applications and the underlying systems. An external communication interception runtime framework enforces trust-based access control transparently for the entire system.
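A toy illustration of a trust-based access decision is sketched below; the item names, scores and single-threshold policy are invented for the example and are far simpler than the context flow model proposed in the thesis.

```python
# Toy sketch of a trust-based access control decision. Trust scores would be
# produced by an online monitoring framework; here they are literal values,
# and the policy is a single threshold comparison for illustration only.

TRUST_REQUIRED = {"public_report.csv": 0.2, "patient_records.db": 0.9}

def may_access(principal_trust: float, item: str) -> bool:
    return principal_trust >= TRUST_REQUIRED.get(item, 1.0)  # unknown item: deny

print(may_access(0.5, "public_report.csv"))   # True
print(may_access(0.5, "patient_records.db"))  # False
```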
APA, Harvard, Vancouver, ISO, and other styles
32

Farook, Mohammad. "Fast lock-free linked lists in distributed shared memory systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ32107.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Lo, Adley Kam Wing. "Tolerating latency in software distributed shared memory systems through multithreading." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ34040.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Jaseemuddin, Muhammad. "Bidirectional ring, an interconnection network for shared-memory multiprocessor systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0004/NQ41554.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Märtens, Holger. "Options in Scan Processing for Shared-Disk Parallel Database Systems." Universität Leipzig, 1998. https://ul.qucosa.de/id/qucosa%3A34523.

Full text
Abstract:
Shared-disk database systems offer a high degree of freedom in the allocation of workload compared to shared-nothing architectures. This creates a great potential for load balancing but also introduces additional complexity into the process of query scheduling. This report surveys the problems and opportunities faced in scan processing in a shared-disk environment. We list the parameters to tune and the decisions to make, as well as some known solutions and commonsense considerations, in order to identify the most promising areas of future research.
APA, Harvard, Vancouver, ISO, and other styles
36

Dai, Donglai. "Designing efficient communication subsystems for distributed shared memory (DSM) systems /." The Ohio State University, 1999. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488186329503887.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Ausavarungnirun, Rachata. "Techniques for Shared Resource Management in Systems with Throughput Processors." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/905.

Full text
Abstract:
The continued growth of the computational capability of throughput processors has made throughput processors the platform of choice for a wide variety of high performance computing applications. Graphics Processing Units (GPUs) are a prime example of throughput processors that can deliver high performance for applications ranging from typical graphics applications to general-purpose data parallel (GPGPU) applications. However, this success has been accompanied by new performance bottlenecks throughout the memory hierarchy of GPU-based systems. This dissertation identifies and eliminates performance bottlenecks caused by major sources of interference throughout the memory hierarchy. Specifically, we provide an in-depth analysis of inter- and intra-application as well as inter-address-space interference that significantly degrades the performance and efficiency of GPU-based systems. To minimize such interference, we introduce changes to the memory hierarchy for systems with GPUs that allow the memory hierarchy to be aware of both CPU and GPU applications' characteristics. We introduce mechanisms to dynamically analyze different applications' characteristics and propose four major changes throughout the memory hierarchy. First, we introduce Memory Divergence Correction (MeDiC), a cache management mechanism that mitigates intra-application interference in GPGPU applications by allowing the shared L2 cache and the memory controller to be aware of the GPU's warp-level memory divergence characteristics. MeDiC uses this warp-level memory divergence information to give more cache space and more memory bandwidth to warps that benefit most from utilizing such resources. Our evaluations show that MeDiC significantly outperforms multiple state-of-the-art caching policies proposed for GPUs. Second, we introduce the Staged Memory Scheduler (SMS), an application-aware CPU-GPU memory request scheduler that mitigates inter-application interference in heterogeneous CPU-GPU systems. SMS creates a fundamentally new approach to memory controller design that decouples the memory controller into three significantly simpler structures, each of which has a separate task. These structures operate together to greatly improve both system performance and fairness. Our three-stage memory controller first groups requests based on row-buffer locality. This grouping allows the second stage to focus on inter-application scheduling decisions. These two stages enforce high-level policies regarding performance and fairness. As a result, the last stage is simple logic that deals only with the low-level DRAM commands and timing. SMS is also configurable: it allows the system software to trade off between the quality of service provided to the CPU versus GPU applications. Our evaluations show that SMS not only reduces inter-application interference caused by the GPU, thereby improving heterogeneous system performance, but also provides better scalability and power efficiency compared to multiple state-of-the-art memory schedulers. Third, we redesign the GPU memory management unit to efficiently handle new problems caused by the massive address translation parallelism present in GPU computation units in multi-GPU-application environments. Running multiple GPGPU applications concurrently induces significant inter-core thrashing on the shared address translation/protection units, e.g., the shared Translation Lookaside Buffer (TLB), a new phenomenon that we call inter-address-space interference.
To reduce this interference, we introduce Multi Address Space Concurrent Kernels (MASK). MASK introduces TLB-awareness throughout the GPU memory hierarchy and introduces TLB- and cache-bypassing techniques to increase the effectiveness of a shared TLB. Finally, we introduce Mosaic, a hardware-software cooperative technique that further increases the effectiveness of the TLB by modifying the memory allocation policy in the system software. Mosaic introduces a high-throughput method to support large pages in multi-GPU-application environments. The key idea is to ensure that memory allocation preserves address space contiguity, allowing pages to be coalesced without any data movement. Our evaluations show that the MASK-Mosaic combination provides a simple mechanism that eliminates the performance overhead of address translation in GPUs without significant changes to GPU hardware, thereby greatly improving GPU system performance. The key conclusion of this dissertation is that a combination of GPU-aware cache and memory management techniques can effectively mitigate the memory interference on current and future GPU-based systems as well as other types of throughput processors.
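The row-buffer-locality grouping performed by the first stage of SMS can be illustrated with a small sketch; the request format, the assumed row-index bits and the batching rule are simplifications for the example, not the hardware design.

```python
from collections import defaultdict

# Simplified sketch of the idea behind the first stage of SMS: memory requests
# are grouped into batches that come from the same source and target the same
# DRAM row, so later stages can schedule whole batches and preserve row-buffer
# hits. The request format and the row-index rule are illustrative assumptions.

def form_batches(requests, row_bits=13):
    """requests: list of (source_id, address). Returns batches keyed by
    (source, row), where the row index is assumed to be the high-order bits."""
    batches = defaultdict(list)
    for source, addr in requests:
        row = addr >> row_bits
        batches[(source, row)].append(addr)
    return batches

reqs = [("cpu0", 0x2000), ("gpu", 0x8010), ("cpu0", 0x2040), ("gpu", 0x8020)]
for (src, row), addrs in form_batches(reqs).items():
    print(src, hex(row), [hex(a) for a in addrs])
```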
APA, Harvard, Vancouver, ISO, and other styles
38

Lai, Bo-Cheng. "Cross-layer co-design of shared memory multi-core systems." Diss., Restricted to subscribing institutions, 2007. http://proquest.umi.com/pqdweb?did=1324381171&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Mortensen, Christian. "Implementation of Data Parallel Primitives on MIMD Shared Memory Systems." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44165.

Full text
Abstract:
This thesis presents an implementation of a multi-threaded C library for performing data parallel computations on MIMD shared memory systems, with support for user-defined operators and one-dimensional sparse arrays. Multi-threaded parallel execution was achieved through POSIX threads, and the library exposes several functions for performing data parallel computations directly on arrays. The implemented functions were based on a set of primitives that many data parallel programming languages have in common. The individual scalability of the primitives varied greatly: most of them only gained a significant speedup when executed on two cores, followed by a significant drop-off in speedup as more cores were added. An exception was the reduction primitive, however, which managed to achieve near-optimal speedup in most tests. The library proved unviable for expressing algorithms requiring more than one or two primitives in sequence, due to the overhead that each of them causes.
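The reduction primitive mentioned above is the kind of operation such a library exposes. The thesis implements a C library on POSIX threads; the sketch below shows the same idea in C++ with std::thread and a user-defined associative operator. The function name and chunking scheme are illustrative assumptions, not the thesis's code.

```cpp
// Minimal shared-memory reduction sketch with per-thread partial results.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

template <typename T, typename Op>
T parallel_reduce(const std::vector<T>& data, T identity, Op op, unsigned num_threads) {
    if (data.empty() || num_threads == 0) return identity;
    num_threads = static_cast<unsigned>(std::min<std::size_t>(num_threads, data.size()));
    std::vector<T> partial(num_threads, identity);
    std::vector<std::thread> workers;
    std::size_t chunk = (data.size() + num_threads - 1) / num_threads;

    for (unsigned t = 0; t < num_threads; ++t) {
        workers.emplace_back([&, t] {
            std::size_t begin = t * chunk;
            std::size_t end = std::min(begin + chunk, data.size());
            T acc = identity;
            for (std::size_t i = begin; i < end; ++i) acc = op(acc, data[i]);
            partial[t] = acc;  // each thread writes only its own slot: no locking
        });
    }
    for (auto& w : workers) w.join();

    // Combine the per-thread partial results sequentially.
    T result = identity;
    for (const T& p : partial) result = op(result, p);
    return result;
}

int main() {
    std::vector<long long> v(1000000);
    std::iota(v.begin(), v.end(), 1LL);
    long long sum = parallel_reduce<long long>(v, 0LL, std::plus<long long>(), 4);
    std::cout << "sum = " << sum << '\n';  // expect 500000500000
    return 0;
}
```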
APA, Harvard, Vancouver, ISO, and other styles
40

Syed, Moiz Masood. "Shared Solar Generation and Battery Storage Systems in Residential Microgrids." Thesis, Curtin University, 2021. http://hdl.handle.net/20.500.11937/86206.

Full text
Abstract:
This thesis presents an investigation of the performance, technical benefits, and deployment challenges of a shared energy microgrid in an Australian apartment precinct at WGV. Performance analysis was conducted to examine the load profiles of the apartment units, and the impact of the renewable installation was analysed in terms of reduced reliance on the grid. The shared configurations implemented in three different apartment complexes, together with the analysed datasets from these systems and buildings, represent a significant contribution to the research.
APA, Harvard, Vancouver, ISO, and other styles
41

Peña, Alcaraz Maite. "Analysis of capacity pricing and allocation mechanisms in shared railway systems." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99564.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Engineering Systems Division, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 187-194).
In the last 15 years, the use of rail infrastructure by different train operating companies (shared railway systems) has been proposed as a way to improve infrastructure utilization and to increase efficiency in the railway industry. Shared use requires coordination between the infrastructure manager and multiple train operators. Such coordination requires capacity planning mechanisms that determine which trains can access the infrastructure at each time (capacity allocation) and the access charges they have to pay (capacity pricing). The objective of this thesis is to contribute to the field of shared railway system coordination by 1) developing a framework to analyze the performance of shared railway systems under alternative capacity pricing and allocation mechanisms, and 2) using this framework to understand the implications of representative capacity pricing and allocation mechanisms in representative shared railway systems. There are strong interactions between capacity planning and infrastructure operations in the railway industry; the operations on the infrastructure determine the available capacity in the system. As a consequence, the framework developed in this thesis to evaluate the performance of shared railway systems under alternative capacity pricing and allocation mechanisms consists of two models: 1) a train operator model and 2) an infrastructure manager model. The train operator model is a financial model that anticipates how train operators would respond to the capacity pricing and allocation mechanisms and determines their demand for infrastructure use. The infrastructure manager model is a network optimization model that determines the optimal train timetable (the infrastructure manager's decisions) that accommodates the train operators' demands for scheduling trains, considering the topology of the system, safety constraints, and other technical aspects of the infrastructure for shared railway systems. To be able to solve the train timetabling optimization problem in meaningful instances, this thesis develops a novel approximate dynamic programming algorithm based on linear programming that extends previous algorithms proposed in the literature to effectively solve large network optimization problems. This thesis then uses the train operator model to compare the operational decisions of train operators in shared railway systems with the operational decisions of even-handed integrated railway companies. We show that train operators in shared railway systems make the same operational decisions as integrated railway companies when access charges reflect the infrastructure manager's variable costs of operating trains on the infrastructure. We also identify two cases in which the train operators may have incentives to deviate from the integrated railway systems' operational decisions: 1) when the infrastructure manager needs to recover part of the infrastructure management fixed costs, or 2) when the railway system is congested. This motivates the choice of the two case studies of this thesis, one based on the Central Corridor in Tanzania, and the other based on the Northeast Corridor in the US. We then show how to use the framework proposed in this thesis to analyze the trade-offs associated with the use of alternative mechanisms in these two cases. To our knowledge, this is the first effort to compare alternative mechanisms to price and allocate capacity in the same shared railway system. The results of this thesis show that there are important trade-offs associated with each mechanism and that none of them is superior to the others on all dimensions.
We thus recommend that system stakeholders carefully analyze the implications of alternative capacity pricing and allocation mechanisms before locking the system into one of them. This is particularly important today since several countries are currently restructuring their railway sector to allow shared use. We claim that the improved understanding of the system performance gained with the framework proposed in this thesis is important to be able to design adequate capacity pricing and allocation mechanisms that can mitigate the coordination problems of shared railway systems while maintaining the benefits of shared infrastructure in the railway industry.
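To give a flavour of the capacity pricing and allocation interaction described above, the toy sketch below has operators request track slots with a willingness to pay, while the infrastructure manager grants non-conflicting requests by value and collects an access charge. This is only an illustrative toy, not the thesis's network optimization model or its approximate dynamic programming algorithm; all names and numbers are assumptions.

```cpp
// Toy capacity allocation: greedy, conflict-free granting of slot requests.
#include <algorithm>
#include <iostream>
#include <set>
#include <string>
#include <utility>
#include <vector>

struct SlotRequest {
    std::string operator_name;
    int segment;              // track segment id
    int time_slot;            // discrete time slot
    double willingness_to_pay;
};

int main() {
    const double access_charge = 50.0;  // assumed per-slot access charge

    std::vector<SlotRequest> requests = {
        {"FreightCo", 1, 10, 120.0},
        {"IntercityCo", 1, 10, 200.0},  // conflicts with FreightCo's request
        {"FreightCo", 2, 11, 80.0},
        {"RegionalCo", 1, 12, 40.0},    // below the access charge: not requested
    };

    // Higher-value requests are considered first.
    std::sort(requests.begin(), requests.end(),
              [](const SlotRequest& a, const SlotRequest& b) {
                  return a.willingness_to_pay > b.willingness_to_pay;
              });

    std::set<std::pair<int, int>> occupied;  // (segment, time_slot) already granted
    double revenue = 0.0;

    for (const auto& r : requests) {
        if (r.willingness_to_pay < access_charge) continue;            // operator opts out
        if (!occupied.insert({r.segment, r.time_slot}).second) continue;  // conflict
        revenue += access_charge;
        std::cout << r.operator_name << " granted segment " << r.segment
                  << " at slot " << r.time_slot << '\n';
    }
    std::cout << "infrastructure manager revenue: " << revenue << '\n';
    return 0;
}
```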
by Maite (Maria Teresa) Pena-Alcaraz.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
42

Tappan, Jacqueline M. (Jacqueline Marie). "Designing an error resolution checklist for a shared manned-unmanned environment." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62776.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 151-153).
The role of unmanned vehicles in military and commercial environments continues to expand, resulting in Shared Manned-Unmanned (SMU) domains. While the introduction of unmanned vehicles can have many benefits, humans operating within these environments must shift to high-level supervisory roles, which will require them to resolve system errors. Error resolution in current Human Supervisory Control (HSC) domains is performed using a checklist; the error is quickly identified and then resolved using the steps outlined by the checklist. Background research into error resolution identified three attributes that impact the effectiveness of an error resolution checklist: domain predictability, sensor reliability, and time availability. These attributes were combined into a Checklist Attribute Model (CAM), demonstrating that HSC domains with high levels of complexity (e.g. SMU domains) are ill-suited to error resolution using traditional checklists. In particular, it was found that more support is required during error identification in such domains, where data are uncertain and unreliable. A new error resolution checklist, termed the GUIDER (Graphical User Interface for Directed Error Recovery) Probabilistic Checklist, was developed to aid the human during the error identification process in SMU domains. Evaluation was performed through a human performance experiment requiring participants to resolve errors in a simulated SMU domain using the GUIDER Probabilistic Checklist and a traditional checklist tool. Thirty-six participants were recruited, and each was assigned to a single checklist tool condition. Participants completed three simulated error scenarios. The three scenarios had varying sensor reliability levels (low, medium, high) to gauge the impact of uncertainty on the usefulness of each checklist tool. The experiment showed that the addition of error likelihood data, presented through an intuitive visualization in the GUIDER Probabilistic Checklist, improved error resolution in uncertain settings. In settings with high certainty, no difference was found between the performance of the two checklists. While these results are positive, further testing is required in more realistic settings to validate both the effectiveness of the GUIDER Probabilistic Checklist tool and the Checklist Attribute Model.
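The abstract highlights why sensor reliability matters when presenting error likelihoods. The sketch below is a generic Bayesian-update illustration of that idea, not the GUIDER tool itself: beliefs over candidate error causes are updated from a sensor report of known reliability, so unreliable sensors shift the posterior only weakly. Candidate names and numbers are assumptions.

```cpp
// Generic Bayesian update of error-cause likelihoods (illustrative only).
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct Candidate {
    std::string name;
    double prob;  // current belief that this is the true error cause
};

// A sensor flags candidate `flagged_index`. With probability `reliability` it
// flags the true cause; otherwise it flags one of the others uniformly.
void update(std::vector<Candidate>& c, std::size_t flagged_index, double reliability) {
    double false_flag = (c.size() > 1) ? (1.0 - reliability) / (c.size() - 1) : 0.0;
    double norm = 0.0;
    for (std::size_t i = 0; i < c.size(); ++i) {
        double likelihood = (i == flagged_index) ? reliability : false_flag;
        c[i].prob *= likelihood;
        norm += c[i].prob;
    }
    for (auto& x : c) x.prob /= norm;  // renormalize to a proper distribution
}

int main() {
    std::vector<Candidate> causes = {
        {"lost datalink", 1.0 / 3}, {"sensor fault", 1.0 / 3}, {"navigation drift", 1.0 / 3}};

    update(causes, 0, 0.90);  // a highly reliable sensor implicates "lost datalink"
    update(causes, 1, 0.55);  // a barely reliable sensor implicates "sensor fault"

    for (const auto& c : causes)
        std::cout << c.name << ": " << c.prob << '\n';
    return 0;
}
```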
by Jacqueline M. Tappan.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
43

Avnet, Mark Sean. "Socio-cognitive analysis of engineering systems design : shared knowledge, process, and product." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/52782.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Engineering Systems Division, 2009.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 213-222).
This research is based on the well-known but seldom stated premise that the design of complex engineered systems is done by people -- each with their own knowledge, thoughts, and views about the system being designed. To understand the implications of this social dimension, the Integrated Concurrent Engineering (ICE) environment, a real-world setting for conceptual space mission design, is examined from technical and social perspectives. An integrated analysis demonstrates a relationship among shared knowledge, process, and product. The design process is analyzed using a parameter-based Design Structure Matrix (DSM). This model, consisting of 682 dependencies among 172 parameters, is partitioned (reordered) to reveal a tightly coupled design process. Further analysis shows that making starting assumptions about design budgets leads to a straightforward process of well-defined and sequentially executed design iterations. To analyze the social aspects, a network-based model of shared knowledge is proposed. By quantifying team members' common views of design drivers, a network of shared mental models is built to reveal the structure of shared knowledge at a snapshot in time. A structural comparison of pre-session and post-session networks is used to compute a metric of change in shared knowledge. Based on survey data from 12 design sessions, a correlation is found between change in shared knowledge and each of several system attributes, including technological maturity, development time, mass, and cost. Integrated analysis of design process and shared knowledge yields three interdisciplinary insights.
(cont.) First, certain features of the system serve a central role both in the design process and in the development of shared knowledge. Second, change in shared knowledge is related to the design product. Finally, change in shared knowledge and team coordination (agreement between expected and reported interactions) are positively correlated. The thesis contributes to the literature on product development, human factors engineering, and organizational and social psychology. It proposes a rigorous means of incorporating the socio-cognitive aspects of design into the practice of systems engineering. Lastly, the thesis offers a set of recommendations for the formation and management of ICE design facilities and discusses the applicability of the proposed methodology to the full-scale development of complex engineered systems.
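The abstract describes comparing pre-session and post-session shared-knowledge networks to compute a metric of change. The sketch below is a generic structural-comparison illustration of that step, not the thesis's actual metric: two weighted adjacency matrices over the same team members are compared with a normalized distance. The matrix contents are assumptions.

```cpp
// Generic network-change score between pre- and post-session weighted networks.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Normalized Frobenius distance: 0 means identical shared-knowledge structure,
// larger values mean more change between the two snapshots.
double network_change(const Matrix& pre, const Matrix& post) {
    double diff = 0.0, scale = 0.0;
    for (std::size_t i = 0; i < pre.size(); ++i)
        for (std::size_t j = 0; j < pre[i].size(); ++j) {
            diff += (post[i][j] - pre[i][j]) * (post[i][j] - pre[i][j]);
            scale += pre[i][j] * pre[i][j] + post[i][j] * post[i][j];
        }
    return scale > 0.0 ? std::sqrt(diff / scale) : 0.0;
}

int main() {
    // Edge weight = similarity of two members' views of the design drivers.
    Matrix pre  = {{0.0, 0.4, 0.1}, {0.4, 0.0, 0.2}, {0.1, 0.2, 0.0}};
    Matrix post = {{0.0, 0.7, 0.5}, {0.7, 0.0, 0.6}, {0.5, 0.6, 0.0}};
    std::cout << "change in shared knowledge: " << network_change(pre, post) << '\n';
    return 0;
}
```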
by Mark Sean Avnet.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
44

Chan, Serena 1977. "Architectures for a space-based information network with shared on-orbit processing." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/30353.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Engineering Systems Division, 2005.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 335-343).
This dissertation provides a top-level assessment of technology design choices for the architecture of a space-based information network with shared on-orbit processing. Networking is an efficient method of sharing communications resources and lowering the cost of communications, providing better interoperability and data integration for multiple satellites. The current space communications architecture sets a critical limitation on the collection of raw data sent to the ground. By introducing powerful space-borne processing, compression of raw data can alleviate the need for expensive and expansive downlinks. Moreover, distribution of processed data directly from space sensors to the end-users may be more easily realized. A space-based information network backbone can act as the transport network for mission satellites as well as enable the concept of decoupled, shared, and perhaps distributed space-borne processing for space-based assets. Optical crosslinks are the enabling technology for creating a cost-effective network capable of supporting high data rates. In this dissertation, the space-based network backbone is designed to meet a number of mission requirements by optimizing over constellation topologies under different traffic models. With high network capacity availability, space-borne processing can be accessible by any mission satellite attached to the network. Space-borne processing capabilities can be enhanced with commercial processors that are tolerant of radiation and replenished periodically (as frequently as every two years).
(cont.) Additionally, innovative ways of using a space-based information network can revolutionize satellite communications and space missions. Applications include distributed computing in space, interoperable space communications, multiplatform distributed satellite communications, coherent distributed space sensing, multisensor data fusion, and restoration of disconnected global terrestrial networks after a disaster. Lastly, the consolidation of all the different communications assets into a horizontally integrated space-based network infrastructure calls for the space-based network backbone to be designed in a generic manner. A coherent infrastructure can satisfy the goals of interoperability, flexibility, and scalability, and allows the system to be evolutionary. This transformational vision of a generic space-based information network allows for growth to accommodate civilian demands, lowers the price of entry for the commercial sector, and makes way for innovation to enhance and provide additional value to military systems.
by Serena Chan.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
45

Sun, Jiawen. "The GraphGrind framework : fast graph analytics on large shared-memory systems." Thesis, Queen's University Belfast, 2018. https://pure.qub.ac.uk/portal/en/theses/the-graphgrind-framework-fast-graph-analytics-on-large-sharedmemory-systems(e1eb006f-3a68-4d05-91fe-961d04b42694).html.

Full text
Abstract:
As shared memory systems support terabyte-sized main memory, they provide an opportunity to perform efficient graph analytics on a single machine. Graph analytics is characterised by frequent synchronisation, which is addressed in part by shared memory systems. However, performance is limited by load imbalance and poor memory locality, which originate in the irregular structure of small-world graphs. This dissertation demonstrates how graph partitioning can be used to optimise (i) load balance, (ii) Non-Uniform Memory Access (NUMA) locality and (iii) temporal locality of graph analytics in shared memory systems. The developed techniques are implemented in GraphGrind, a new shared memory graph analytics framework. At first, this dissertation shows that heuristic edge-balanced partitioning results in an imbalance in the number of vertices per partition. Thus, load imbalance exists between partitions, either for loops iterating over vertices, or for loops iterating over edges. To address this issue, this dissertation introduces a classification of algorithms to distinguish whether they algorithmically benefit from edge-balanced or vertex-balanced partitioning. This classification supports the adaptation of partitions to the characteristics of graph algorithms. Evaluation in GraphGrind shows that this approach outperforms state-of-the-art graph analytics frameworks for shared memory, including Ligra by 1.46x on average and Polymer by 1.16x on average, using a variety of graph algorithms and datasets. Secondly, this dissertation demonstrates that increasing the number of graph partitions is effective in improving temporal locality due to smaller working sets. However, increasing the number of partitions results in vertex replication in some graph data structures. This dissertation therefore adopts a graph layout that is immune to vertex replication, and an automatic graph traversal algorithm is designed that extends the previously established graph traversal heuristics to a 3-way graph layout choice. This new algorithm furthermore depends upon the classification of graph algorithms introduced in the first part of the work. These techniques achieve an average speedup of 1.79x over Ligra and 1.42x over Polymer. Finally, this dissertation presents a graph ordering algorithm that challenges the widely accepted heuristic of balancing the number of edges per partition and minimising edge or vertex cut. This algorithm balances the number of edges per partition as well as the number of unique destinations of those edges. It balances edges and vertices for graphs with a power-law degree distribution. Moreover, this dissertation shows that the performance of graph ordering depends upon the characteristics of graph analytics frameworks, such as NUMA-awareness. This graph ordering algorithm achieves an average speedup of 1.87x over Ligra and 1.51x over Polymer.
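The edge-balanced versus vertex-balanced distinction in the abstract can be illustrated with a minimal partitioner sketch (not GraphGrind's implementation): contiguous vertex ranges are cut so that each partition holds roughly the same number of edges, which on a power-law degree sequence yields very unequal vertex counts. The function name and example degree sequence are assumptions.

```cpp
// Minimal edge-balanced contiguous partitioning over a degree array.
#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

// Returns boundaries: partition p covers vertices [cuts[p], cuts[p+1]).
std::vector<std::size_t> edge_balanced_cuts(const std::vector<std::size_t>& degree,
                                            std::size_t parts) {
    std::size_t total_edges = std::accumulate(degree.begin(), degree.end(), std::size_t{0});
    std::size_t target = (total_edges + parts - 1) / parts;  // edges per partition
    std::vector<std::size_t> cuts = {0};
    std::size_t acc = 0;
    for (std::size_t v = 0; v < degree.size(); ++v) {
        acc += degree[v];
        if (acc >= target * cuts.size() && cuts.size() < parts) cuts.push_back(v + 1);
    }
    while (cuts.size() < parts) cuts.push_back(degree.size());
    cuts.push_back(degree.size());
    return cuts;
}

int main() {
    // A skewed, power-law-like degree sequence: one hub and many low-degree vertices.
    std::vector<std::size_t> degree = {1000, 8, 6, 5, 4, 3, 3, 2, 2, 1, 1, 1};
    auto cuts = edge_balanced_cuts(degree, 3);
    for (std::size_t p = 0; p + 1 < cuts.size(); ++p)
        std::cout << "partition " << p << ": vertices [" << cuts[p] << ", "
                  << cuts[p + 1] << ")\n";  // the hub ends up in a tiny partition
    return 0;
}
```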
APA, Harvard, Vancouver, ISO, and other styles
46

Faltemier, Timothy Collin. "A groupware interface to a shared file system." Texas A&M University, 2004. http://hdl.handle.net/1969.1/1352.

Full text
Abstract:
Current shared file systems (NFS and SAMBA) are based on the local area network model. For these file systems, performance is the major issue. However, as the Internet grows, so does the distance between users and the local area network, and with this increase in distance, the latency increases as well. This creates a problem when multiple users attempt to work in a shared environment. Traditionally, the only way to collaborate over the Internet required the use of locks. These requirements motivated the creation of the State Difference Transformation (SDT) algorithm, which allows users non-blocking and unconstrained interaction across the Internet on a tree-based structure. Fine Grain Locking, on the other hand, allows a user to set a lock on a character or range of characters while using a form of the transformation algorithm listed above. This thesis proposes an implementation that integrates these two technologies and demonstrates the effectiveness and flexibility of State Difference Transformation. The implementation includes two applications that can be used to further research in both the transformation and locking communities. The first application allows users to create tests for SDT and Fine Grain Locking and verify the correctness of the algorithms in any given situation. The second application furthers this research by creating a real-world groupware interface to a shared file system based on a client-server architecture. This implementation demonstrates the usability and robustness of these algorithms in real-world situations.
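The non-blocking collaboration described above rests on transforming concurrent operations instead of locking. The sketch below shows the general operational-transformation idea that algorithms such as State Difference Transformation build on; it is not the SDT algorithm itself, and the struct and tie-breaking rule are assumptions for exposition.

```cpp
// Transform a remote insert against a concurrent local insert so two replicas
// converge without locks (generic operational-transformation sketch).
#include <cstddef>
#include <iostream>
#include <string>

struct Insert {
    std::size_t pos;   // offset into the shared text
    std::string text;
    int site;          // breaks ties deterministically between sites
};

// Shift the remote operation if the concurrent local operation inserted earlier
// in the document (or at the same offset with a lower site id).
Insert transform(Insert remote, const Insert& local) {
    if (local.pos < remote.pos || (local.pos == remote.pos && local.site < remote.site))
        remote.pos += local.text.size();
    return remote;
}

void apply(std::string& doc, const Insert& op) { doc.insert(op.pos, op.text); }

int main() {
    std::string site_a = "shared file", site_b = "shared file";
    Insert a{0, "a ", 1};     // generated at site A
    Insert b{7, "text ", 2};  // generated concurrently at site B

    apply(site_a, a); apply(site_a, transform(b, a));  // A applies its op, then B's
    apply(site_b, b); apply(site_b, transform(a, b));  // B applies its op, then A's

    std::cout << site_a << '\n' << site_b << '\n';     // both print "a shared text file"
    return 0;
}
```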
APA, Harvard, Vancouver, ISO, and other styles
47

Johansson, Oscar, and Max Forsman. "Shared computer systems and groupware development : Escaping the personal computer paradigm." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-75953.

Full text
Abstract:
For the majority of the computer's existence, we humans have interacted with it in a similar way, usually with a strict one-to-one relationship between user and machine. This is reflected by the design of most computers, operating systems and user applications on the market today, which are typically intended to be operated by only a single user. When computers are used for teamwork and cooperation, this design philosophy can be restrictive and problematic. This paper investigates the development of shared software intended for multiple users and the impact of the single-user bias in this context. A prototype software system was developed in order to evaluate different development methods for shared applications and discover potential challenges and limitations of this kind of software. It was found that the development of applications for multiple users can be severely limited by the target operating system and hardware platform. The authors conclude that new platforms are required to develop shared software more efficiently. These platforms should be tailored to provide robust support for multiple concurrent users. This work was carried out together with SAAB Air Traffic Management in Växjö, Sweden and is a bachelor's thesis in computer engineering at Linnaeus University.
APA, Harvard, Vancouver, ISO, and other styles
48

Prytz, Erik. "Performance and Shared Understanding in Mixed C2-Systems." Thesis, Linköping University, Department of Computer and Information Science, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-56430.

Full text
Abstract:

OBJECTIVE: This thesis had two purposes. The main one was to examine how mixed conditions affect a Command & Control (C2) system, particularly in terms of shared understanding, situation awareness (SA), performance and workload. Mixed conditions refer here to situations where subsystems of a larger C2-system differ in terms of capabilities, particularly capabilities that influence the understanding of a situation, e.g. sensors or communication, which could affect the C2-capabilities when working toward a common goal. The second purpose of this thesis was to investigate a newly developed tool for measuring shared understanding, Shared Priorities, in terms of validity and usefulness.

METHOD: A number of hypotheses were constructed and investigated in a controlled experiment using a microworld, C3Fire, in which two-man teams fought a simulated forest fire. The independent variable manipulated was the type of support system used. In one condition each participant used a computer interface, the second was a mixed condition where one participant used the computer interface and one used a paper map, and in the last condition both participants used a paper map. Different questionnaires developed to measure SA, workload etc. were used to measure the dependent variables.

RESULTS: The statistical analysis performed on the collected data showed that performance and SA were comparatively better when both participants used the computer interface than in the mixed condition, which in turn was better than when both participants used a paper map. For workload and teamwork, no differences between the mixed condition and the dual map condition were found. As for the Shared Priorities measurement, no differences were found between any of the conditions.

CONCLUSION: A C2-system in which some additional capabilities are introduced for some but not all subsystems may not benefit in some regards, e.g. workload and teamwork, but could improve in others, e.g. SA and performance. A Structural Equation Model (SEM) shows that the theoretical constructs of SA, workload, teamwork and performance are related and affect each other, so that the workload of the system negatively affects the teamwork and SA, while the teamwork may affect SA positively and a high SA enables high performance. 

APA, Harvard, Vancouver, ISO, and other styles
49

Heitmann, Carsten Andreas. "DISCOlab: a toolkit for development of shared display systems in UbiCollab." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9179.

Full text
Abstract:

Shared displays are important tools for promoting collaboration. Ubiquitous computing presents new requirements for the design of shared display systems, and contextualisation of information at shared displays is becoming more important. The ability to rapidly create shared display systems is motivated by the fact that shared displays play central roles in collaboration. Low-level implementation issues, common to shared display systems, can be an obstacle to this. A toolkit for the creation of such systems is therefore needed to provide basic shared display functionality to developers. This master thesis presents a toolkit for creating shared display applications on UbiCollab, a platform supporting collaborative work in ubiquitous environments. The work shows the development of the toolkit and how the toolkit can be used to create a shared display system. The toolkit takes advantage of the opportunities the UbiCollab platform provides for contextualisation of information.

APA, Harvard, Vancouver, ISO, and other styles
50

Mikael, Silvén. "Evaluation and Comparison of Programming Frameworks for Shared Memory Multicore Systems." Thesis, Linköpings universitet, Programvara och system, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-109674.

Full text
Abstract:
In this master's thesis we explore past work on classifying algorithmic problems. These classifications are used to select problems for evaluating different programming languages and frameworks. We use a subset of the 13 Dwarfs of Berkeley, more specifically: dense and sparse linear algebra, spectral methods, graph traversal and MapReduce. We evaluate the performance and ease of use of two programming frameworks and two languages of interest to Etteplan, a large consultancy company: C++ using either OpenMP or MPI, and Erlang. We show that MPI can be used to speed up programs in a shared memory system; it handles structured problems well but struggles with more dynamic workloads such as the graph traversal algorithm BFS that we use as a benchmark. Additionally, we show that OpenMP is an easy way to gain performance by utilizing thread-level parallelism in most situations. Erlang, the concurrency-focused programming language from Ericsson, is explored as an alternative to C++. However, its immutable tree data structures cannot compete with efficient in-place updates of memory arrays with constant access time.
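As an example of the kind of OpenMP thread-level parallelism the abstract credits for easy speedups, the sketch below parallelizes a dense dot product with a reduction clause. The specific kernel is an illustrative assumption rather than one of the thesis benchmarks; compile with e.g. `g++ -O2 -fopenmp dot.cpp`.

```cpp
// OpenMP parallel loop with a reduction over a dense vector.
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    const std::size_t n = 10000000;
    std::vector<double> x(n, 1.5), y(n, 2.0);

    double dot = 0.0;
    // OpenMP splits the iterations across threads and combines the per-thread
    // partial sums at the end of the loop.
    #pragma omp parallel for reduction(+ : dot)
    for (long long i = 0; i < static_cast<long long>(n); ++i)
        dot += x[i] * y[i];

    std::cout << "dot = " << dot << '\n';  // expect 3.0 * n
    return 0;
}
```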
APA, Harvard, Vancouver, ISO, and other styles