Dissertations on the topic "Large-scale infrastructures"

To see the other types of publications on this topic, follow the link: Large-scale infrastructures.

Consult the top 49 dissertations for your research on the topic "Large-scale infrastructures".

Next to every work in the list, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract whenever the relevant metadata are available.

Browse dissertations from many different disciplines and compile your bibliography correctly.

1

Capizzi, Sirio <1980>. „A tuple space implementation for large-scale infrastructures“. Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/914/1/Tesi_Capizzi_Sirio.pdf.

Abstract:
Coordinating activities in a distributed system is an open research topic. Several models have been proposed to achieve this purpose, such as message passing, publish/subscribe, workflows or tuple spaces. We have focused on the latter model, trying to overcome some of its disadvantages. In particular, we have applied spatial database techniques to tuple spaces in order to increase their performance when handling a large number of tuples. Moreover, we have studied how structured peer-to-peer approaches can be applied to better distribute tuples over large networks. Using some of these results, we have developed a tuple space implementation for the Globus Toolkit that can be used by Grid applications as a coordination service. The development of such a service has been quite challenging due to the limitations imposed by XML serialization, which have heavily influenced its design. Nevertheless, we were able to complete the implementation and use it to build two different types of test applications: a completely parallelizable one and a plasma simulation that is not completely parallelizable. Using this last application, we have compared the performance of our service against MPI. Finally, we have developed and tested a simple workflow in order to show the versatility of our service.
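The tuple-space coordination model this abstract builds on can be illustrated with a minimal sketch. This is a toy, in-memory stand-in, not the thesis's Globus Toolkit service; the convention of using `None` as a wildcard in templates is an assumption for illustration:

```python
# Minimal in-memory tuple space: put/read/take with template matching.
class TupleSpace:
    def __init__(self):
        self._tuples = []

    def put(self, tup):
        # Store a tuple in the space.
        self._tuples.append(tuple(tup))

    def _match(self, template, tup):
        # A template matches a tuple of the same arity when every
        # non-None field is equal; None acts as a wildcard.
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup)
        )

    def read(self, template):
        # Non-destructive lookup: return the first match, or None.
        return next((t for t in self._tuples if self._match(template, t)), None)

    def take(self, template):
        # Destructive lookup: remove and return the first match.
        for i, t in enumerate(self._tuples):
            if self._match(template, t):
                return self._tuples.pop(i)
        return None
```

For example, after `put(("task", 1, "pending"))`, the template `("task", None, "pending")` retrieves that tuple; the spatial-indexing and distribution techniques studied in the thesis are precisely about making `_match` scale beyond this linear scan.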
2

Capizzi, Sirio <1980>. „A tuple space implementation for large-scale infrastructures“. Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/914/.

3

Gattoni, Gaia. „Analysis of the infrastructures to build immersive visit at large scale“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Abstract:
This thesis introduces some relevant notions to demonstrate how digital innovation may benefit all phases of the development of a construction project. Through the use of the BIM technique, it has proven possible to optimize the design, construction, and administration phases of structures. With the aid of virtual reality, it is feasible to reproduce a fully immersive experience of the structure during the design phase. The two scenarios illustrated in this thesis should be considered as two different approaches to technological innovation. From the first scenario, the LaVallée project, it can be stated that the BIM methodology applied in this context, and then expanded to the concept of CIM, is essential for the district's construction. The purpose is to predict and describe the quality of the environment and urban spaces in a project situation and to validate the results obtained. To do this, it is necessary to create an immersive visit with 3D modeling of the LaVallée area using BIM data, where these data are collected from different project partners in IFC format. With the knowledge gained from this study, I was able to apply these skills to a different scenario: the Rimini port. The goal of this final part is to reconstruct a three-dimensional visualization starting from a very basic level of information, which means looking for methods and tools that can easily represent a virtual visit through the use of 2D data.
4

Moise, Diana Maria. „Optimizing data management for MapReduce applications on large-scale distributed infrastructures“. Thesis, Cachan, Ecole normale supérieure, 2011. http://www.theses.fr/2011DENS0067/document.

Abstract:
Data-intensive applications are nowadays widely used in various domains to extract and process information, to design complex systems, to perform simulations of real models, etc. These applications exhibit challenging requirements in terms of both storage and computation. Specialized abstractions like Google's MapReduce were developed to efficiently manage the workloads of data-intensive applications. The MapReduce abstraction has revolutionized the data-intensive community and has rapidly spread to various research and production areas. An open-source implementation of Google's abstraction was provided by Yahoo! through the Hadoop project. This framework is considered the reference MapReduce implementation and is currently heavily used for various purposes and on several infrastructures. To achieve high-performance MapReduce processing, we propose a concurrency-optimized file system for MapReduce frameworks. As a starting point, we rely on BlobSeer, a framework that was designed as a solution to the challenge of efficiently storing data generated by data-intensive applications running at large scales. We have built the BlobSeer File System (BSFS), with the goal of providing high throughput under heavy concurrency to MapReduce applications. We also study several aspects related to intermediate data management in MapReduce frameworks. We investigate the requirements of MapReduce intermediate data at two levels: inside the same job, and during the execution of pipeline applications. Finally, we show how BSFS can enable extensions to the de facto MapReduce implementation, Hadoop, such as support for the append operation. This work also comprises the evaluation and the results obtained in the context of grid and cloud environments.
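The MapReduce abstraction discussed above can be sketched in a few lines. This is a toy single-process engine for illustration only; Hadoop's actual API, shuffle, and distributed execution model are far richer:

```python
# Toy MapReduce engine: map each record to (key, value) pairs,
# group values by key (the "shuffle"), then reduce each group.
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    groups = defaultdict(list)
    for record in records:                      # map phase
        for key, value in mapper(record):
            groups[key].append(value)           # shuffle: group by key
    # reduce phase: one reducer call per key
    return {key: reducer(key, values) for key, values in groups.items()}

def word_mapper(line):
    # Emit (word, 1) for every word in a line.
    return [(word, 1) for word in line.split()]

def count_reducer(word, counts):
    return sum(counts)
```

`map_reduce(["a b a", "b c"], word_mapper, count_reducer)` yields the classic word count; the intermediate `groups` structure is exactly the "intermediate data" whose management the thesis studies.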
5

Tsafack, Chetsa Ghislain Landry. „System Profiling and Green Capabilities for Large Scale and Distributed Infrastructures“. PhD thesis, Ecole normale supérieure de Lyon - ENS LYON, 2013. http://tel.archives-ouvertes.fr/tel-00946583.

Abstract:
Nowadays, reducing the energy consumption of large scale and distributed infrastructures has truly become a challenge for both industry and academia. This is corroborated by the many efforts aiming to reduce the energy consumption of those systems. Initiatives for reducing the energy consumption of large scale and distributed infrastructures can, without loss of generality, be broken into hardware and software initiatives. Unlike their hardware counterparts, software solutions to the energy reduction problem in large scale and distributed infrastructures hardly result in real deployments. On the one hand, this can be justified by the fact that they are application oriented. On the other hand, their failure can be attributed to their complex nature, which often requires vast technical knowledge behind the proposed solutions and/or a thorough understanding of the applications at hand. This restricts their use to a limited number of experts, because users usually lack adequate skills. In addition, although subsystems including the memory are becoming more and more power hungry, current software energy reduction techniques fail to take them into account. This thesis proposes a methodology for reducing the energy consumption of large scale and distributed infrastructures. Broken into three steps, (i) phase identification, (ii) phase characterization, and (iii) phase identification and system reconfiguration, our methodology abstracts away from any individual application as it focuses on the infrastructure, whose runtime behaviour it analyses in order to take reconfiguration decisions accordingly. The proposed methodology is implemented and evaluated in high performance computing (HPC) clusters of varied sizes through a Multi-Resource Energy Efficient Framework (MREEF). MREEF implements the proposed energy reduction methodology so as to leave users with the choice of implementing their own system reconfiguration decisions depending on their needs.
Experimental results show that our methodology reduces the energy consumption of the overall infrastructure by up to 24% with less than 7% performance degradation. By taking into account all subsystems, our experiments demonstrate that the energy reduction problem in large scale and distributed infrastructures can benefit from more than "the traditional" processor frequency scaling. Experiments in clusters of varied sizes demonstrate that MREEF, and therefore our methodology, can easily be extended to a large number of energy-aware clusters. The extension of MREEF to virtualized environments such as clouds shows that the proposed methodology goes beyond HPC systems and can be used in many other computing environments.
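The phase-identification step of the methodology can be illustrated with a simplified sketch. This is hypothetical: MREEF's actual detector, metrics, and thresholds differ. Here a "phase" is a run of resource-usage samples, and a new phase begins when the current sample drifts too far from the sample that opened the running phase:

```python
# Toy runtime phase detector over vectors of subsystem utilizations
# (e.g. (cpu, memory)). A new phase starts when the Manhattan distance
# from the phase's first sample exceeds a threshold.
def detect_phases(samples, threshold):
    phases, start = [], 0
    for i in range(1, len(samples)):
        dist = sum(abs(a - b) for a, b in zip(samples[start], samples[i]))
        if dist > threshold:
            phases.append((start, i))   # close the current phase
            start = i                   # open a new one at sample i
    phases.append((start, len(samples)))
    return [(s, e) for s, e in phases]  # half-open (start, end) index pairs
```

Each detected phase would then be characterized (step ii) and mapped to a reconfiguration decision such as a frequency change (step iii).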
6

Rais, Issam. „Discover, model and combine energy leverages for large scale energy efficient infrastructures“. Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEN051/document.

Abstract:
Energy consumption is a growing concern as we approach Exascale computing: an Exascale machine will perform 10^18 operations per second, ten times more than today's best public supercomputers. In 2017, data centers consumed about 7% of the total electricity demand and were responsible for 2% of global carbon emissions. With the multiplication of connected devices per person around the world, reducing the energy consumption of large scale computing systems is a mandatory step towards building a sustainable digital society. Several techniques, which we call leverages, have been developed in order to lower the electrical consumption of computing facilities, at multiple levels: infrastructure, hardware, middleware, and application. It is urgent to embrace energy efficiency as a major concern of our modern computing facilities, and using these leverages well is mandatory for better energy efficiency. Many leverages are available in large scale computing centers, yet in spite of their potential gains, users and administrators often do not use them, or do not use them fully; applied unwisely, these techniques, alone or combined, can be complicated and even counterproductive. This thesis investigates the discovery, understanding and smart usage of the leverages available in a large scale data center or supercomputer. We first study various leverages individually in order to understand them; we then combine them with other leverages and propose a generic solution for the dynamic usage of combined leverages.
7

Kammouh, Omar. „Resilience assessment of physical infrastructures and social systems of large scale communities“. Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2735173.

8

Braun, Johannes [Verfasser], Johannes [Akademischer Betreuer] Buchmann und Max [Akademischer Betreuer] Mühlhäuser. „Maintaining Security and Trust in Large Scale Public Key Infrastructures / Johannes Braun. Betreuer: Johannes Buchmann ; Max Mühlhäuser“. Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2015. http://d-nb.info/1111113351/34.

9

Esteves, José Jurandir Alves. „Optimization of network slice placement in distributed large-scale infrastructures : from heuristics to controlled deep reinforcement learning“. Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS325.

Abstract:
This PhD thesis investigates how to optimize Network Slice Placement in distributed large-scale infrastructures, focusing on online heuristic and Deep Reinforcement Learning (DRL) based approaches. First, we rely on Integer Linear Programming (ILP) to propose a data model for enabling on-Edge and on-Network Slice Placement. Contrary to most studies related to placement in the NFV context, the proposed ILP model considers complex Network Slice topologies and pays special attention to the geographic location of Network Slice Users and its impact on the End-to-End (E2E) latency. Extensive numerical experiments show the relevance of taking the user location constraints into account. Then, we rely on an approach called the "Power of Two Choices" (P2C) to propose an online heuristic algorithm for the problem which is adapted to support placement on large-scale distributed infrastructures while integrating Edge-specific constraints. The evaluation results show the good performance of the heuristic, which solves the problem in a few seconds under a large-scale scenario. The heuristic also improves the acceptance ratio of Network Slice Placement Requests when compared against a deterministic online ILP-based solution. Finally, we investigate the use of ML methods, more specifically DRL, for increasing the scalability and automation of Network Slice Placement, considering a multi-objective optimization approach to the problem. We first propose a DRL algorithm for Network Slice Placement which relies on the Advantage Actor Critic algorithm for fast learning, and on Graph Convolutional Networks for feature extraction automation. Then, we propose an approach we call Heuristically Assisted Deep Reinforcement Learning (HA-DRL), which uses heuristics to control the learning and execution of the DRL agent. We evaluate this solution through simulations under stationary, cycle-stationary and non-stationary network load conditions. The evaluation results show that heuristic control is an efficient way of speeding up the learning process of DRL; it achieves a substantial gain in resource utilization, reduces performance degradation, and is more reliable under unpredictable changes in network load than non-controlled DRL algorithms.
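The "Power of Two Choices" idea behind the heuristic can be sketched as follows. This is illustrative only; the thesis's algorithm additionally handles slice topologies, latency, and Edge-specific constraints. Sampling two candidate nodes and picking the less-loaded one gives near-balanced placement without inspecting the whole infrastructure:

```python
# P2C placement sketch: sample two random nodes, place the demand on
# the one with the smaller current load.
import random

def p2c_place(node_load, demand, rng):
    a, b = rng.sample(list(node_load), 2)       # two random candidates
    chosen = a if node_load[a] <= node_load[b] else b
    node_load[chosen] += demand                 # commit the placement
    return chosen
```

The well-known result is that this two-sample rule reduces load imbalance exponentially compared with a single random choice, which is what makes it attractive at large scale.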
10

Rosa, Marcos Leite [Verfasser], Sophie [Akademischer Betreuer] Wolfrum und Joana Carla Soares [Akademischer Betreuer] Goncalves. „From modern infrastructures to operational networks: The qualification of local space at existing large scale utility infrastructure: a method for reading community-driven initiatives. The case of São Paulo. / Marcos Leite Rosa. Betreuer: Sophie Wolfrum. Gutachter: Joana Carla Soares Goncalves ; Sophie Wolfrum“. München : Universitätsbibliothek der TU München, 2015. http://d-nb.info/1081488069/34.

11

Hummel, Robert A. (Robert Andrew). „Infrastructure for large-scale tests in marine autonomy“. Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/70436.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 141-147).
This thesis focuses on the development of infrastructure for research with large-scale autonomous marine vehicle fleets and the design of sampling trajectories for compressive sensing (CS). The newly developed infrastructure includes a bare-bones acoustic modem and two types of low-cost and scalable vehicles. One vehicle is a holonomic raft designed for station-keeping and precise maneuvering, and the other is a streamlined kayak for traveling longer distances. The acoustic modem, like the vehicles, is inexpensive and scalable, providing the capability of a large-scale, low-cost underwater acoustic network. With these vehicles and modems we utilize compressive sensing, a recently developed framework for sampling sparse signals that offers dramatic reductions in the number of samples required for high fidelity reconstruction of a field. Our novel CS sampling techniques introduce engineering constraints including movement and measurement costs to better apply CS to sampling with mobile agents. The vehicles and modems, along with compressive sensing, strengthen the movement towards large scale autonomy in the ocean environment.
by Robert Andrew Hummel.
S.M.
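The trade-off the abstract describes, sampling for high reconstruction fidelity while respecting movement costs, can be caricatured with a toy greedy site-selection rule. This is hypothetical: the thesis's CS-based trajectory design is far more principled, and the 1-D site list and weighting below are made-up for illustration:

```python
# Toy movement-aware sampling: score each unsampled site by how far it
# is from already-sampled sites (a crude "information" proxy), minus a
# penalty for travel distance from the vehicle's current position.
def next_site(sites, sampled, position, travel_weight=0.5):
    def score(s):
        info = min(abs(s - t) for t in sampled) if sampled else 1.0
        return info - travel_weight * abs(s - position)
    return max((s for s in sites if s not in sampled), key=score)
```

Raising `travel_weight` models a vehicle for which movement is expensive, pulling the next measurement closer to the current position.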
12

Herrero-López, Sergio. „Large-scale simulator for global data infrastructure optimization“. Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/70759.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, February 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 165-172).
Companies depend on information systems to control their operations. During the last decade, Information Technology (IT) infrastructures have grown in scale and complexity. Any large company runs many enterprise applications that serve data to thousands of users which, in turn, consume this information in different locations concurrently and collaboratively. The understanding by the enterprise of its own systems is often limited. No one person in the organization has a complete picture of the way in which applications share and move data files between data centers. In this dissertation an IT infrastructure simulator is developed to evaluate the performance, availability and reliability of large-scale computer systems. The goal is to provide data center operators with a tool to understand the consequences of infrastructure updates. These alterations can include the deployment of new network topologies, hardware configurations or software applications. The simulator was constructed using a multilayered approach and was optimized for multicore scalability. The results produced by the simulator were validated against the real system of a Fortune 500 company. This work pioneers the simulation of large-scale IT infrastructures. It not only reproduces the behavior of data centers at a macroscopic scale, but allows operators to navigate down to the detail of individual elements, such as processors or network links. The combination of queueing networks representing hardware components with message sequences modeling enterprise software enabled reaching a scale and complexity not available in previous research in this area.
by Sergio Herrero-López.
Ph.D.
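The queueing-network layer mentioned above can be illustrated at its smallest scale with a single-server M/M/1 sketch. This is illustrative only; the dissertation's simulator composes many such queues with enterprise message sequences, and the rates below are made-up parameters:

```python
# Minimal M/M/1 simulation: Poisson arrivals (rate lam), exponential
# service (rate mu), one server, FIFO. Returns the mean wait in queue.
import random

def mm1_mean_wait(lam, mu, n_jobs, seed=42):
    rng = random.Random(seed)
    clock = 0.0            # time of the current arrival
    server_free_at = 0.0   # time the server becomes idle
    total_wait = 0.0
    for _ in range(n_jobs):
        clock += rng.expovariate(lam)          # next arrival
        start = max(clock, server_free_at)     # wait if server busy
        total_wait += start - clock
        server_free_at = start + rng.expovariate(mu)  # service time
    return total_wait / n_jobs
```

For lam=0.5 and mu=1.0, queueing theory predicts a mean wait of rho/(mu - lam) = 1.0, which a long simulated run should approximate.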
13

Zeng, Diqi. „Cyclone risk assessment of large-scale distributed infrastructure systems“. Thesis, University of Sydney, 2021. https://hdl.handle.net/2123/24514.

Abstract:
Coastal communities are vulnerable to tropical cyclones. Community resilience assessment for hazard mitigation planning demands a whole-of-community approach to risk assessment under tropical cyclones. Community risk assessment is complicated since it must capture the spatial correlation among individual facilities due to the similar demands placed by a cyclone event and the similar infrastructure capacities due to common engineering practices. However, the impact of such spatial correlation has seldom been considered in cyclone risk assessment. This study develops advanced stochastic models and methodology to evaluate the collective risk of large-scale distributed infrastructure systems under a scenario tropical cyclone, considering the spatial correlations of wind demands and of structural capacities modelled by fragility functions. Wind-dependent correlation of fragility functions is derived from the correlation of structural resistances using joint fragility analysis. A general probabilistic framework is proposed to evaluate the damage of infrastructure systems based on joint fragility functions, where the stochastic dependence between the fragility functions of individual facilities is approximated by a Gaussian copula. A stochastic model is developed to simulate the spatially correlated wind speeds from a tropical cyclone, and wind speed statistics based on three cyclone wind field models of different complexity are examined. The impact of wind speed uncertainty and spatial correlation on risk assessment is investigated by evaluating the cyclone loss of an electric power system under three loss metrics: damage ratio, power outage ratio and outage cost to electricity customers. Since the risk assessment of a large-scale infrastructure system is computationally challenging, an interpolation technique based on random field discretization is developed, which can simulate spatially correlated damage to infrastructure components in a scalable manner.
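The Gaussian-copula dependence described above can be sketched for two facilities. This is a simplified stand-in for the framework in the abstract: the failure probabilities and the 0.7 correlation are made-up, and a real fragility function would depend on wind speed. Correlated standard normals pushed through the normal CDF give uniform marginals with Gaussian-copula dependence, which are then compared against each facility's failure probability:

```python
# Two-facility Gaussian-copula damage sampler.
import math
import random

def phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sample_joint_damage(p1, p2, rho, rng):
    # Correlated standard normals via a 2x2 Cholesky factor.
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    # phi(z) is uniform on (0,1); facility i fails when it falls below p_i.
    return (phi(z1) < p1, phi(z2) < p2)
```

With positive rho, joint failures occur noticeably more often than the product of the marginal probabilities, which is exactly the effect the study argues community-level risk assessment must capture.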
14

Ward, Jonathan Stuart. „Efficient monitoring of large scale infrastructure as a service clouds“. Thesis, University of St Andrews, 2015. http://hdl.handle.net/10023/6974.

Abstract:
Cloud computing has had a transformative effect upon distributed systems research. It has been one of the precursors of the supposed big data revolution and has amplified the scale of software, networks, data and deployments. Monitoring tools have not, however, kept pace with these developments. Scale is central to cloud computing, but it is not its chief defining property. Elasticity, the ability of a cloud deployment to rapidly and regularly change in scale and composition, is what differentiates cloud computing from alternative paradigms of computation. Older tools originating from cluster, grid and enterprise computing predominantly lack designs which allow them to tolerate huge scale and rapid elasticity. This has led to the development of monitoring-as-a-service tools: third-party tools which abstract the intricacies of the monitoring process from the end user. These tools rely upon an economy of scale in order to deploy large numbers of VMs or servers which monitor multiple users' infrastructure. Such tools have restricted functionality and entrust critical operations to third parties, which often lack reliable SLAs and often charge significant costs. We therefore contend that an alternative is necessary. This thesis investigates the domain of cloud monitoring and proposes Varanus, a new cloud monitoring tool which eschews conventional architectures in order to outperform current tools in a cloud setting. We compare a number of aspects of performance, including monitoring latency, resource usage and elasticity tolerance. Through an investigation of current monitoring approaches, in conjunction with a thorough examination of cloud computing, we derive a design for a new tool which leverages peer-to-peer and autonomic computing in order to build a tool well suited to the requirements of cloud computing.
Through a detailed evaluation we demonstrate how this tool withstands the effects of scale and elasticity which impair current tools, and how it employs a novel architecture which reduces fiscal costs. We demonstrate that Varanus maintains a low, near 1 second monitoring latency regardless of both scale and elasticity, and does so without imparting significant computational costs. We conclude that the design embodied by this tool represents a successful alternative to current conventional and monitoring-as-a-service tools.
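The peer-to-peer aggregation style this abstract alludes to can be illustrated with push-sum gossip, a standard decentralized averaging protocol. This shows the general technique only; Varanus's actual protocol and group topology are more involved. Each node halves its (value, weight) pair, keeps one half and sends the other to a random peer; every node's value/weight ratio converges to the global mean:

```python
# Push-sum gossip: decentralized computation of the mean of per-node
# metrics (e.g. CPU load) without any central collector.
import random

def push_sum_mean(values, rounds=100, seed=7):
    rng = random.Random(seed)
    n = len(values)
    state = [(v, 1.0) for v in values]          # (value, weight) per node
    for _ in range(rounds):
        inbox = [(0.0, 0.0)] * n
        for i, (v, w) in enumerate(state):
            j = rng.randrange(n)                # random peer (may be self)
            half = (v / 2.0, w / 2.0)
            inbox[i] = (inbox[i][0] + half[0], inbox[i][1] + half[1])
            inbox[j] = (inbox[j][0] + half[0], inbox[j][1] + half[1])
        state = inbox
    return [v / w for v, w in state]            # each ratio -> global mean
```

Because total value and total weight are conserved every round, the estimates converge to the true mean at every node, which is what lets a gossip-based monitor tolerate nodes joining and leaving.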
15

Jenelius, Erik. „Large-Scale Road Network Vulnerability Analysis“. Doctoral thesis, KTH, Transport och lokaliseringsanalys, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-24952.

Abstract:
Disruptions in the transport system can have severe impacts for affected individuals, businesses and the society as a whole. In this research, vulnerability is seen as the risk of unplanned system disruptions, with a focus on large, rare events. Vulnerability analysis aims to provide decision support regarding preventive and restorative actions, ideally as an integrated part of the planning process. The thesis specifically develops the methodology for vulnerability analysis of road networks and considers the effects of suddenly increased travel times and cancelled trips following road link closures. The major part consists of model-based studies of different aspects of vulnerability, in particular the dichotomy of system efficiency and user equity, applied to the Swedish road network. We introduce the concepts of link importance as the overall impact of closing a particular link, and regional exposure as the impact for individuals in a particular region of, e.g., a worst-case or an average-case scenario (Paper I). By construction, a link is important if the normal flow across it is high and/or the alternatives to this link are considerably worse, while a traveller is exposed if a link closure along her normal route is likely and/or the best alternative is considerably worse. Using regression analysis we show that these relationships can be generalized to municipalities and counties, so that geographical variations in vulnerability can be explained by variations in network density and travel patterns (Paper II). The relationship between overall impacts and user disparities is also analyzed for single link closures and is found to be negative, i.e., the most important links also have the most equal distribution of impacts among individuals (Paper III). In addition to links' roles for transport efficiency, the thesis considers their importance as rerouting alternatives when other links are disrupted (Paper IV).
Such redundancy-important roads, often found to run parallel to highways with heavy traffic, may warrant a higher standard than their typical use would suggest. We also study the vulnerability of the road network under area-covering disruptions, representing for example flooding, heavy snowfall or forest fires (Paper V). In contrast to single link failures, the impacts of this kind of event are largely determined by the concentration of population, more precisely the travel demand within, into and out of the disrupted area itself, while the density of the road network has little influence. Finally, the thesis approaches the issue of how to value the delays incurred by network disruptions and, using an activity-based modelling approach, we illustrate that these delay costs may be considerably higher than the ordinary value of time, in particular during the first few days after the event, when travel conditions are uncertain (Paper VI).
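The link-importance concept from Paper I can be made concrete with a toy computation: take the importance of a link as the increase in total shortest-path travel time over all origin-destination pairs when the link is closed. This sketch is not the thesis model (which weights impacts by travel demand and handles cancelled trips); the three-node network and its travel times are invented for illustration:

```python
import heapq
from itertools import permutations

def shortest_path_cost(graph, src, dst):
    """Dijkstra; graph is {node: {neighbour: travel_time}}. inf if unreachable."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def total_cost(graph, nodes):
    # Sum of shortest-path travel times over all ordered OD pairs.
    return sum(shortest_path_cost(graph, a, b) for a, b in permutations(nodes, 2))

def link_importance(graph, edge):
    """Increase in total travel time when `edge` is closed in both directions."""
    u, v = edge
    closed = {a: {b: w for b, w in nbrs.items() if {a, b} != {u, v}}
              for a, nbrs in graph.items()}
    return total_cost(closed, list(graph)) - total_cost(graph, list(graph))

# A small network: a fast highway A-B with a slower parallel detour via C.
road_net = {
    "A": {"B": 1.0, "C": 2.0},
    "B": {"A": 1.0, "C": 2.0},
    "C": {"A": 2.0, "B": 2.0},
}
print(link_importance(road_net, ("A", "B")))  # closing the highway hurts most
print(link_importance(road_net, ("A", "C")))
```

The highway scores higher because its alternative is considerably worse, matching the intuition in the abstract that importance combines high flow with poor alternatives.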
QC 20101004
APA, Harvard, Vancouver, ISO and other citation styles
16

Tamaki, Tadatsugu 1965. „Effect of delivery systems on collaborative negotiations for large scale infrastructure projects“. Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/9502.

The full text of the source
Annotation:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 1999.
Includes bibliographical references (leaves 90-92).
In large-scale projects, collaboration is essential to success. Since participants from different organizations try to work together, competitive stresses exist in their relationships and, as a result, disputes or conflicts inevitably occur. Pena-Mora and Wang (1998) have developed a preliminary collaborative negotiation methodology for facilitating and mediating the negotiation of conflicts. For that methodology to be detailed enough for implementation, it needs to account for the effect of project structure and delivery method on the negotiation processes in large-scale projects. Because contracts define the temporary formal and informal relationships among the different parties in a project, and consequently the framework within which conflicts are negotiated, different delivery systems may be more or less effective in terms of conflict resolution. In this research, to study the effect of delivery systems on the negotiation of conflicts, several different project structures and delivery systems are first studied in order to identify participants' roles, responsibilities and relationships. Second, potential conflicts in relationships among project participants are examined to show that each delivery system has typical behaviour patterns that may affect how the groups interrelate in negotiations. These patterns and characteristics of the groups and their relationships make it possible to evaluate, quantitatively and qualitatively, the advantages and disadvantages of each delivery system in terms of conflict avoidance or dispute resolution. Indexes of negotiation effectiveness are then developed for each delivery system in order to quantify the advantage of implementing the collaborative negotiation methodology in a large-scale project within a particular delivery system.
by Tadatsugu Tamaki.
S.M.
APA, Harvard, Vancouver, ISO and other citation styles
17

BEDNARCIK, ABDULHADI EMMA, and MARINA VITEZ. „The Ownership Structure Dilemma and its Implications on the Transition from Small-Scale to Large-Scale Electric Road Systems“. Thesis, KTH, Industriell Management, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191130.

The full text of the source
Annotation:
This master thesis is written on behalf of KTH Royal Institute of Technology and the Swedish National Road and Transport Research Institute (VTI). The study investigates how infrastructure ownership could affect the transition from small-scale to large-scale electric road systems (ERS) and how it affects the foreseen future roles of the ERS stakeholders. The authors have used a qualitative research method, including a literature study within the areas of infrastructure transitions and infrastructure ownership and a case study on ERS. Conclusions are based on the chosen theoretical framework and the empirical findings from interviews conducted within the following stakeholder segments: agencies, electric utilities, road carriers, construction firms and road power technology firms. The transport system is a large sociotechnical system characterized by a high level of complexity, capital intensity and asset durability, which makes it difficult to accomplish radical system transitions. Political regulations and progressive environmental targets have created a demand for new solutions within the transport system. One widely discussed possible solution is ERS, which are considered beneficial from both an environmental and a socio-economic perspective. The main identified barriers to a transition to ERS are related to the complex system design. Further, the matter of how the ERS infrastructure should be owned and financed remains unclear. It will be argued that the government needs to play a key role, both as coordinator and financier, during the initial phase of an ERS expansion. In order to obtain the high level of competence that is considered vital, close cooperation between public and private stakeholders is important, as is a procurement process strongly focused on functionality.
The authors suggest that, in order to decrease system complexity and increase stakeholder cooperation, cross-sectorial system suppliers should be formed. During an initial deployment of ERS towards a national system, it is suggested to have only one cross-sectorial system supplier managing the construction and operation of ERS, in order to decrease complexity and build knowledge. As the system and technology mature and knowledge regarding ERS has been established, the authors suggest introducing competition at the cross-sectorial system supplier level nationally. There are many barriers to public-private partnerships (PPPs) during an initial expansion phase of ERS due to large investments, immature technology and the necessity of overall control of a large-scale system. In addition, early investments in a large-scale system are considered unattractive among private actors due to the high risks. However, it will be argued that PPP structures or private ownership are suitable in closed systems, where the level of complexity is lower. These systems should be subsidized by the government as they will drive innovation and stimulate development. Depending on the degree of capital intensity and governmental regulation, PPP structures could also become suitable in a national system once it has matured. The suggested stakeholder structure with cross-sectorial system suppliers facilitates a possible future PPP structure.
APA, Harvard, Vancouver, ISO and other citation styles
18

Abedi, Solaleh, Marvin Lannefeld, Elizabeth Moore and Elin Olsson. „Sustainable Physical Legacy Development via Large-Scale International Sport Events“. Thesis, Blekinge Tekniska Högskola, Institutionen för strategisk hållbar utveckling, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19634.

The full text of the source
Annotation:
In an increasingly urban society, cities pose both challenges and opportunities on the way towards a more sustainable society. This study examines the role of large-scale international sport events in sustainable development within host cities, with a focus on the physical legacies that they leave behind. The research seeks to offer guidance to enhance sustainable physical legacy development, informed by Games' strategy documents, impacts on host cities and professional opinions. The research was conducted using three key methods: an examination of key strategy documents, a literature review of academic and grey literature to record infrastructure projects, and interviews with professionals who had worked with four specific Games (Vancouver 2010, London 2012, Gold Coast 2018 and Birmingham 2022). The findings indicated that social infrastructure and transport projects were most commonly recorded and that the sport event industry operates with a Triple Bottom Line understanding of sustainability. Based on the findings, a design thinking framework was used to develop and propose guidelines. The guidelines recommend a shift to the 3-nested-dependencies model and propose the development of key skills (leadership for sustainability and flexibility) and key actions (sustainability education/communication and audit).
APA, Harvard, Vancouver, ISO and other citation styles
19

Mills-Novoa, Megan, and Rossi Taboada Hermoza. „Coexistence and Conflict: IWRM and Large-Scale Water Infrastructure Development in Piura, Peru“. WATER ALTERNATIVES ASSOC, 2017. http://hdl.handle.net/10150/624755.

The full text of the source
Annotation:
Despite the emphasis of Integrated Water Resources Management (IWRM) on 'soft' demand-side management, large-scale water infrastructure is increasingly being constructed in basins managed under an IWRM framework. While there has been substantial research on IWRM, few scholars have unpacked how IWRM and large-scale water infrastructure development coexist and conflict. Piura, Peru is an important site for understanding how IWRM and capital-intensive, concrete-heavy water infrastructure development articulate in practice. After 70 years of proposals and planning, the Regional Government of Piura began construction of the mega-irrigation project, Proyecto Especial de Irrigacion e Hidroelectrico del Alto Piura (PEIHAP) in 2013. PEIHAP, which will irrigate an additional 19,000 hectares (ha), is being realised in the wake of major reforms in the Chira-Piura River Basin, a pilot basin for the IWRM-inspired 2009 Water Resources Law. We first map the historical trajectory of PEIHAP as it mirrors the shifting political priorities of the Peruvian state. We then draw on interviews with the newly formed River Basin Council, regional government, PEIHAP, and civil society actors to understand why and how these differing water management paradigms coexist. We find that while the 2009 Water Resources Law labels large-scale irrigation infrastructure as an 'exceptional measure', this development continues to eclipse IWRM provisions of the new law. This uneasy coexistence reflects the parallel desires of the state to imbue water policy reform with international credibility via IWRM while also furthering economic development goals via large-scale water infrastructure. While the participatory mechanisms and expertise of IWRM-inspired river basin councils have not been brought to bear on the approval and construction of PEIHAP, these institutions will play a crucial role in managing the myriad resource and social conflicts that are likely to result.
APA, Harvard, Vancouver, ISO and other citation styles
20

Wheatley, Andrew B. „Enhancing crisis response capability to large-scale system failures within transportation networks“. Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/72872/1/Andrew_Wheatley_Thesis.pdf.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
21

El, Ajaltouni Elie Antoine. „Efficient dynamic load balancing techniques for large scale distributed simulations on a grid infrastructure“. Thesis, University of Ottawa (Canada), 2009. http://hdl.handle.net/10393/28209.

The full text of the source
Annotation:
Dynamic load balancing is a key factor in achieving high performance for large-scale distributed simulations on grid infrastructures. In a grid environment, the available resources and the simulation's computation and communication behavior may experience critical run-time imbalances. Consequently, an initial static partitioning should be combined with a dynamic load balancing scheme to ensure the high performance of the distributed simulation. In this work we propose a dynamic load balancing scheme for distributed simulations on a grid infrastructure. Our scheme is composed of an online network analyzing service coupled with monitoring agents and a run-time model repartitioning service. We present a hierarchical, scalable, adaptive JXTA-service-based scheme and demonstrate through simulation experiments that our proposed scheme exhibits better performance in terms of simulation execution time. Furthermore, we extend our algorithm from a local intra-cluster algorithm to a global inter-cluster algorithm, and we study the proposed global design through a formalized Discrete Event System Specification (DEVS) model.
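The kind of run-time repartitioning described here can be sketched in miniature. This is not the thesis's JXTA-based scheme; the imbalance threshold, the task-cost map and the greedy cheapest-task migration below are illustrative assumptions about how a monitor-then-repartition loop can work:

```python
def imbalance(loads):
    """Ratio of the busiest node's load to the mean; 1.0 is perfectly balanced."""
    mean = sum(loads.values()) / len(loads)
    return max(loads.values()) / mean if mean else 1.0

def rebalance(assignment, task_cost, threshold=1.25):
    """Greedy migration: while the busiest node exceeds `threshold` times the
    mean load, move its cheapest task to the least-loaded node."""
    loads = {n: sum(task_cost[t] for t in ts) for n, ts in assignment.items()}
    migrations = []
    while imbalance(loads) > threshold:
        src = max(loads, key=loads.get)
        dst = min(loads, key=loads.get)
        task = min(assignment[src], key=task_cost.get)  # cheapest task first
        assignment[src].remove(task)
        assignment[dst].append(task)
        loads[src] -= task_cost[task]
        loads[dst] += task_cost[task]
        migrations.append((task, src, dst))
    return migrations

# One overloaded node in a three-node cluster.
tasks = {"t1": 4, "t2": 4, "t3": 2, "t4": 1, "t5": 1}
placement = {"n1": ["t1", "t2", "t3"], "n2": ["t4"], "n3": ["t5"]}
moves = rebalance(placement, tasks)
print(moves)
```

A production scheme would also have to bound oscillation and weigh the cost of each migration against its benefit, which is where the online network analysis in the abstract comes in.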
APA, Harvard, Vancouver, ISO and other citation styles
22

Bosso, Doran Joseph. „Effectiveness of Contemporary Public-Private Partnerships for Large Scale Infrastructure in the United States“. Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/32032.

The full text of the source
Annotation:
Increasingly, states are relying on creative financing and asset management to maintain and improve the nation's transportation infrastructure, since budgetary challenges constrain potential options. One method of tapping into alternative sources of capital is the public-private partnership (PPP or P3). A public-private partnership is a long-term contractual agreement in which the public sector authority assigns a traditionally public responsibility (such as operations and/or financing) to the private sector participant, in hopes of achieving mutual benefit. First employed in the contemporary era in the late 1980s by California and Virginia, the public-private partnership has continued to become a more popular delivery method. A thorough review of the literature on the subject reveals both academic and institutional material covering a wide variety of P3 topics. Garvin's (2007) P3 Equilibrium Framework supplemented the current body of knowledge by building upon past research to better analyze the performance of existing and proposed PPPs or serve as a resource when developing future projects. The Framework allows the user to assess a project or program and determine its potential for producing desirable results. This research utilizes case studies to gain further insight into P3 projects and programs, as well as the performance of the original P3 Equilibrium Framework. The cases include the evolution of legislation in California and Virginia, and four projects that resulted from these programs: the State Route 91 Express Lanes, Dulles Greenway, Pocahontas Parkway, and the failed I-81 Improvement proposals. Application of the original framework to the case studies led to several refinements. The changes provide more comprehensive appraisal mechanisms and improve the applicability and consistency of the P3 Equilibrium Framework.
In addition, the concept of "tension" is introduced, which in effect is a means of describing the stress between the interested parties of a P3 arrangement. Ultimately, the revised Framework helps to structure perspectives of P3 arrangements and is underpinned by the notion that these strategies must balance the interests of society, the state, industry, and the market for ultimate success.
Master of Science
APA, Harvard, Vancouver, ISO and other citation styles
23

Fulenwider, Margaret (Margaret Ann) 1973. „Dynamic planning and control for large-scale infrastructure projects : route 3N as a case study“. Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/84788.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
24

Kraus, K. (Klemens). „Security management process in distributed, large scale high performance systems“. Doctoral thesis, Oulun yliopisto, 2014. http://urn.fi/urn:isbn:9789526206783.

The full text of the source
Annotation:
Abstract: In recent years the number of attacks on critical infrastructure has not only increased substantially, but such attacks have also shown higher sophistication. With the increasing interconnection of information systems, it is common that critical systems communicate and share information outside an organization's networks in many different scenarios. In the academic world, as well as in existing security implementations, focus is placed on individual aspects of the security process - for example, network security, legal and regulatory compliance, and privacy - without considering the process as a whole. This work focuses on closing this security gap of critical infrastructure by providing solutions for emerging attack vectors. Using design science research methods, a model was developed that seeks to combine these individual security aspects into a complete security management process (SMP). This SMP introduces, among other things, theories on security topics, recommended best practices and a security organization structure. An instantiation of the SMP model was implemented for a large-scale critical infrastructure. This work introduces the system developed, its architecture, personnel hierarchy and security-relevant workflows. The surveillance networks employed imposed specialized requirements on bandwidth utilization while preserving data security; algorithms satisfying these requirements are introduced as sub-constructs. Other focus points are the managerial aspects of sensors deployed in surveillance networks and the automatic processing of the sensor data to perform data fusion. Algorithms for both tasks were developed for the specific system but could be generalized to other instantiations. Verification was performed by empirical studies of the instantiation in two separate steps. First, the instantiation of the SMP was analyzed as a whole.
One of the main quality factors of the instantiation is incident response time, especially in complex scenarios. Consequently, response times when handling incidents were measured in different scenarios and compared to a traditional system. System usability was then verified by user acceptance tests with operators and administrators. Both studies indicate significant improvements compared to traditional security systems. Second, the sub-constructs - the communication optimizations and the data fusion algorithm - were verified, showing substantial improvements in their corresponding areas.
APA, Harvard, Vancouver, ISO and other citation styles
25

Maguire, Laura Marie Dose. „Controlling the Costs of Coordination in Large-scale Distributed Software Systems“. The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1593661547087969.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
26

Smithers, Clay. „Managing Geographic Data as an Asset: A Case Study in Large Scale Data Management“. [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002761.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
27

Arnold, Erik Paul, Peter D. Cohen, Gina Eva Flanagan, Anna Patricia Nolin and Henry J. Turner. „Framing Innovation: The Impact of the Superintendent's Technology Infrastructure Decisions on the Acceptance of Large-Scale Technology Initiatives“. Thesis, Boston College, 2014. http://hdl.handle.net/2345/3800.

The full text of the source
Annotation:
Thesis advisor: Diana C. Pullin
Thesis advisor: Vincent Cho
A multiple-case qualitative study of five school districts that had implemented various large-scale technology initiatives was conducted to describe what superintendents do to gain acceptance of those initiatives. The initiatives in the five participating districts included 1:1 District-Provided Device (DPD) laptop and tablet programs, a Bring Your Own Device (BYOD) program, and a blended program that included a district-sponsored Lease-To-Own (LTO) laptop and tablet program. Superintendents and other personnel identified by each superintendent as having a key role in the technology initiative were interviewed. Key documentation regarding each initiative was also reviewed. To bring perspective to the actions of superintendents surrounding large-scale technology initiatives, frame theory was used as the theoretical framework for the overall study. This study sought to determine the factors considered by superintendents in making decisions about technology infrastructure, the factors considered in making decisions about funding a large-scale technology initiative, and how technology infrastructure or funding decisions impacted the perceived acceptance of the initiative. The study found that the decisions made by superintendents with regard to the technology initiative can have an impact on the acceptance of the initiative by all stakeholders. Robust and reliable Wi-Fi networks, funding from multiple sources, and device capabilities and reliability were also identified as significant factors in the acceptance of large-scale technology initiatives.
Thesis (EdD) — Boston College, 2014
Submitted to: Boston College. Lynch School of Education
Discipline: Educational Leadership and Higher Education
APA, Harvard, Vancouver, ISO and other citation styles
28

Westerlund, Lovisa. „Barriers to large-scale electrification of passenger cars for a fossil independent Sweden by 2030“. Thesis, KTH, Materialvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-298423.

The full text of the source
Annotation:
Passenger cars account for a large share of Sweden's total greenhouse gas emissions and contribute to increased climate impact. In a climate policy framework previously adopted by the government, it was determined that Sweden will have no net emissions of greenhouse gases into the atmosphere by 2045. An important area of action to achieve the environmental quality objectives is the transition from internal combustion engine cars to electric cars, as these have very low emissions or no emissions at all. Despite the electric car's many advantages, there are several barriers to enabling the transition to a fossil independent passenger car fleet. This thesis aims to describe barriers to a national large-scale electrification of passenger cars from an industrial and governmental point of view. Through semi-structured expert interviews from the public and private sector, followed by thematic analysis, several themes were generated from the interview data. The results of the qualitative study indicate that there are in total six barriers to achieving one million electric cars by 2030: lack of charging infrastructure, unbalanced political instruments, uncertain technological development, high purchase price, dissemination of incorrect information, and electric car export, which can be compiled into three main barriers: lack of charging infrastructure, unbalanced political instruments and dissemination of incorrect information.
APA, Harvard, Vancouver, ISO and other citation styles
29

Gether, Kaare. „Transition to Large Scale Use of Hydrogen and Sustainable Energy Services: Choices of technology and infrastructure under path dependence, feedback and nonlinearity“. Doctoral thesis, Norwegian University of Science and Technology, Department of Energy and Process Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-213.

Full text of the source
Annotation:

We live in a world of becoming. The future is not given, but forms continuously in dynamic processes where path dependence plays a major role. There are many different possible futures. What we actually end up with is determined in part by chance and in part by the decisions we make. To make sound decisions we require models that are flexible enough to identify opportunities and to help us choose options that lead to advantageous alternatives. This way of thinking differs from traditional cost-benefit analysis that employs net present value calculations to choose on purely economic grounds, without regard to future consequences.

Time and dynamic behaviour introduce a separate perspective. There is a focus on change, and decisions acquire windows of opportunity: the right decision at the right time may lead to substantial change, while it will have little effect if too early or too late. Modelling needs to reflect this dynamic behaviour. It is the perspective of time and dynamics that leads to a focus on sustainability, and thereby the role hydrogen might play in a future energy system. The present work develops a particular understanding relevant to energy infrastructures.

Central elements of this understanding are:

- Competition

- Market preference and choice beyond costs

- Bounded rationality

- Uncertainty and risk

- Irreversibility

- Increasing returns

- Path dependence

- Feedback

- Delay

- Nonlinear behaviour

Change towards a “hydrogen economy” will involve far-reaching change away from our existing energy infrastructure. This infrastructure is viewed as a dynamic set of interacting technologies (value sequences) that provide services to end-users and uphold the required supply of energy for this, all the way from primary energy sources. The individual technologies also develop with time.

Building on this understanding and analysis, an analytical tool has emerged: the Energy Infrastructure Competition (EICOMP) model. In the model each technology is characterised by a capacity, an ordered volume, and an actually delivered volume of energy services. It is further characterised through a physical description with parameters such as efficiency, time required for extending capacity, and improvement by learning. Finally, each technology has an attractiveness, composed of costs, quality and availability, that determines the outcome of competition.
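The competition step described above can be sketched in a few lines. This is a minimal illustration, not the actual EICOMP implementation; the class fields, the attractiveness formula and the proportional-allocation rule are assumptions chosen to mirror the prose.

```python
from dataclasses import dataclass

@dataclass
class Technology:
    name: str
    cost: float          # unit cost of the delivered energy service
    quality: float       # perceived service quality, 0..1
    availability: float  # share of the ordered volume the capacity can serve, 0..1

def attractiveness(t: Technology, cost_weight: float = 1.0) -> float:
    # Higher quality and availability raise attractiveness; higher cost lowers it.
    return (t.quality * t.availability) / (1.0 + cost_weight * t.cost)

def allocate_demand(demand: float, techs: list[Technology]) -> dict[str, float]:
    # Split demand in proportion to attractiveness: a simple stand-in for
    # the outcome of competition between technologies.
    scores = {t.name: attractiveness(t) for t in techs}
    total = sum(scores.values())
    return {name: demand * s / total for name, s in scores.items()}
```

In a dynamic run, the shares computed here would feed back into learning and capacity extension, producing the reinforcing feedback and path dependence the thesis emphasises.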

Change away from our present energy infrastructure into a sustainable one based on renewable energy sources will entail substantial change in most aspects of technology, organisation and ownership. Central results from the overall work are:

- Change is dynamic and deeply influenced through situations with reinforcing feedback and path dependence. Due to this, there is a need for long-term perspectives in today's decision making: decisions have windows of opportunity and need to be made at the proper time.

- Strategies aimed at achieving change should team up with reinforcing feedback and avoid overwhelming balancing feedback that counteracts change.

- The EICOMP model is now available as a tool for further analysis of our existing energy infrastructure and its dynamic development into possible, alternative energy futures. As the model is intended for practical guidance in decisions, a central practical aim has been to allow it to be used close to where decisions are actually made, i.e. decentralised and locally in firms and in public institutions. In this respect much effort has been made to make it transparent and easy to communicate.

- The EICOMP model may be used to analyse situations of reinforcing feedback throughout the alternative energy infrastructures that we may come to have in the future.

30

Ravichandran, Pravin Karthick, und Santhosh Keerthi Balmuri. „Evaluation of different Cloud Environments and Services related to large scale organizations(Swedish Armed forces)“. Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20556.

Annotation:
Cloud Computing (CC) is one of the fastest growing computer network technologies, and many companies offer their services through the cloud. Cloud computing has many properties relative to traditional service provision, such as scalability, availability, fault tolerance and capability, and is supported by many IT companies like Google, Amazon and Salesforce.com. These IT companies have more opportunities to adapt their services to a new environment, known as cloud computing systems, and many cloud computing services are now provided by IT companies. The purpose of this thesis is to investigate which cloud environments (public, private and hybrid) and services (Infrastructure as a Service, Software as a Service, and Platform as a Service) are suitable for the Swedish Armed Forces (SWAF) with respect to performance, security, cost, flexibility and functionality. SWAF uses a private (internal) cloud for communications, where both sensitive and non-sensitive information is located. Maintaining the private cloud brings problems such as hardware maintenance, cost, and secure communication. To overcome these problems we suggest hybrid and community cloud environments and SaaS, IaaS and PaaS services for SWAF. To arrive at these suggestions, we performed a literature study and two empirical studies (a survey and interviews) with different organizations. A new cloud model is designed based on the suggested cloud environment, with separate storage spaces for sensitive and non-sensitive information, suitable services, and an effective infrastructure for sharing internal information within SWAF.
31

Bajpai, Vaibhav [Verfasser], Jürgen [Akademischer Betreuer] [Gutachter] Schönwälder, Kinga [Gutachter] Lipskoch und Turck Filip [Gutachter] De. „Understanding the Impact of Network Infrastructure Changes using Large-Scale Measurement Platforms / Vaibhav Bajpai. Betreuer: Jürgen Schönwälder. Gutachter: Jürgen Schönwälder ; Kinga Lipskoch ; Filip De Turck“. Bremen : IRC-Library, Information Resource Center der Jacobs University Bremen, 2016. http://d-nb.info/1111884455/34.

32

RAMONDETTI, LEONARDO. „The Enriched Field. Urbanising the Central Plains of China“. Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2842525.

33

Covi, Patrick. „Multi-hazard analysis of steel structures subjected to fire following earthquake“. Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/313383.

Annotation:
Fires following earthquake (FFE) have historically produced enormous post-earthquake damage and losses in terms of lives, buildings and economic costs, like the San Francisco earthquake (1906), the Kobe earthquake (1995), the Turkey earthquake (2011), the Tohoku earthquake (2011) and the Christchurch earthquakes (2011). The structural fire performance can worsen significantly because the fire acts on a structure damaged by the seismic event. On these premises, the purpose of this work is the investigation of the experimental and numerical response of structural and non-structural components of steel structures subjected to fire following earthquake (FFE) to increase the knowledge and provide a robust framework for hybrid fire testing and hybrid fire following earthquake testing. A partitioned algorithm to test a real case study with substructuring techniques was developed. The framework is developed in MATLAB and it is also based on the implementation of nonlinear finite elements to model the effects of earthquake forces and post-earthquake effects such as fire and thermal loads on structures. These elements should be able to capture geometrical and mechanical non-linearities to deal with large displacements. Two numerical validation procedures of the partitioned algorithm simulating two virtual hybrid fire testing and one virtual hybrid seismic testing were carried out. Two sets of experimental tests in two different laboratories were performed to provide valuable data for the calibration and comparison of numerical finite element case studies reproducing the conditions used in the tests. Another goal of this thesis is to develop a fire following earthquake numerical framework based on a modified version of the OpenSees software and several scripts developed in MATLAB to perform probabilistic analyses of structures subjected to FFE. A new material class, namely SteelFFEThermal, was implemented to simulate the steel behaviour subjected to FFE events.
35

Braun, Johannes. „Maintaining Security and Trust in Large Scale Public Key Infrastructures“. Phd thesis, 2015. https://tuprints.ulb.tu-darmstadt.de/4566/1/Maintaining%20Security%20and%20Trust%20in%20Large%20Scale%20Public%20Key%20Infrastructures.pdf.

Annotation:
In Public Key Infrastructures (PKIs), trusted Certification Authorities (CAs) issue public key certificates which bind public keys to the identities of their owners. This enables the authentication of public keys which is a basic prerequisite for the use of digital signatures and public key encryption. These in turn are enablers for e-business, e-government and many other applications, because they allow for secure electronic communication. With the Internet being the primary communication medium in many areas of economic, social, and political life, the so-called Web PKI plays a central role. The Web PKI denotes the global PKI which enables the authentication of the public keys of web servers within the TLS protocol and thus serves as the basis for secure communications over the Internet. However, the use of PKIs in practice bears many unsolved problems. Numerous security incidents in recent years have revealed weaknesses of the Web PKI. Because of these weaknesses, the security of Internet communication is increasingly questioned. Central issues are (1) the globally predefined trust in hundreds of CAs by browsers and operating systems. These CAs are subject to a variety of jurisdictions and differing security policies, while it is sufficient to compromise a single CA in order to break the security provided by the Web PKI. And (2) the handling of revocation of certificates. Revocation is required to invalidate certificates, e.g., if they were erroneously issued or the associated private key has been compromised. Only this can prevent their misuse by attackers. Yet, revocation is only effective if it is published in a reliable way. This turned out to be a difficult problem in the context of the Web PKI. Furthermore, the fact that often a great variety of services depends on a single CA is a serious problem. As a result, it is often almost impossible to revoke a CA's certificate. 
However, this is exactly what is necessary to prevent the malicious issuance of certificates with the CA's key if it turns out that a CA is in fact not trustworthy or the CA's systems have been compromised. In this thesis, we therefore turn to the question of how to ensure that the CAs an Internet user trusts in are actually trustworthy. Based on an in depth analysis of the Web PKI, we present solutions for the different issues. In this thesis, the feasibility and practicality of the presented solutions is of central importance. From the problem analysis, which includes the evaluation of past security incidents and previous scientific work on the matter, we derive requirements for a practical solution. For the solution of problem (1), we introduce user-centric trust management for the Web PKI. This allows to individually reduce the number of CAs a user trusts in to a fraction of the original number. This significantly reduces the risk to rely on a CA, which is actually not trustworthy. The assessment of a CA's trustworthiness is user dependent and evidence-based. In addition, the method allows to monitor the revocation status for the certificates relevant to a user. This solves the first part of problem (2). Our solution can be realized within the existing infrastructure without introducing significant overhead or usability issues. Additionally, we present an extension by online service providers. This enables to share locally collected trust information with other users and thus, to improve the necessary bootstrapping of the system. Moreover, an efficient detection mechanism for untrustworthy CAs is realized. In regard to the second part of problem (2), we present a CA revocation tolerant PKI construction based on forward secure signature schemes (FSS). Forward security means that even in case of a key compromise, previously generated signatures can still be trusted. 
This makes it possible to implement revocation mechanisms such that CA certificates can be revoked without compromising the availability of dependent web services. We describe how the Web PKI can be transitioned to a CA revocation tolerant PKI taking into account the relevant standards. The techniques developed in this thesis also enable us to address the related problem of "non-repudiation" of digital signatures. Non-repudiation is an important security goal for many e-business and e-government applications. Yet, non-repudiation is not guaranteed by standard PKIs. Current solutions, which are based on time-stamps generated by trusted third parties, are inefficient and costly. In this work, we show how non-repudiation can be made a standard property of PKIs. This makes time-stamps obsolete. The techniques presented in this thesis are evaluated in terms of practicality and performance, based on theoretical results as well as experimental analyses. Our results show that the proposed methods are superior to previous approaches. In summary, this thesis presents mechanisms which make the practical use of PKIs more secure and more efficient, and demonstrates the practicability of the presented techniques.
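The core idea of user-centric trust management — shrinking the set of trusted roots to the few a user actually relies on, and checking revocation for exactly those certificates — can be sketched as follows. This is an illustrative simplification, not the mechanism from the thesis; the CA names, the flat trust-store sets and the revocation set are invented for the example.

```python
# System-wide trust store (browsers ship hundreds of roots; four stand in here).
SYSTEM_CAS = {"CA-Alpha", "CA-Beta", "CA-Gamma", "CA-Delta"}

def is_trusted(issuing_ca: str, user_cas: set[str], revoked: set[str],
               cert_id: str) -> bool:
    # A certificate is accepted only if its issuing CA is in the user's
    # (much smaller) trust store and the certificate has not been revoked.
    if issuing_ca not in user_cas:
        return False
    return cert_id not in revoked
```

Reducing `user_cas` from all of `SYSTEM_CAS` to an evidence-based subset shrinks the attack surface: compromising any CA outside the subset no longer breaks this user's security.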
36

Han, Song doctor of computer sciences. „Networking infrastructure and data management for large-scale cyber-physical systems“. 2012. http://hdl.handle.net/2152/19581.

Annotation:
A cyber-physical system (CPS) is a system featuring a tight combination of, and coordination between, the system’s computational and physical elements. A large-scale CPS usually consists of several subsystems which are formed by networked sensors and actuators, and deployed in different locations. These subsystems interact with the physical world and execute specific monitoring and control functions. How to organize the sensors and actuators inside each subsystem and interconnect these physically separated subsystems together to achieve secure, reliable and real-time communication is a big challenge. In this thesis, we first present a TDMA-based low-power and secure real-time wireless protocol. This protocol can serve as an ideal communication infrastructure for CPS subsystems which require flexible topology control, secure and reliable communication and adjustable real-time service support. We then describe the network management techniques designed for ensuring the reliable routing and real-time services inside the subsystems and data management techniques for maintaining the quality of the sampled data from the physical world. To evaluate these proposed techniques, we built a prototype system and deployed it in different environments for performance measurement. We also present a light-weighted and scalable solution for interconnecting heterogeneous CPS subsystems together through a slim IP adaptation layer and a constrained application protocol layer. This approach makes the underlying connectivity technologies transparent to the application developers thus enables rapid application development and efficient migration among different CPS platforms. At the end of this thesis, we present a semi-autonomous robotic system called cyberphysical avatar. The cyberphysical avatar is built based on our proposed network infrastructure and data management techniques. 
By integrating recent advances in body-compliant control in robotics and neuroevolution in machine learning, the cyberphysical avatar can adjust to an unstructured environment and perform physical tasks subject to critical timing constraints while under human supervision.
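The abstract names a TDMA-based protocol as the communication infrastructure. As a generic illustration of slot-based medium access (not the thesis protocol; the round-robin slot layout is an assumption):

```python
def tdma_slot(node_id: int, n_slots: int) -> int:
    # Each node owns one slot per superframe, assigned round-robin.
    return node_id % n_slots

def may_transmit(node_id: int, time_tick: int, n_slots: int) -> bool:
    # A node transmits only when the current slot of the superframe is its
    # own, so at most one node is on the air at a time: collisions are
    # avoided by construction, at the cost of per-node latency.
    return time_tick % n_slots == tdma_slot(node_id, n_slots)
```

Real-time service support then amounts to sizing `n_slots` (and possibly granting a node several slots) so that each sensor's sampling deadline fits inside one superframe.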
37

Chen, Jui-Fa, und 陳瑞發. „The Infrastructure of Networked Simulation Environment for Large Scale Virtual World“. Thesis, 1997. http://ndltd.ncl.edu.tw/handle/56432554237988353171.

Annotation:
Doctorate
Tamkang University
Department of Computer Science and Information Engineering
86
The construction of large scale virtual worlds has been a long-stated goal of virtual environment proponents and is now a major objective of both commercial and government organizations. However, there exist major technical challenges that will require new network hardware/software architectures for distributed virtual environments. In this thesis, we have developed a Networked Simulation Environment Infrastructure (NSEI) for large scale distributed simulation. A networked virtual environment for simulation, especially one involving a large number of interacting simulation entities, requires simulation management to synchronize the active simulation entities and conduct the exercise. To solve the simulation manager addressing problem, a new protocol called the "Simulation Manager Address Resolution Protocol (SMARP)" is proposed in the Simulation Application Infrastructure of NSEI, through which simulation entities acquire the simulation manager address across the network. Throughout a simulation exercise, the state information associated with the interactions between simulation entities needs to be exchanged over the network. In the Communication Service Infrastructure of NSEI, this thesis proposes a protocol called the "Transmission Service Protocol (TSP)" to support the communication services necessary for NSEI. Finally, a new methodology for measuring software complexity is proposed to assure the quality and reliability of the TSP routing algorithm.
38

Binhomaid, Omar. „Comparison between Optimization and Heuristic Methods for Large-Scale Infrastructure Rehabilitation Programs“. Thesis, 2012. http://hdl.handle.net/10012/7043.

Annotation:
Civil infrastructure systems are the foundation of economic growth and prosperity in all nations. In recent years, infrastructure rehabilitation has been a focus of attention in North America and around the world. A large percentage of existing infrastructure assets is deteriorating due to harsh environmental conditions, insufficient capacity, and age. Ideally, an assets management system would include functions such as condition assessment, deterioration modeling, repair modeling, life-cycle cost analysis, and asset prioritization for repair along a planning horizon. While many asset management systems have been introduced in the literature, few or no studies have reported on the performance of either optimization or heuristic tools on large-scale networks of assets. This research presents an extensive comparison between heuristic and genetic-algorithm optimization methods for handling large-scale rehabilitation programs. Heuristic and optimization fund-allocation approaches have been developed for three case studies obtained from the literature related to buildings, pavements, and bridges with different life cycle cost analysis (LCCA) formulations. Large-scale networks were constructed for comparing the efficiency of heuristic and optimization approaches on large-scale rehabilitation programs. Based on extensive experiments with various case studies on different network sizes, the heuristic technique proved its practicality for handling various network sizes while maintaining the same efficiency and performance levels. The performance of the genetic algorithm optimization approach decreased with network size and model complexity. The optimization technique can provide a high performance level, given enough processing time.
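A heuristic fund-allocation pass of the kind compared in this study can be illustrated with a simple benefit-cost greedy rule. This is a sketch under assumed data, not the thesis's actual life-cycle cost model: rank assets by expected benefit per unit repair cost and fund them until the budget runs out.

```python
def greedy_allocate(assets: list[tuple[str, float, float]],
                    budget: float) -> tuple[list[str], float]:
    # assets: (name, repair_cost, expected_benefit).
    # Fund repairs in descending benefit/cost order while budget remains;
    # a genetic algorithm would instead search over funding combinations.
    ranked = sorted(assets, key=lambda a: a[2] / a[1], reverse=True)
    funded, remaining = [], budget
    for name, cost, _benefit in ranked:
        if cost <= remaining:
            funded.append(name)
            remaining -= cost
    return funded, remaining
```

The heuristic's cost is one sort plus a linear pass, which is why its runtime stays flat as the network grows, whereas a GA's search space grows with every added asset.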
39

Lin, Chia-Feng, und 林家鋒. „The studies for large scale information collection and distribution in cloud infrastructure“. Thesis, 2017. http://ndltd.ncl.edu.tw/handle/zqvusz.

Annotation:
Doctorate
National Chiao Tung University
Institute of Computer Science and Engineering
105
To share network resources among different sources, the Web Service Description Language (WSDL) was proposed. It provides an easy-to-understand interface for information exchange over the Internet, and application innovators can mash up different Web Services to create new service models. This is the origin of Web Service composition. This dissertation discusses which features are worth exploring when an information system composed from Web Services grows very large. Under this topic, several characteristics become significant, such as how to conveniently aggregate and spread information and how to manipulate large-scale data sets; more specifically, this study explores the design of cloud-based infrastructure for large-scale information collection and distribution. The video surveillance field is a suitable example of the proposed problem. On the information aggregation side, the number of simultaneously deployed terminal devices might exceed thousands of units; on the information distribution side, cameras deployed in hotspot areas might have large numbers of people watching at the same time. Moreover, although there are commonly used device-interoperability Web Service protocols in the surveillance industry, large-scale management capabilities such as management interface abstraction and fault-tolerance control across multiple recording devices are lacking. Hence, this study takes the video surveillance application as an example to design an appropriate Web Service aggregation interface providing a unified access entrance and fault-tolerance functionality. The proposed Web Service complements the shortcomings of previous research and existing industry standards.
40

Adeleke, Oluwalani Aeoluwa. „A metadata service for an infrastructure of large scale distributed scientific datasets“. Thesis, 2014.

Annotation:
In this constantly growing, information-technology-driven era, data migration and replication pose a serious bottleneck in distributed database infrastructure environments. For large heterogeneous environments with domains such as geospatial science and high energy physics, where large arrays of scientific data are involved, diverse challenges are encountered with respect to dataset identification, location services, and efficient retrieval of information. These challenges include locating data sources, identifying effective transfer routes, and replication, to mention a few. As distributed systems aimed at constant delivery of data to the point of query origination continue to expand in size and functionality, efficient replication and data retrieval systems have become increasingly important and relevant. One such system is an infrastructure for large-scale distributed scientific data management. Several data management systems have been developed to help manage these fast-growing datasets and their metadata. However, little work has been done on allowing cross-communication and data sharing between these different dataset management systems in a distributed, heterogeneous environment. This dissertation addresses this problem, focusing particularly on metadata and the provenance service associated with it. We present the Virtual Unified Metadata architecture to establish communication between remote sites within a distributed heterogeneous environment using a client-server model. The system provides a framework that allows heterogeneous metadata services to communicate and share metadata and datasets through the implementation of a communication interface. It allows for metadata discovery and dataset identification by enabling remote queries between heterogeneous metadata repositories.
The significant contributions of this system include: the design and implementation of a client/server-based remote metadata query system for scientific datasets within distributed heterogeneous dataset repositories; the implementation of a caching mechanism for optimizing system performance; and an analysis of the quality of service with respect to correct dataset identification, estimation of migration and replication time frames, and cache performance.
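The caching mechanism mentioned among the contributions can be illustrated with a small least-recently-used cache placed in front of a remote metadata lookup. This is an illustrative sketch, not the dissertation's implementation; the class name and the fetch callback are hypothetical.

```python
from collections import OrderedDict

class MetadataCache:
    # LRU cache in front of a (slow) remote metadata query.
    def __init__(self, fetch, capacity: int = 128):
        self.fetch = fetch          # callback: dataset_id -> metadata
        self.capacity = capacity
        self._cache = OrderedDict()
        self.hits = self.misses = 0

    def get(self, dataset_id):
        if dataset_id in self._cache:
            self._cache.move_to_end(dataset_id)   # mark as recently used
            self.hits += 1
            return self._cache[dataset_id]
        self.misses += 1
        value = self.fetch(dataset_id)            # remote round-trip
        self._cache[dataset_id] = value
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)       # evict least recently used
        return value
```

The hit/miss counters correspond directly to the "cache performance" part of the quality-of-service analysis: the hit rate bounds how many remote round-trips the cache saves.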
41

Chang, Ching-Lien, und 張敬廉. „Applications of Module-Based Networks for Public Agencies Administrating Large-Scale Infrastructure Projects“. Thesis, 2001. http://ndltd.ncl.edu.tw/handle/96274200893173082049.

Annotation:
Master's
National Chiao Tung University
Department of Civil Engineering
89
Large-scale infrastructure projects such as expressways are often distributed over several regions and need to be broken down into several tendering packages. In order to meet the publicly announced project completion date, good construction schedule administration is mandatory. Public agencies that administer such projects manage project packages that need to be integrated as a whole. The project packages are very similar in work nature but are carried out by different contractors with different scheduling practices. Standardization provides a foundation for more efficient and effective schedule integration; however, it is hard to implement considering the number of A/Es and contractors involved. This research proposes a three-stage standardization implementation framework that utilizes the concept of modularization. A set of activity network modules was developed for expressway projects covering the construction of roads, bridges, and tunnels. Two computer systems were also developed: one to help contractors use these modules to create the primary part of their schedules, and one to help the owner or A/E review contractors' submitted schedules. The modularization approach with appropriate facilitating software tools also appeared to increase contractors' acceptance of standardization.
42

Ngole, Etonde E. „Simulation and visualization of large scale distributed health system infrastructure of developing countries“. Thesis, 2014. http://hdl.handle.net/10539/15515.

Annotation:
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2014.
Developing countries are faced with a number of health-care challenges: long waiting hours of patients in long queues is just one of such challenges. The key cause of this has been identi ed to be a lack, or uneven distribution human resources among health facilities. This sets the stage for poor and ine cient delivery of quality primary health care, especially to the rural dweller as they usually have a fewer medical professionals in their area. The impact of this a ects not only the state of health of the population, but also the economy, and population growth of the a ected community. To try and address this, the introduction of Information Technology (IT) into health-care has been suggested by many health governing bodies like theWorld Health Organization (WHO) and other authorities in health care. The ability of IT to go beyond physical boarders and extend professional care has been the key characteristic that supports its integration into health-care. This has eventually lead to the development of Health Information Systems (HIS) that support remote consultation. Despite all these innovations, there is still evidence of poor and ine cient delivery of services at health facilities in many developing countries. We propose a completely di erent approach of addressing the problem of poor and ine cient delivery of health-care services. The key challenge we address is that of lengthy queues and long waiting hours of patients in health facilities. To cut down on the use of nancial resources (whose lack or shortage is a major challenge in developing economies), we propose an approach that focuses on the routing of patients within and between health facilities. The hypothesis for this study is based on a suggestion that alterations to the routing of patients would have an e ect on the identi ed challenges we seek to address in this study. To support this claim, a simulator of the health system was built using the OMNET++ simulation package. 
Test runs for different scenarios were analyzed, and the simulation results were compared against controls to validate the functioning of the simulator. Once validated, the simulator was used to test the hypothesis. With data from the different health-care facilities used as input parameters, various simulation runs were executed to mimic different routing scenarios, and their results were then analyzed. The analysis revealed that: in a case where patients were not given the liberty to consult a doctor of their choice but instead consulted the next available doctor/specialist, the average time spent by patients dropped by 26%. It also revealed that moving a receptionist from the first stage upon patient entry into the health facility reduced the average patient lifetime by 85%; this was found to be a consequence of a 28% drop in queue length. On the other hand, the total removal of a general receptionist increased patient lifetime in a facility by 30.19%. The study also revealed that if specialists were deployed to certain health facilities rather than having referred patients come to them in the urban health facilities, the patient population in the urban health centers would drop by 32%. This also produced a drop in patient waiting time in the rural health centers as more doctors became available (a reduced patient-to-doctor ratio in rural health facilities). The results support our hypothesis: alterations to the way patients are routed do have an effect on the queue lengths and total waiting time of patients in the health system.
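The pooled-versus-dedicated queueing effect this simulator measures can be illustrated outside OMNeT++. The sketch below (Python rather than the thesis's OMNeT++ model; all arrival and service rates are invented for illustration) compares mean patient waiting time when patients take the next available doctor against each patient waiting in one chosen doctor's own queue:

```python
import heapq
import random

random.seed(42)

C = 3                        # number of doctors on duty (assumed)
N = 2000                     # patients simulated
arrivals, t = [], 0.0
for _ in range(N):
    t += random.expovariate(1 / 2.0)        # mean inter-arrival: 2 min
    arrivals.append(t)
services = [random.expovariate(1 / 5.0) for _ in range(N)]  # mean consult: 5 min

def pooled_waits(arrivals, services, c):
    """One shared queue: each patient sees whichever doctor frees up first."""
    free = [0.0] * c                         # min-heap of doctor free times
    heapq.heapify(free)
    waits = []
    for a, s in zip(arrivals, services):
        f = heapq.heappop(free)              # earliest-free doctor
        start = max(a, f)
        waits.append(start - a)
        heapq.heappush(free, start + s)
    return waits

def dedicated_waits(arrivals, services, c):
    """Each patient insists on one (randomly chosen) doctor's own queue."""
    free = [0.0] * c
    waits = []
    for a, s in zip(arrivals, services):
        d = random.randrange(c)              # patient picks a doctor
        start = max(a, free[d])
        waits.append(start - a)
        free[d] = start + s
    return waits

pooled = sum(pooled_waits(arrivals, services, C)) / N
dedicated = sum(dedicated_waits(arrivals, services, C)) / N
print(f"mean wait, pooled queue:     {pooled:.1f} min")
print(f"mean wait, dedicated queues: {dedicated:.1f} min")
```

Under moderate-to-heavy load the pooled queue yields markedly shorter mean waits, which is the direction of the 26% reduction reported in the abstract.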
APA, Harvard, Vancouver, ISO, and other citation styles
43

„Self-governance From Above: Principles of Polycentric Governance in Large-Scale Water Infrastructure“. Doctoral diss., 2020. http://hdl.handle.net/2286/R.I.63081.

Full text of the source
Annotation:
abstract: Governance of complex social-ecological systems is partly characterized by processes of autonomous decision making and voluntary mutual adjustment by multiple authorities with overlapping jurisdictions. From a policy perspective, understanding these polycentric processes could provide valuable insight for solving environmental problems. Paradoxically, however, polycentric governance theory seems to proscribe conventional policy applications: the logic of polycentricity cautions against prescriptive, top-down interventions. Water resources governance, and large-scale water infrastructure systems in particular, offer a paradigm for interpretation of what Vincent Ostrom called the “counterintentional and counterintuitive patterns” of polycentricity. Nearly a century of philosophical inquiry and a generation of governance research into polycentricity, and the overarching institutional frameworks within which polycentric processes operate, provide context for this study. Based on a historically- and theoretically-grounded understanding of water systems as a polycentric paradigm, I argue for a realist approach to operationalizing principles of polycentricity for contribution to policy discourses. Specifically, this requires an actor-centered approach that mobilizes subjective experiences, knowledge, and narratives about contingent decision making. I use the case of large-scale water infrastructure in Arizona to explore a novel approach to measurement of polycentric decision making contexts. Through semi-structured interviews with water operators in the Arizona water system, this research explores how qualitative and quantitative comparisons can be made between polycentric governance constructs as they are understood by institutional scholars, experienced by actors in polycentric systems, and represented in public policy discourses. 
I introduce several measures of conditions of polycentricity at a subjective level, including the extents to which actors: experience variety in the work assigned to them; define strong operational priorities; perceive their priorities to be shared by others; identify discrete, critical decisions in the course of their work responsibilities; recall information and action dependencies in their decision making processes; relate communicating their decisions to other dependent decision makers; describe constraints in their process; and evaluate their own independence to make decisions. I use configurational analysis and narrative analysis to show how decision making and governance are understood by operators within the Arizona water system. These results contribute to practical approaches for diagnosis of polycentric systems and theory-building in self-governance.
Dissertation/Thesis
Doctoral Dissertation Environmental Social Science 2020
APA, Harvard, Vancouver, ISO, and other citation styles
44

Paciulli, Melissa. „Developing an Evaluation Approach to Assess Large Scale ITS Infrastructure Improvements: I-91 Project“. 2009. https://scholarworks.umass.edu/theses/362.

Full text of the source
Annotation:
Intelligent Transportation Systems (ITS) can include multiple technologies and applications combined to improve the overall efficiency and effectiveness of the transportation system or network. These applications are deployed with the anticipation that the project goals and objectives established by multiple stakeholders will be achieved. Once a system is deployed, the project goals and objectives should be evaluated. The evaluation can provide both quantitative and qualitative feedback on the impacts associated with the investment in designing, building, and implementing these systems. This research develops a methodology for evaluating large-scale ITS infrastructure projects, using the Interstate 91 (I-91) ITS Project as a case study. The methodology includes a review of the literature; a clear definition of project goals, objectives, and intended outcomes; the development of hypotheses for project outcomes; specific measures of effectiveness; pre- and post-deployment data collection methods; and criteria to measure the success rate of achieving each intended objective. The following recommendations should be considered by the I-91 ITS Project Team as next steps in conducting an ITS evaluation: identify and prioritize the goal and objective areas, develop a multi-phase evaluation approach, identify existing sources of pre-deployment data, identify missing data requirements, and document the existing communication protocol prior to deployment. Such a large-scale evaluation requires an extensive level of effort, and priority should be given to developing a multi-phase approach. This research may also be used toward the development of an Evaluation Plan, which is recommended as a component of the six-step process outlined in the United States Department of Transportation's Evaluation Guidelines.
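The core evaluation step this methodology describes, comparing pre- and post-deployment measures of effectiveness against target thresholds, can be sketched as follows (the MOE names, values, and targets below are hypothetical, not data from the I-91 project):

```python
# Hypothetical measures of effectiveness (MOEs): values in minutes,
# targets as percent change required to count the objective as met.
moes = {
    "avg_travel_time_min":    {"pre": 24.0, "post": 21.0, "target_pct_change": -10.0},
    "incident_clearance_min": {"pre": 45.0, "post": 38.0, "target_pct_change": -15.0},
}

def evaluate(moes):
    """Compare observed percent change against each objective's target."""
    results = {}
    for name, m in moes.items():
        change = 100.0 * (m["post"] - m["pre"]) / m["pre"]
        results[name] = {
            "pct_change": round(change, 1),
            "met": change <= m["target_pct_change"],  # reductions are negative
        }
    return results

for name, r in evaluate(moes).items():
    print(name, r)
```

The same pattern scales to a multi-phase evaluation by re-running it per phase as post-deployment data becomes available.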
APA, Harvard, Vancouver, ISO, and other citation styles
45

Thwala, Wellington Didibhuku. „A critical evaluation of pre- and post-1994 large-scale development programmes in South Africa with particular focus on employment creation“. Thesis, 2010. http://hdl.handle.net/10539/8736.

Full text of the source
Annotation:
In South Africa, the levels of unemployment and poverty are extremely high; these are two of South Africa’s most pressing problems. Over the past 28 years several major programmes have been initiated in South Africa to counter unemployment and poverty. Between 1980 and 1994, the former government spent billions of Rands on large-scale development programmes with the stated objective of using labour-intensive methods in the provision of physical infrastructure, to create employment and alleviate poverty. However, this did not solve the unemployment problem. Since 1994 the African National Congress (ANC) government has implemented large-scale programmes with objectives similar to those before 1994. After an analysis of the theoretical premises and implementation of labour-intensive public works programmes in Africa, the thesis critically evaluated several pre- and post-1994 large-scale development programmes in South Africa. Major conclusions are that very little sustainable employment was created, that there was no long-term programme approach to addressing poverty alleviation, and that lessons which could have been learnt before 1994 were not applied in the post-1994 period: shortcomings in the planning and implementation of large-scale development programmes in South Africa still exist. Based on the research, the author has derived a six-phase Programme Management Framework for Development Programmes. This framework embodies a long-term programme management approach to the planning and implementation of large-scale, labour-intensive development programmes.
APA, Harvard, Vancouver, ISO, and other citation styles
46

林佑庭. „A Study on Large-scale Infrastructure BOT Project Financing—Exemplified by Taiwan High Speed Rail Project“. Thesis, 2009. http://ndltd.ncl.edu.tw/handle/09295572205373541235.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
47

駱東明. „The Study for the Landscape Ecology Affected Large-Scale Infrastructure--A case of Yunlin University of Science and Technology“. Thesis, 2004. http://ndltd.ncl.edu.tw/handle/10605502122771994050.

Full text of the source
Annotation:
Master’s thesis
National Changhua University of Education
Department of Geography
92
Urban areas have, over time, become important spaces for economic and social development. A region’s central city often expands into its border areas through measures such as the siting of infrastructure (e.g. hospitals, government buildings, parks, schools), new roads, and urban-planning routes, which drive the use of the surrounding land, reshape the urban landscape, and readjust the urban ecosystem. This study analyzes how the siting of infrastructure affects spatial structure through the lens of landscape ecology theory. In-depth interviews were held with residents and experts, who were also invited to complete questionnaires. Finally, the results are analyzed and synthesized to serve as a basis for future urban planning.
APA, Harvard, Vancouver, ISO, and other citation styles
48

Lee, Pu, und 李樸. „A Streamlined Bug Report Mechanism for Large-scale IT Infrastructure based on Multi-source Log Aggregation and Open Data“. Thesis, 2018. http://ndltd.ncl.edu.tw/handle/9g5aqu.

Full text of the source
Annotation:
Master’s thesis
National Chiao Tung University
Institute of Computer Science and Engineering
106
Today, as more and more servers are installed in data and compute centers, it has become increasingly important for IT maintainers to collect and analyze logs: not only the basic system logs (syslog), but also logs generated at the application level. These logs can help us identify the real cause of an event. In this thesis, we build a streamlined bug-report mechanism for large-scale IT infrastructure based on multi-source log aggregation, and provide open data based on it. We deploy the Elastic Stack at the Computer Center of the CS department of NCTU to implement a stable log collection and analysis platform, and finally build an Open Data system on top of it.
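The thesis delegates collection and indexing to the Elastic Stack; as a minimal stdlib sketch of the aggregation step only, the code below parses syslog-formatted lines gathered from several sources and counts events per host and program (all hostnames, program names, and messages are invented for illustration):

```python
import re
from collections import Counter

# Hypothetical sample lines in classic syslog format.
LINES = [
    "Mar 12 10:01:02 web01 sshd[311]: Failed password for root",
    "Mar 12 10:01:05 web01 nginx[87]: upstream timed out",
    "Mar 12 10:01:07 db01 mysqld[42]: Aborted connection",
    "Mar 12 10:01:09 web01 sshd[311]: Failed password for admin",
]

# Timestamp, host, program[pid]: message
SYSLOG = re.compile(
    r"^(?P<ts>\w{3}\s+\d+\s[\d:]+)\s(?P<host>\S+)\s(?P<prog>[\w.-]+)\[\d+\]:\s(?P<msg>.*)$"
)

def aggregate(lines):
    """Count events per (host, program) pair across all log sources."""
    counts = Counter()
    for line in lines:
        m = SYSLOG.match(line)
        if m:
            counts[(m["host"], m["prog"])] += 1
    return counts

counts = aggregate(LINES)
print(counts.most_common())  # ('web01', 'sshd') leads with 2 events
```

In the Elastic Stack the same role is played by ingest pipelines and aggregation queries; the sketch just makes the multi-source counting idea concrete.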
APA, Harvard, Vancouver, ISO, and other citation styles
49

MONTECCHI, LEONARDO. „A Methodology and Framework for Model-Driven Dependability Analysis of Critical Embedded Systems and Directions Towards Systems of Systems“. Doctoral thesis, 2013. http://hdl.handle.net/2158/851697.

Full text of the source
Annotation:
In different domains, engineers have long used models to assess the feasibility of system designs; over other evaluation techniques, modeling has the key advantage of not exercising a real instance of the system, which may be costly, dangerous, or simply unfeasible (e.g., if the system is still under design). In the development of critical systems, modeling is most often employed as a fault forecasting technique, since it can be used to estimate the degree to which a given design provides the required dependability attributes, i.e., to perform quantitative dependability analysis. More generally, models are employed in the evaluation of the Quality of Service (QoS) provided by the system, in the form of dependability, performance, or performability metrics. From an industrial perspective, modeling is also a valuable tool in the Verification & Validation (V&V) process, either as a support to the process itself (e.g., FTA) or as a means to verify specific quantitative or qualitative requirements. Modern computing systems have become very different from what they used to be in the past: their scale is growing, and they are becoming massively distributed, interconnected, and evolving. Moreover, a shift towards the use of off-the-shelf components is becoming evident in several domains. Such an increase in complexity makes model-based assessment a difficult and time-consuming task. In recent years, system development has increasingly adopted the Component-Based Development (CBD) and Model-Driven Engineering (MDE) philosophies as a way to reduce complexity in system design and evaluation. CBD refers to the established practice of building a system out of reusable “black-box” components, while MDE refers to the systematic use of models as primary artefacts throughout the engineering lifecycle.
Engineering languages like UML, BPEL, AADL, etc., not only allow a reasonably unambiguous specification of designs, but also serve as the input for subsequent development steps such as code generation, formal verification, and testing. One of the core technologies supporting model-driven engineering is model transformation. Transformations can be used to refine models, apply design patterns, and project design models into various mathematical analysis domains in a precise and automated way. In recent years, model-driven engineering approaches have also been used extensively for the analysis of the extra-functional properties of systems. To this purpose, language extensions were introduced and utilized to capture the required extra-functional concerns. Although several approaches propose model transformations for dependability analysis, there is still no standard approach for performing dependability analysis in an MDE environment. Indeed, when targeting critical embedded systems, the lack of support for dependability attributes, and extra-functional attributes in general, is one of the most recognized weaknesses of UML-based languages. Also, most approaches have been defined as extensions to a "general" system development process, often leaving the actual process unspecified. Similarly, supporting tools are typically detached from the design environment and assume as input a model satisfying certain constraints. While in principle such an approach avoids binding to specific development methodologies, in practice it introduces a gap between the design of the functional system model, its enrichment with dependability information, and the subsequent analysis. Finally, the specification of properties out of a component's context, which typically holds for functional properties, is much less understood for non-functional properties.
The work in this thesis elaborates on the combined application of the CBD and MDE philosophies and technologies, with the aim of automating dependability analysis of modern computing systems. A considerable part of the work described in this thesis has been carried out in the context of the ARTEMIS-JU “CHESS” project, which aimed at defining, developing, and assessing a methodology for the component-based design and development of embedded systems using model-driven engineering techniques. The work in this thesis defines and realizes an extension to the CHESS framework for the automated evaluation of quantitative dependability properties. The extension consists of: i) a set of UML language extensions, collectively referred to as DEP-UML, for modeling dependability properties relevant for quantitative analysis; ii) a set of model-transformation rules for the automated generation of Stochastic Petri Net (SPN) models from system designs enriched with DEP-UML; and iii) a model-transformation tool, realized as a plugin for the Eclipse platform, concretely implementing the approach. After introducing the approach, we detail its application with two case studies. While for embedded systems it is often possible, or even mandatory, to follow and control the whole design and development process, the same does not hold for other classes of systems and infrastructures. In particular, large-scale complex systems do not fit well into the paradigm proposed by the CHESS project, and alternative approaches are therefore needed. Following this observation, we then elaborate on a workflow for applying MDE approaches to support the modeling of large-scale complex systems. The workflow is based on a particular modeling technique and a supporting domain-specific language, TMDL, which is defined in this thesis.
After introducing a motivating example, the thesis details the workflow, introduces the TMDL language, describes a prototype realization of the approach, and describes its application to two examples. We then conclude with a discussion and a future view on how the contribution of this thesis can be extended to a comprehensive approach for dependability and performability evaluation in a "System of Systems" context. In more detail, this dissertation is organized as follows. Chapter 1 introduces the context of the work, describing the main concepts related to dependability and dependability evaluation, with a focus on model-based assessment. The foundations of CBD and MDE approaches, the role of the UML language, and the main related work are discussed in Chapter 2. Chapter 3 describes the CHESS project and introduces the language extensions that have been defined to support dependability analysis. Moreover, the chapter details the entire process that led us to these extensions, including the elicitation of language requirements and the evaluation of existing languages in the literature. The model-transformation algorithms for the generation of Stochastic Petri Nets are described in Chapter 4, while the architecture adopted for the concrete realization of the analysis plugin is described in Chapter 5. Chapter 6 describes the application of our approach to two case studies: a multimedia processing workstation and a fire detection system. The need for a complementary approach for the evaluation of large-scale complex systems is discussed in Chapter 7, with the aid of a motivating example of a distributed multimedia application. Chapter 8 describes our approach for the automated assembly of large dependability models through model transformation. The thesis then concludes with an outlook on the relevance of the work presented here towards a System of Systems approach to the evaluation of large-scale complex systems.
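The DEP-UML-to-SPN transformation is far richer than can be shown here, but the underlying idea, projecting dependability annotations on a design into a quantitative analysis result, can be illustrated with a deliberately simplified sketch. Component names and MTTF/MTTR values below are invented, and a closed-form series-system availability formula stands in for the Stochastic Petri Net analysis:

```python
# Hypothetical component annotations (hours), standing in for the
# dependability information DEP-UML attaches to a component-based design.
components = [
    {"name": "sensor",     "mttf": 10_000.0, "mttr": 8.0},
    {"name": "controller", "mttf": 50_000.0, "mttr": 24.0},
    {"name": "actuator",   "mttf": 20_000.0, "mttr": 12.0},
]

def steady_state_availability(c):
    """Classic repairable-component result: A = MTTF / (MTTF + MTTR)."""
    return c["mttf"] / (c["mttf"] + c["mttr"])

def series_availability(components):
    """A series system is up only if every component is up."""
    a = 1.0
    for c in components:
        a *= steady_state_availability(c)
    return a

print(f"system availability: {series_availability(components):.6f}")
```

The point of the model-transformation approach is that such analysis models are generated automatically from the annotated design, rather than written by hand as here.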
APA, Harvard, Vancouver, ISO, and other citation styles
