Doctoral dissertations on the topic "Large Scale Systems"

Follow this link to see other types of publications on this topic: Large Scale Systems.

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


Consult the top 50 doctoral dissertations for your research on the topic "Large Scale Systems".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read the abstract of the work online, provided the relevant details are available in the work's metadata.

Browse doctoral dissertations from many different fields of study and compile a suitable bibliography.

1

Nandy, Sagnik. "Large scale autonomous computing systems". Diss., Connected to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2005. http://wwwlib.umi.com/cr/ucsd/fullcit?p3190006.

Full text source
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2005.
Title from first page of PDF file (viewed March 7, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 120-128).
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Aga, Svein. "System Recovery in Large-Scale Distributed Storage Systems". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9724.

Full text source
Abstract:

This report aims to describe and improve a system recovery process in large-scale storage systems. Inevitably, a recovery process results in the system being loaded with internal replication of data and will extensively utilize several storage nodes. Such internal load can be categorized and generalized into a maintenance workload class. Obviously, a storage system will also have external clients which introduce load into the system. This can be users altering their data, uploading new content, etc. Load generated by clients can be generalized into a production workload class. When both workload classes are actively present in a system, i.e. the system is recovering while users are simultaneously accessing their data, there will be competition for system resources between the different workload classes. The storage system must ensure Quality of Service (QoS) for each workload class so that both are guaranteed system resources. We have created Dynamic Tree with Observed Metrics (DTOM), an algorithm designed to gracefully throttle resources between multiple different workload classes. DTOM can be used to enforce and ensure QoS for the variety of workloads in a system. Experimental results demonstrate that DTOM outperforms another well-known scheduling algorithm. In addition, we have designed a recovery model which aims to improve the handling of critical maintenance workload. Although the model is primarily intended for system recovery, it can also be applied in many other contexts.
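
The central mechanism, sharing storage resources between a maintenance (recovery) workload class and a production (client) workload class while guaranteeing each a floor, can be illustrated with a short sketch. The Python fragment below is a generic weighted-sharing illustration under assumed demands, floors and weights; it is not the DTOM algorithm itself, whose internals the abstract does not give.

```python
# Illustrative sketch only: generic weighted sharing between two workload
# classes with guaranteed minimum shares (QoS floors). This is NOT DTOM;
# the class names, demands, floors and weights are hypothetical.

def allocate_iops(total_iops, demands, floors, weights):
    """Split total_iops among classes: honor each class's QoS floor first,
    then divide the remaining budget in proportion to the class weights."""
    alloc = {c: min(demands[c], floors[c] * total_iops) for c in demands}
    remaining = total_iops - sum(alloc.values())
    hungry = [c for c in demands if demands[c] > alloc[c]]
    total_w = sum(weights[c] for c in hungry)
    for c in hungry:
        if remaining > 0 and total_w > 0:
            extra = remaining * weights[c] / total_w
            alloc[c] += min(extra, demands[c] - alloc[c])
    return alloc

# Recovery traffic competes with client traffic for a 10k IOPS budget.
print(allocate_iops(
    total_iops=10_000,
    demands={"maintenance": 8_000, "production": 6_000},
    floors={"maintenance": 0.3, "production": 0.5},
    weights={"maintenance": 1, "production": 2},
))
```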

Styles: APA, Harvard, Vancouver, ISO, etc.
3

El-Makadema, Ahmed Talal. "Large scale broadband antenna array systems". Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/large-scale-broadband-antenna-array-systems(d2586bcf-4d2f-4046-98bf-90860b52565b).html.

Full text source
Abstract:
Broadband antenna arrays have become increasingly popular for various imaging applications, such as radio telescopes and radar, where high sensitivity and resolution are required. High sensitivity requires the development of large-scale broadband arrays capable of imaging distant sources at many different wavelengths, in addition to overcoming noise and jamming signals. The design of large-scale broadband antenna arrays requires a large number of antennas, increasing the cost and complexity of the overall system. Moreover, noise sources often vary, depending on their wavelengths and angular locations. This increases the overall design complexity, particularly for broadband applications where the performance depends not only on the required bandwidth, but also on the frequency band. This thesis provides a study of broadband antenna array systems for large-scale applications. The study investigates different tradeoffs associated with designing such systems and derives a novel design approach to optimize both their cost and performance for a wide range of applications. In addition, the thesis includes measurements of a suitable array to validate the computational predictions. Moreover, the thesis also demonstrates how this study can be utilized to optimize a broadband antenna array system suitable for a low-frequency radio telescope.
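
The link between element count and resolution that drives the cost-performance tradeoff above has a standard textbook illustration: the array factor of a uniform linear array, whose main beam narrows as elements are added. The sketch below assumes uniform excitation and half-wavelength spacing, with arbitrary element counts; it is a generic illustration, not a computation from the thesis.

```python
# Illustrative sketch: array factor of an N-element uniform linear array.
# Uniform excitation, half-wavelength spacing; element counts are arbitrary.
import numpy as np

def array_factor(n_elements, spacing_wl, theta):
    """Normalized array-factor magnitude at angles theta (radians)."""
    psi = 2 * np.pi * spacing_wl * np.cos(theta)   # inter-element phase shift
    af = np.abs(np.exp(1j * np.outer(np.arange(n_elements), psi)).sum(axis=0))
    return af / n_elements                          # peak normalized to 1

theta = np.linspace(0.0, np.pi, 18001)
for n in (16, 64, 256):
    af = array_factor(n, 0.5, theta)
    # The set of angles above the -3 dB level shrinks roughly as 1/N,
    # i.e. adding elements buys angular resolution.
    main_beam = theta[af >= 1 / np.sqrt(2)]
    width = np.degrees(main_beam.max() - main_beam.min())
    print("%4d elements -> half-power beamwidth ~ %.2f deg" % (n, width))
```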
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Sales, Pardo Marta. "Large Scale Excitations in Disordered Systems". Doctoral thesis, Universitat de Barcelona, 2002. http://hdl.handle.net/10803/1786.

Full text source
Abstract:
Disorder is present in many systems in nature and in many different forms: for instance, the dislocations of a crystal lattice, or the randomness of the interaction between magnetic moments. One of the most studied examples is that of spin glasses, because they are simple to model but keep most of the very complex features that many disordered systems have. The frustration of the ground-state configuration is responsible for the existence of a gapless spectrum of excitations and a rugged and complex free-energy landscape, which bring about a very slow relaxation towards the equilibrium state. The main concern of the thesis has been to study the properties of the typical excitations, i.e. those excitations that are large and contribute dominantly to the physics in the frozen phase.
The existence of these large excitations brings about large fluctuations of the order parameter, and we have shown in this thesis that this feature can be exploited to study the transition of any spin glass model. Moreover, we have shown that the information about these excitations can be extracted from the statistics of the lowest-lying excitations. This is because, due to the random nature of spin glasses, the physics obtained from averaging over the whole spectrum of excitations of an infinite sample is equivalent to averaging over many finite systems where only the ground state and the first excitation are considered. The novelty of this approach is that we do not need to make any assumption about what typical excitations are like, because we can compute them exactly using numerical methods. Finally, we have investigated the dynamics and, more specifically, the link between the problem of chaos and the rejuvenation phenomena observed experimentally. Rejuvenation means that when lowering the temperature the aging process restarts again from scratch. This is potentially linked with the chaos assumption, which states that equilibrium configurations at two different temperatures are not correlated. Chaos is a large-scale phenomenon, possible if entropy fluctuations are large. However, in this thesis we have shown that the response to temperature changes can be large in the absence of chaos close to a localization transition where the Boltzmann weight condenses into a few states. This has been observed in simulations of the Sinai model, in which this localization is realized dynamically. In this model, since at low temperatures the system gets trapped in the very deep states, the dynamics is only local, so that only small excitations contribute to the rejuvenation signal that we have been able to observe. Thus, in agreement with the hierarchical picture, rejuvenation is possible even in the absence of chaos and reflects the restart of the aging process at small length scales.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Djurfeldt, Mikael. "Large-scale simulation of neuronal systems". Doctoral thesis, KTH, Beräkningsbiologi, CB, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-10616.

Full text source
Abstract:
Biologically detailed computational models of large-scale neuronal networks have now become feasible due to the development of increasingly powerful massively parallel supercomputers. We report here on the methodology involved in the simulation of very large neuronal networks. Using conductance-based multicompartmental model neurons based on the Hodgkin-Huxley formalism, we simulate a neuronal network model of layers II/III of the neocortex. These simulations, the largest of this type ever performed, were made on the Blue Gene/L supercomputer and comprised up to 8 million neurons and 4 billion synapses. Such model sizes correspond to the cortex of a small mammal. After a series of optimization steps, performance measurements show linear scaling behavior both on the Blue Gene/L supercomputer and on a more conventional cluster computer. Results from the simulation of a model based on a more abstract formalism, and of considerably larger size, also show linear scaling behavior on both computer architectures.
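
For context, the Hodgkin-Huxley formalism mentioned above describes a neuron's membrane voltage through voltage-gated ionic currents. The sketch below integrates a single compartment with the classic textbook constants using forward Euler; it is only a toy illustration with a hypothetical injected current, far from the multicompartmental network models and parallel machinery the thesis describes.

```python
# Minimal single-compartment Hodgkin-Huxley neuron (textbook constants),
# integrated with forward Euler. Illustrative only: the thesis simulates
# multicompartmental networks of millions of such neurons in parallel.
import numpy as np

def rates(v):
    """Voltage-dependent opening/closing rates of the n, m, h gates."""
    an = 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
    bn = 0.125 * np.exp(-(v + 65) / 80)
    am = 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
    bm = 4.0 * np.exp(-(v + 65) / 18)
    ah = 0.07 * np.exp(-(v + 65) / 20)
    bh = 1.0 / (1 + np.exp(-(v + 35) / 10))
    return an, bn, am, bm, ah, bh

dt, t_end, i_ext = 0.01, 50.0, 10.0        # ms, ms, uA/cm^2 (hypothetical)
v, n, m, h = -65.0, 0.317, 0.053, 0.596    # resting state
spikes, prev_v = 0, v
for _ in range(int(t_end / dt)):
    an, bn, am, bm, ah, bh = rates(v)
    # Gating variables relax toward their voltage-dependent targets.
    n += dt * (an * (1 - n) - bn * n)
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)
    # Membrane equation: capacitive current balances ionic + injected current.
    i_na = 120.0 * m**3 * h * (v - 50.0)   # sodium
    i_k = 36.0 * n**4 * (v + 77.0)         # potassium
    i_l = 0.3 * (v + 54.4)                 # leak
    v += dt * (i_ext - i_na - i_k - i_l)   # membrane capacitance = 1 uF/cm^2
    if prev_v < 0 <= v:                    # crude upward zero-crossing count
        spikes += 1
    prev_v = v
print("spikes in %.0f ms: %d" % (t_end, spikes))
```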


Styles: APA, Harvard, Vancouver, ISO, etc.
6

D'Arcy, Francis Gerard. "State estimation for large-scale systems". Thesis, Queen's University Belfast, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287436.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Largillier, Thomas. "Probabilistic algorithms for large scale systems". Paris 11, 2010. http://www.theses.fr/2010PA112348.

Full text source
Abstract:
Nowadays, information systems are getting bigger and bigger in order to keep up with users' requirements. In scientific computing, networks are composed of more and more computers to solve ever more complex and larger instances of problems; the Internet is likewise growing to satisfy the curiosity of all users and cover an increasing number of topics. The challenges regarding large-scale systems are numerous: guaranteeing a cluster's users that their computation will finish in a reasonable time without errors, efficiently distributing data between small intelligent entities, or protecting the web against malicious users. During this thesis, I participated in the design of mechanisms fighting webspam and social spam based on the identification of communities in large graphs. I also participated in the development of a testbed for massively parallel applications and in the design of a data dissemination protocol for wireless sensor networks.
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Ali, Asim. "Robustness in large scale distributed systems". Paris 11, 2010. http://www.theses.fr/2010PA112097.

Full text source
Abstract:
During the last decade, computing and communication technologies have seen exponential growth in both hardware and software. The direct result of this growth is the emergence of global-scale distributed systems such as information diffusion systems, cellular networks, remote computing, etc. The integration of sensor devices with networks has helped to develop smart systems that are more interactive, dynamic and adaptable to the running environment. Future applications are envisioned as completely decentralized, self-managing, massive distributed systems running in smart environments on top of Internet or grid infrastructure. Such large-scale systems are difficult to design, develop and maintain due to many constraints, such as heterogeneity of resources, diverse working environments, unreliable communications, etc. Wireless sensor networks (WSN) and computational grids are two important examples of such large-scale systems. The most desirable properties of the protocols for these networks include scalability, self-management and fault tolerance; these are the three main areas this thesis focuses on. In this thesis we contribute to this domain in three ways. First, we propose and evaluate a scalable directory management protocol for general distributed systems in which update latency is independent of the system size. In our second contribution, we design and implement a scalable distributed version of an existing wireless network simulator, WSNet. We run our parallel simulator, XS-WSNet, on Grid5000 and achieve extreme simulation scalability. Our third contribution is the development of a dependability benchmarking mechanism for testing WSN protocols against faults and adversarial environments. Our tool allows the user to simulate naturally faulty environments for WSNs, such as harsh weather conditions, as well as to simulate dynamic attacks on the wireless network.
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Martin, Philippe J. F. "Large scale C3 systems : experiment design and system improvement". Thesis, Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, 1986. http://hdl.handle.net/1721.1/15061.

Full text source
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1986.
Includes bibliographical references (p. 105-106).
Research supported by the Joint Directors of Laboratories through the Office of Naval Research. N00014-85-K-0782
Philippe J. F. Martin.
M.S.
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Westfelt, Vidar, and Arturas Aleksandrauskas. "Automated migration of large-scale build systems". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157770.

Full text source
Abstract:
Upgrading or migrating a build system can be a daunting task. Complete build system migration requires significant effort. To make the process more effective, we automated the first steps of migration, and attempted to analyze the new build results to find anomalies. Our findings show promise for automation as a first step of migration, and we see that automated evaluation could have some potential.
Styles: APA, Harvard, Vancouver, ISO, etc.
11

Henneman, Richard Lewis. "Human problem solving in complex hierarchical large scale systems". Diss., Georgia Institute of Technology, 1985. http://hdl.handle.net/1853/25432.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
12

Resch, Matthias. "Large scale battery systems in distribution grids". Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/665519.

Full text source
Abstract:
The increasing integration of renewable energy installations at the distribution grid level has led to a strong increase in grid reinforcement measures in recent years. Since the costs are passed on to the general public via grid user charges, it is necessary to investigate and evaluate alternatives. As part of the project investigated in this thesis, a large-scale vanadium redox flow battery storage system was integrated into the power grid of a German distribution grid operator for the first time. The battery system is a prototype, and its inverter and battery were developed specifically for the analysed project. The main objective of the project was to quantify the extent to which grid expansion measures can be avoided by the use of batteries, and to what extent the balancing act between economic and grid-supportive operation is possible. Finally, the battery application was compared technically and economically with other flexibility options for a pilot region. A preliminary analysis of possible business cases for large battery systems shows that the application of batteries in the primary control power market is by far the most lucrative application in the current German framework. It is followed by an application for cost reduction where self-consumption of PV power is favoured over grid power. Both business cases are analysed in further detail. The thesis is mainly focused on the grid-supportive primary control application. The grid-supportive behaviour of the analysed battery was ensured by regulating the voltage in the low-voltage grid via a reactive power control, thus increasing the grid capacity. The developed battery system was tested in the field during a one-year field test. The battery prototype and the grid of the pilot region were modelled based on measurement data. Furthermore, a method to derive an optimal operating strategy for electricity storage was developed and implemented. The strategy was developed with the aim of identifying a self-sufficient operation mode that ensured the highest possible profit, and it was validated in a field test. Despite being the most lucrative battery application in Germany today, economic calculations have shown that the average cost of vanadium redox batteries would have to fall by about 60% to achieve profitable operation. Nonetheless, since this is a new technology, both the expectations and the potential for cost reduction are high. The second most promising application, the maximization of self-consumption, is also analysed by means of a simulation for the pilot region, but without an implementation in the field. For this purpose, a battery model for a vanadium redox flow battery based on measurement data is applied. To ensure grid-supportive behaviour, an autonomous reactive power control based on a Q(V) characteristic and peak shaving is implemented. The technical and economic assessment of this operation strategy is compared with a lithium-ion battery providing the same service. It is shown that this business case could already be profitable with a more favourable legal framework in place. However, at present the investment costs of the vanadium redox flow battery have to fall by at least 77% to break even for this operation strategy. Nonetheless, it could be demonstrated that there are almost no negative economic impacts if the battery storage system is operated in a grid-supportive way in addition to its primary purpose.
Finally, a technical and economic assessment of the impact of the two large-scale battery applications on distribution grid planning is conducted. Additional flexibility options, such as cos φ(P) and Q(V) control of PV systems and the use of residential storage, are considered as well. For this purpose, a future PV expansion pathway was developed for the pilot region, as well as an automatic (traditional) grid expansion without flexibility options as a reference scenario. The PV expansion pathway is based on the identification of suitable roof areas for PV systems using aerial photographs. It has been shown that the hosting capacity for renewable energy installations increases in all cases compared to the scenario without flexibility options, sometimes by up to 45%. In addition, it was found that from the perspective of grid operators it is more profitable to apply the presented flexibility options instead of a traditional grid expansion.
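
The autonomous Q(V) control mentioned above maps the locally measured voltage to a reactive power setpoint along a piecewise-linear droop curve. A minimal sketch of such a characteristic follows; the dead band, breakpoints and limits are hypothetical placeholders, not the settings used in the project.

```python
# Illustrative piecewise-linear Q(V) droop characteristic. Voltage is in
# per-unit; positive Q means reactive power injection (voltage support).
# Breakpoints and limits are hypothetical, not the project's settings.

def q_of_v(v_pu, q_max=0.4, v_dead_lo=0.98, v_dead_hi=1.02,
           v_min=0.94, v_max=1.06):
    """Return the reactive power setpoint (per-unit of rated power)."""
    if v_pu <= v_min:
        return q_max                      # full injection at deep undervoltage
    if v_pu < v_dead_lo:
        return q_max * (v_dead_lo - v_pu) / (v_dead_lo - v_min)
    if v_pu <= v_dead_hi:
        return 0.0                        # dead band: no reactive exchange
    if v_pu < v_max:
        return -q_max * (v_pu - v_dead_hi) / (v_max - v_dead_hi)
    return -q_max                         # full absorption at overvoltage

for v in (0.93, 0.96, 1.00, 1.04, 1.07):
    print("V = %.2f pu -> Q = %+.2f pu" % (v, q_of_v(v)))
```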
Styles: APA, Harvard, Vancouver, ISO, etc.
13

Mühl, Gero. "Large-scale content based publish, subscribe systems". [S.l. : s.n.], 2002. http://elib.tu-darmstadt.de/diss/000274.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
14

Parviainen, Roland. "Large scale and mobile group communication systems". Doctoral thesis, Luleå tekniska universitet, Datavetenskap, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-26688.

Full text source
Abstract:
This doctoral thesis examines different attributes of large-scale group communication systems, such as scalability, security and mobility, by studying two prototype systems: mIR (multicast Interactive Radio) and MES (Mobile E-meeting Services). mIR is a system for large-scale real-time music distribution, designed as an interactive radio system for the Internet. MES is a collection of tools for improving the use of e-meeting and video conferencing tools in a mobile environment. The mIR prototype has been used to study scalability and security. Scalability in mIR concerns how to support as many users as possible without degrading the experience. This is achieved using IP multicast together with algorithms that limit the bandwidth usage regardless of the number of users. The work on security has focused on copy prevention through digital watermarking. By adding a unique watermark, i.e. a fingerprint, to each media copy, a pirated copy can be traced back to a specific user, which can act as a deterrent. The thesis shows how we can combine the different goals of fingerprinting and IP multicast while still maintaining the scalability features of multicast. Many issues need to be considered if e-meetings and video conferencing are to become widespread and popular. Scalability and security, discussed in the first part of the thesis, are two examples, and the second part of the thesis addresses a third issue: mobility. In particular, we are interested in enabling access to an e-meeting in a mobile environment, where conditions are often difficult: network connections may be bad, the user may only have access to the Internet through a web browser, or the available devices may be small and limited. In many cases it is currently impossible to participate in an e-meeting when you are not in the office. The prototype system developed in the second part of the thesis aims to enable participation from any location and device that has some sort of Internet connection. We try to achieve this by allowing a mobile user to access an e-meeting session from a web browser or from a Java-enabled mobile phone. Further, the system makes it possible to review missed events in an e-meeting, as it is likely that there are many times when no Internet connection at all is available. The general style of work has been prototype-driven, with the goal of creating usable prototypes, i.e. prototypes that are easy to deploy and can be used and tested daily. Most of the prototypes described in this thesis have indeed been deployed and have seen daily use.


Styles: APA, Harvard, Vancouver, ISO, etc.
15

Al-Shishtawy, Ahmad. "Self-Management for Large-Scale Distributed Systems". Doctoral thesis, KTH, Programvaruteknik och Datorsystem, SCS, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-101661.

Full text source
Abstract:
Autonomic computing aims at making computing systems self-managing by using autonomic managers, in order to reduce the obstacles caused by management complexity. This thesis presents results of research on self-management for large-scale distributed systems. This research was motivated by the increasing complexity of computing systems and their management. In the first part, we present our platform, called Niche, for programming self-managing component-based distributed applications. In our work on Niche, we have faced and addressed the following four challenges in achieving self-management in a dynamic environment characterized by volatile resources and high churn: resource discovery, robust and efficient sensing and actuation, management bottlenecks, and scale. We present results of our research on addressing the above challenges. Niche implements the autonomic computing architecture proposed by IBM in a fully decentralized way. Niche supports a network-transparent view of the system architecture, simplifying the design of distributed self-management, and provides a concise and expressive API for self-management. The implementation of the platform relies on the scalability and robustness of structured overlay networks. We proceed by presenting a methodology for designing the management part of a distributed self-managing application. We define design steps that include the partitioning of management functions and the orchestration of multiple autonomic managers. In the second part, we discuss robustness of management and data consistency, which are necessary in a distributed system. Dealing with the effect of churn on management increases the complexity of the management logic and thus makes its development time-consuming and error-prone. We propose the abstraction of Robust Management Elements, which are able to heal themselves under continuous churn. Our approach is based on replicating a management element using finite state machine replication with a reconfigurable replica set. Our algorithm automates the reconfiguration (migration) of the replica set in order to tolerate continuous churn. For data consistency, we propose a majority-based distributed key-value store, built on a peer-to-peer network, that supports multiple consistency levels. The store enables a tradeoff between high availability and data consistency. Using a majority avoids the potential drawbacks of master-based consistency control, namely a single point of failure and a potential performance bottleneck. In the third part, we investigate self-management for Cloud-based storage systems, with a focus on elasticity control using elements of control theory and machine learning. We have conducted research on a number of different designs of an elasticity controller, including a state-space feedback controller and a controller that combines feedback and feedforward control. We describe our experience in designing an elasticity controller for a Cloud-based key-value store using a state-space model that makes it possible to trade off performance for cost, and we describe the steps in designing such a controller. We conclude by presenting the design and evaluation of ElastMan, an elasticity controller for Cloud-based elastic key-value stores that combines feedforward and feedback control.
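
The combination of feedforward and feedback control described above can be summarized schematically: a feedforward term sizes the cluster directly from the measured workload, and a feedback term corrects the residual latency error. The sketch below is only an illustration of that combination; the capacity model, gain and all numeric values are hypothetical, not ElastMan's actual design.

```python
# Schematic sketch of an elasticity controller combining a feedforward
# term (size the cluster from the measured workload) with a feedback term
# (correct the residual latency error). Models and gains are hypothetical.
import math

class ElasticityController:
    def __init__(self, target_latency_ms, reqs_per_node_per_s, kp=0.02):
        self.target = target_latency_ms
        self.capacity = reqs_per_node_per_s   # feedforward capacity model
        self.kp = kp                          # feedback proportional gain

    def desired_nodes(self, workload_rps, measured_latency_ms, current_nodes):
        ff = workload_rps / self.capacity                   # feedforward
        error = measured_latency_ms - self.target
        fb = self.kp * error * current_nodes / self.target  # feedback nudge
        return max(1, math.ceil(ff + fb))

ctl = ElasticityController(target_latency_ms=50, reqs_per_node_per_s=1000)
# The workload alone calls for ~9 nodes; the latency overshoot adds one more.
print(ctl.desired_nodes(workload_rps=9000, measured_latency_ms=80,
                        current_nodes=8))
```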


Styles: APA, Harvard, Vancouver, ISO, etc.
16

Amadeo, Lily. "Large Scale Matrix Completion and Recommender Systems". Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-theses/1021.

Full text source
Abstract:
"The goal of this thesis is to extend the theory and practice of matrix completion algorithms, and how they can be utilized, improved, and scaled up to handle large data sets. Matrix completion involves predicting missing entries in real-world data matrices using the modeling assumption that the fully observed matrix is low-rank. Low-rank matrices appear across a broad selection of domains, and such a modeling assumption is similar in spirit to Principal Component Analysis. Our focus is on large scale problems, where the matrices have millions of rows and columns. In this thesis we provide new analysis for the convergence rates of matrix completion techniques using convex nuclear norm relaxation. In addition, we validate these results on both synthetic data and data from two real-world domains (recommender systems and Internet tomography). The results we obtain show that with an empirical, data-inspired understanding of various parameters in the algorithm, this matrix completion problem can be solved more efficiently than some previous theory suggests, and therefore can be extended to much larger problems with greater ease. "
Styles: APA, Harvard, Vancouver, ISO, etc.
17

Goldberg, David Alan Ph D. Massachusetts Institute of Technology. "Large scale queueing systems : asymptotics and insights". Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/67765.

Full text source
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 195-203).
Parallel server queues are a family of stochastic models useful in a variety of applications, including service systems and telecommunication networks. A particular application that has received considerable attention in recent years is the analysis of call centers. A feature common to these models is the notion of the 'trade-off' between quality and efficiency. It is known that if the underlying system parameters scale together according to a certain 'square-root scaling law', then this trade-off can be precisely quantified, in which case the queue is said to be in the Halfin-Whitt regime. A common approach to understanding this trade-off involves restricting one's models to have exponentially distributed call lengths, and restricting one's analysis to the steady-state behavior of the system. However, these are considered shortcomings of much work in the area. Although several recent works have moved beyond these assumptions, many open questions remain, especially w.r.t. the interplay between the transient and steady-state properties of the relevant models. These questions are the primary focus of this thesis. In the first part of this thesis, we prove several results about the rate of convergence to steady-state for the M/M/n queue, i.e. the n-server queue with exponentially distributed inter-arrival and processing times, in the Halfin-Whitt regime. We identify the limiting rate of convergence to steady-state, discover an asymptotic phase transition that occurs w.r.t. this rate, and prove explicit bounds on the distance to stationarity. The results of the first part of this thesis represent an important step towards understanding how to incorporate transient effects into the analysis of parallel server queues. In the second part of this thesis, we prove several results regarding the steady-state G/G/n queue, i.e. the n-server queue with generally distributed inter-arrival and processing times, in the Halfin-Whitt regime. We first prove that under minor technical conditions, the steady-state number of jobs waiting in queue scales like the square root of the number of servers. We then establish bounds for the large deviations behavior of this model, partially resolving a conjecture made by Gamarnik and Momcilovic in [43]. We also derive bounds for a related process studied by Reed in [91]. We then derive the first qualitative insights into the steady-state probability that an arriving job must wait for service in the Halfin-Whitt regime, for generally distributed processing times. We partially characterize the behavior of this probability when a certain excess parameter B approaches either 0 or ∞. We conclude by studying the large deviations of the number of idle servers, proving that this random variable has a Gaussian-like tail. We prove our main results by combining tools from the theory of stochastic comparison [99] with the theory of heavy-traffic approximations [113]. We compare the system of interest to a 'modified' queue, in which all servers are kept busy at all times by adding artificial arrivals whenever a server would otherwise go idle, and certain servers can permanently break down. We then analyze the modified system using heavy-traffic approximations. The proven bounds hold for all n, have representations as the suprema of certain natural processes, and may prove useful in a variety of settings. The results of the second part of this thesis enhance our understanding of how parallel server queues behave in heavy traffic, when processing times are generally distributed.
by David Alan Goldberg.
Ph.D.
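
The square-root scaling law referred to in this abstract is the classical square-root staffing rule: with offered load R, staffing n = R + β√R servers keeps the probability that an arriving job waits roughly constant as the system grows, converging to the Halfin-Whitt delay formula. A small numerical illustration follows; the loads and β below are arbitrary example values, not figures from the thesis.

```python
# Square-root staffing: with offered load R, staff n = R + beta * sqrt(R).
# In the Halfin-Whitt limit the probability an arrival waits converges to
# 1 / (1 + beta * Phi(beta) / phi(beta)), with Phi/phi the standard normal
# CDF/density (Halfin & Whitt). Loads and beta are arbitrary examples.
import math

def halfin_whitt_delay_probability(beta):
    phi = math.exp(-beta * beta / 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(beta / math.sqrt(2)))
    return 1.0 / (1.0 + beta * Phi / phi)

beta = 1.0
for offered_load in (25, 100, 400, 1600):
    n = math.ceil(offered_load + beta * math.sqrt(offered_load))
    print("R = %4d -> n = %4d servers, limiting P(wait) ~ %.3f"
          % (offered_load, n, halfin_whitt_delay_probability(beta)))
```

The point of the regime: the delay probability stays near the same value (about 0.22 for β = 1) at every scale, while the staffing overhead β√R becomes a vanishing fraction of R.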
Styles: APA, Harvard, Vancouver, ISO, etc.
18

İftar, Altuğ. "Robust controller design for large scale systems /". The Ohio State University, 1988. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487592050230031.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
19

Wei, Chih-Ping 1965. "Schema management for large-scale multidatabase systems". Diss., The University of Arizona, 1996. http://hdl.handle.net/10150/290610.

Full text source
Abstract:
Advances in networking and database technologies have made the concept of global information sharing possible. A rapidly growing number of applications require access to and manipulation of the data residing in multiple pre-existing database systems, which are usually autonomous and heterogeneous. A promising approach to the problems of interoperating multiple heterogeneous database systems is the construction of multidatabase systems (MDBSs). Among all of the research issues concerning multidatabase systems, schema management, which involves the management of various schemas at different levels in a dynamic environment, has been largely overlooked in previous research. Two of the most important research issues in schema management have been identified: schema translation and schema integration. The need for a declarative and extensible approach to schema translation and support for schema integration is accentuated in a large-scale environment. This dissertation presents a construct-equivalence-based methodology, grounded in the implications of the semantic characteristics of data models, for schema translation and schema integration. The research was undertaken for the purposes of (1) overcoming the methodological inadequacies of existing schema translation approaches and the conventional schema integration process for large-scale MDBSs, (2) providing an integrated methodology for schema translation and schema normalization, whose similarity of problem formulation had not previously been recognized, and (3) inductively learning model schemas that provide a basis for declaratively specifying construct equivalences for schema translation and schema normalization. The methodology is based on a metamodel (the Synthesized Object-Oriented Entity-Relationship (SOOER) model), an inductive metamodeling approach (the Abstraction Induction Technique), a declarative construct equivalence representation (the Construct Equivalence Assertion Language, CEAL), and its associated transformation and reasoning methods. The results of evaluation studies showed that the Abstraction Induction Technique inductively learned satisfactory model schemas. CEAL's expressiveness and adequacy in meeting its design principles, its well-defined construct equivalence transformation and reasoning methods, as well as the advantages realized by construct-equivalence-based schema translation and schema normalization, suggest that the construct-equivalence-based methodology is a promising approach for large-scale MDBSs.
Styles: APA, Harvard, Vancouver, ISO, etc.
20

Caneill, Matthieu. "Contributions to large-scale data processing systems". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM006/document.

Full text source
Abstract:
This thesis covers the topic of large-scale data processing systems, and more precisely three complementary approaches: the design of a system to predict computer failures through the analysis of monitoring data; the routing of data in a real-time system, looking at correlations between message fields to favor locality; and finally a novel framework to design data transformations using directed graphs of blocks. Through the lens of the Smart Support Center project, we design a scalable architecture to store time series reported by monitoring engines, which constantly check the health of computer systems. We use this data to perform predictions and detect potential problems before they arise. We then dive into routing algorithms for stream processing systems, and develop a layer to route messages more efficiently by avoiding hops between machines. For that purpose, we identify in real time the correlations which appear in the fields of these messages, such as hashtags and their geolocation, for example in the case of tweets. We use these correlations to create routing tables which favor the co-location of the actors handling these messages. Finally, we present λ-blocks, a novel programming framework to compute data processing jobs without writing code, but rather by creating graphs of blocks of code. The framework is fast, and comes with batteries included: block libraries, plugins, and APIs to extend it. It is also able to manipulate computation graphs, for optimization, analysis, verification, or any other purpose.
Styles: APA, Harvard, Vancouver, ISO, etc.
21

Stender, Jan. "Snapshots in large-scale distributed file systems". Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://dx.doi.org/10.18452/16660.

Full text source
Abstract:
Snapshots are present in many modern file systems, where they make it possible to create consistent on-line backups, to roll back corruptions or inadvertent changes of files, and to keep a record of changes to files and directories. While most previous work on file system snapshots refers to local file systems, modern trends like cloud and cluster computing have shifted the focus towards distributed storage infrastructures. Such infrastructures often comprise large numbers of storage servers, which presents particular challenges in terms of scalability, availability and failure tolerance. This thesis describes a snapshot algorithm for large-scale distributed file systems and its integration in XtreemFS, a scalable object-based file system for grid and cloud computing environments. The two building blocks of the algorithm are a version management scheme, which efficiently records versions of file content and metadata, and a scalable, failure-tolerant mechanism that aggregates specific versions in a snapshot. To overcome the lack of a global time in a distributed system, the algorithm implements a relaxed consistency model for snapshots, which is based on timestamps assigned by loosely synchronized server clocks. The main contributions of the thesis are: 1) a formal model of snapshots and snapshot consistency in distributed file systems; 2) the description of efficient schemes for the management of metadata and file content versions in object-based file systems; 3) the formal presentation of a scalable, fault-tolerant snapshot algorithm for large-scale object-based file systems; 4) a detailed description of the implementation of the algorithm as part of XtreemFS. An extensive evaluation shows that the proposed algorithm has no severe impact on user I/O, and that it scales to large numbers of snapshots and versions.
Styles: APA, Harvard, Vancouver, ISO, etc.
22

Famoso, Carlo. "Vibrational Control of Large Scale Electromechanical Systems". Doctoral thesis, Università di Catania, 2015. http://hdl.handle.net/10761/3860.

Full text source
Abstract:
In this thesis, our attention focuses on the fundamental role of broad-spectrum mechanical vibrations [1] in supporting the operation of complex electromechanical systems. Uncertainty has already been proved to allow self-organization in arrays of non-linear oscillators (pendulums in particular) [2, 3]. The idea of this thesis is to show that passive and active mechanical vibrations can also play a key role in self-organization in a class of complex electromechanical systems. In fact, the large-scale electromechanical systems considered in this work are referred to as imperfect uncertain systems, for which classical feedback control design cannot be suitably implemented. By imperfect and uncertain systems, we mean systems that include unmodeled dynamics, arising intermittently, and uncertain parameters. In order to control arrays of such systems, made by coupling a large number of linear low-order units, it is not convenient to adopt the classical control approach [4]. The strategy of controlling each unit with a local feedback loop is indeed not practical, as it leads to numerous and different control actions. On the contrary, the idea of our research is to use only a few control actions to control the whole system, by exploiting its intrinsic properties of self-organization stimulated by the control actions. This work is organized as follows. Chapter I is about the new class of systems that are experimentally realized; some qualitative exploration of them is also given with reference to the control task. In Chapter II the problem is introduced by focusing on the key points of the research and the peculiar aspects of the structures, showing some introductory experimental results. In Chapter III the mathematical framework of the general class of imperfect uncertain systems and the control feedback scheme are discussed. In Chapter IV the models of the specific large-scale electromechanical systems investigated are illustrated. In Chapter V the experimental results related to specific structures are discussed and compared with the numerical results obtained by simulations of the mathematical model. Chapter VI includes the concluding remarks and outlines the future perspectives towards which this research could lead.
Styles: APA, Harvard, Vancouver, ISO, etc.
23

Aboucaya, William. "Collaborative systems for large scale citizen participation". Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS461.pdf.

Full text source
Abstract:
Online participatory platforms have become a common means to involve citizens in public decision-making, allowing for participation at a larger scale than their offline counterparts, both in the number of participants and in their geographical distribution. However, the term "participatory platform" covers a wide range of extremely different systems, implying differences in the problems encountered by platform administrators and contributors. More precisely, such platforms face specific issues when they aim at allowing citizens to collaborate to produce common contributions, or when the number of contributors involved becomes particularly high. This Ph.D. research aims at identifying issues in contemporary online citizen participation platforms and proposing technical means to create participatory platforms that are more collaborative and better suited to large-scale online participation. My thesis is mainly based on previous work in the Computer-Supported Cooperative Work (CSCW) and Natural Language Processing (NLP) fields of computer science research. The contributions of this thesis are: the identification of flaws in a specific citizen participation platform and the recommendation of platform-design-oriented alternatives to solve them; the representation of a participatory platform as a knowledge graph and its enrichment using a preexisting external knowledge base; the identification, based on a series of interviews, of the different objectives motivating the creation of participatory platforms and of the different types of interaction features they implement; and the conception and implementation of a Natural Language Inference-based method to reduce the issues faced by online citizen participation when the number of contributors becomes particularly high.
Styles: APA, Harvard, Vancouver, ISO, etc.
24

Pattnaik, Aliva. "Fault propagation analysis of large-scale, networked embedded systems". Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42918.

Full text source
Abstract:
In safety-critical, networked embedded systems, it is important that the ways in which faults in one component of the system can propagate throughout the system to other components are analyzed correctly. Many real-world systems, such as modern aircraft and automobiles, use large-scale networked embedded systems with complex behavior. In this work, we have developed techniques and a software tool, FauPA, that uses those techniques to automate fault-propagation analysis of large-scale, networked embedded systems such as those used in modern aircraft. This work makes three main contributions (see the sketch after this list):
1. Fault propagation analyses. We developed algorithms for two types of analyses: forward analysis and backward analysis. For backward analysis, we developed two techniques: a naive algorithm and an algorithm that uses Datalog.
2. A system description language. We developed an XML-based language that we call Communication System Markup Language (CSML). A system can be specified concisely and at a high level in CSML.
3. A GUI-based display of the system and analysis results. We developed a GUI to visualize a system specified in CSML. The GUI also lets the user visualize the results of fault-propagation analyses.
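
At their core, the forward and backward analyses named in contribution 1 are reachability questions over the graph of component connections: forward analysis asks which components a fault at a source can reach, backward analysis asks which faults could explain a failure observed at a component. The toy sketch below illustrates that framing; the component graph is hypothetical, and FauPA's Datalog-based variant is not reproduced here.

```python
# Toy sketch of fault-propagation analysis as graph reachability.
# Forward: components a fault at `src` can reach. Backward: candidate
# origins of a failure observed at `dst`. The graph is hypothetical.

PROPAGATES_TO = {
    "sensor": ["bus"],
    "bus": ["flight_computer", "logger"],
    "flight_computer": ["actuator"],
    "logger": [],
    "actuator": [],
}

def forward(src):
    seen, stack = set(), [src]
    while stack:
        node = stack.pop()
        for nxt in PROPAGATES_TO.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def backward(dst):
    # A fault at f can explain a failure at dst iff dst is reachable from f.
    return {f for f in PROPAGATES_TO if dst in forward(f)}

print(forward("sensor"))     # everything downstream of the sensor
print(backward("actuator"))  # candidate root causes for an actuator fault
```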
Style APA, Harvard, Vancouver, ISO itp.
25

Tong, Choon Yin. "Architecting the safety assessment of large-scale systems integration". Monterey, California : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Dec/09Dec%5FTong.pdf.

Pełny tekst źródła
Streszczenie:
Thesis (M.S. in Systems Engineering and Analysis)--Naval Postgraduate School, December 2009.
Thesis Advisor(s): Paulo, Eugene. Second Reader: Rhoades, Mark. "December 2009." Description based on title screen as viewed on January 27, 2010. Author(s) subject terms: Systems integration, System safety, System-of-Systems safety. Includes bibliographical references (p. 51-52). Also available in print.
Style APA, Harvard, Vancouver, ISO itp.
26

Janovsky, Pavel. "Large-scale coalition formation: application in power distribution systems". Diss., Kansas State University, 2017. http://hdl.handle.net/2097/35328.

Pełny tekst źródła
Streszczenie:
Doctor of Philosophy
Department of Computing and Information Sciences
Scott A. DeLoach
Coalition formation is a key cooperative behavior of a system of multiple autonomous agents. When the capabilities of individual agents are not sufficient for the improvement of the well-being of the individual agents or of the entire system, the agents can benefit by joining forces together in coalitions. Coalition formation is a technique for finding coalitions that are best fitted to achieve individual or group goals. This is a computationally expensive task because often all combinations of agents have to be considered in order to find the best assignments of agents to coalitions. Previous research has therefore focused mainly on small-scale or otherwise restricted systems. In this thesis we study coalition formation in large-scale multi-agent systems. We propose an approach for coalition formation based on multi-agent simulation. This approach allows us to find coalitions in systems with thousands of agents. It also lets us modify behaviors of individual agents in order to better match a specific coalition formation application. Finally, our approach can consider both the social welfare of the multi-agent system and the well-being of individual self-interested agents. Power distribution systems are used to deliver electric energy from the transmission system to households. Because of the increased availability of distributed generation using renewable resources, the push towards higher use of renewable energy, and the increasing use of electric vehicles, power distribution systems are undergoing significant changes towards active consumers who participate in both the supply and demand sides of the electricity market and the underlying power grid. In this thesis we address this ongoing change by studying how the use of renewable energy can be increased with the help of coalition formation. We propose an approach that lets renewable generators, which face uncertainty in generation prediction, form coalitions with energy stores, which on the other hand are always able to deliver the committed power. These coalitions help decrease the uncertainty of the power generation of renewable generators, consequently allowing the generators to increase their use of renewable energy while at the same time increasing their profits. Energy stores also benefit from participating in coalitions with renewable generators, because they receive payments from the generators for the availability of their power at specific time slots. We first study this problem assuming no physical constraints of the underlying power grid. Then we analyze how coalition formation of renewable generators and energy stores in a power grid with physical constraints impacts the state of the grid, and we propose agent behavior that leads to an increase in the use of renewable energy while maintaining the stability of the grid.
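As a toy illustration of the generator-store coalitions described above, the sketch below greedily pairs each renewable generator with the energy store that best covers its forecast uncertainty. The shortfall measure, the greedy one-to-one matching, and all numbers are assumptions for illustration; the thesis uses multi-agent simulation rather than this closed-form pairing.

```python
# A toy sketch of generator-store coalition formation: each uncertain
# renewable generator joins the free store that best covers its risk.
# The risk model, matching order, and numbers are illustrative.
import random

random.seed(0)
generators = [{"id": g, "forecast": random.uniform(5, 15),
               "std": random.uniform(1, 4)} for g in range(4)]
stores = [{"id": s, "capacity": random.uniform(2, 6), "taken": False}
          for s in range(4)]

def shortfall_risk(gen, store) -> float:
    """Generation uncertainty left uncovered by the store (lower is better)."""
    return max(gen["std"] - store["capacity"], 0.0)

coalitions = []
for gen in sorted(generators, key=lambda g: -g["std"]):   # riskiest first
    free = [s for s in stores if not s["taken"]]
    best = min(free, key=lambda s: shortfall_risk(gen, s))
    best["taken"] = True
    coalitions.append((gen["id"], best["id"], shortfall_risk(gen, best)))

for gid, sid, risk in coalitions:
    print(f"generator {gid} + store {sid}: residual risk {risk:.2f}")
```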
Style APA, Harvard, Vancouver, ISO itp.
27

Bonis, Ioannis. "Optimisation and control methodologies for large-scale and multi-scale systems". Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/optimisation-and-control-methodologies-for-largescale-and-multiscale-systems(6c4a4f13-ebae-4d9d-95b7-cca754968d47).html.

Pełny tekst źródła
Streszczenie:
Distributed parameter systems (DPS) comprise an important class of engineering systems, ranging from "traditional" ones such as tubular reactors to cutting-edge processes such as nano-scale coatings. DPS have been studied extensively and significant advances have been noted, enabling their accurate simulation. To this end a variety of tools have been developed. However, extending these advances to systems design is not a trivial task. Rigorous design and operation policies entail systematic procedures for optimisation and control. These tasks are "upper-level" and utilise existing models and simulators. The higher the accuracy of the underlying models, the more the design procedure benefits. However, employing such models in the context of conventional algorithms may lead to inefficient formulations. The optimisation and control of DPS is a challenging task. These systems are typically discretised over a computational mesh, leading to large-scale problems. Handling the resulting large-scale systems may prove to be an intimidating task and requires special methodologies. Furthermore, it is often the case that the underlying physical phenomena span various temporal and spatial scales, thus complicating the analysis. Stiffness may also potentially be exhibited in the (nonlinear) models of such phenomena. The objective of this work is to design reliable and practical procedures for the optimisation and control of DPS. It has been observed in many systems of engineering interest that, although they are described by infinite-dimensional Partial Differential Equations (PDEs) resulting in large discretisation problems, their behaviour has a finite number of significant components as a result of their dissipative nature. This property has been exploited in various systematic model reduction techniques. Of key importance in this work is the identification of a low-dimensional dominant subspace for the system. This subspace is heuristically found to correspond to part of the eigenspectrum of the system and can therefore be identified efficiently using iterative matrix-free techniques. In this light, only low-dimensional Jacobian and Hessian matrices are involved in the formulation of the proposed algorithms; these are projections of the original matrices onto appropriate low-dimensional subspaces, computed efficiently with directional perturbations. The optimisation algorithm presented employs a 2-step projection scheme, firstly onto the dominant subspace of the system (corresponding to the right-most eigenvalues of the linearised system) and secondly onto the subspace of decision variables. This algorithm is inspired by reduced Hessian Sequential Quadratic Programming methods and therefore locates a local optimum of the nonlinear programming problem by solving a sequence of reduced quadratic programming (QP) subproblems. This optimisation algorithm is appropriate for systems with a relatively small number of decision variables. Inequality constraints can be accommodated following a penalty-based strategy which aggregates all constraints using an appropriate function, or by employing a partial reduction technique in which only equality constraints are considered for the reduction and the inequalities are linearised and passed on to the QP subproblem.
The control algorithm presented is based on the online adaptive construction of low-order linear models used in the context of a linear Model Predictive Control (MPC) algorithm, in which the discrete-time state-space model is recomputed at every sampling time in a receding-horizon fashion. Successive linearisation around the current state on the closed-loop trajectory is combined with model reduction, resulting in an efficient procedure for the computation of reduced linearised models, projected onto the dominant subspace of the system. In this case, this subspace corresponds to the eigenvalues of largest magnitude of the discretised dynamical system. Control actions are computed from low-order QP problems solved efficiently online. The optimisation and control algorithms presented may employ input/output simulators (such as commercial packages), extending their use to upper-level tasks. They are also suitable for systems governed by microscopic rules, the equations of which do not exist in closed form. Illustrative case studies are presented, based on tubular reactor models, which exhibit rich parametric behaviour.
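The dominant-subspace identification step can be sketched with standard tools: Jacobian-vector products obtained from directional perturbations of a black-box residual, fed to an iterative eigensolver. In the sketch below, the residual function is a stand-in for a real simulator, and the use of SciPy's ARPACK interface is an assumption for illustration.

```python
# A minimal sketch of matrix-free dominant-subspace identification:
# Jacobian-vector products via directional perturbations of a black-box
# residual f, then ARPACK for the right-most eigenvalues. The residual
# below is a stand-in for a real simulator.
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

n = 200
A = -np.diag(np.linspace(0.1, 10.0, n))    # hidden dissipative dynamics
A[0, 1] = 0.5

def f(x):                                  # black-box residual, f(x) = A x
    return A @ x

x0, eps = np.zeros(n), 1e-6

def jac_vec(v):
    """Approximate J(x0) @ v with one directional perturbation."""
    return (f(x0 + eps * v) - f(x0)) / eps

J = LinearOperator((n, n), matvec=jac_vec)
vals, vecs = eigs(J, k=5, which="LR")      # right-most eigenvalues
print(np.sort(vals.real)[::-1])            # dominant (slowest) modes
```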
Style APA, Harvard, Vancouver, ISO itp.
28

Grass, Thomas. "Simulation methodologies for future large-scale parallel systems". Doctoral thesis, Universitat Politècnica de Catalunya, 2017. http://hdl.handle.net/10803/461198.

Pełny tekst źródła
Streszczenie:
Since the early 2000s, computer systems have seen a transition from single-core to multi-core systems. While single-core systems included only one processor core on a chip, current multi-core processors include up to tens of cores on a single chip, a trend which is likely to continue in the future. Today, multi-core processors are ubiquitous. They are used in all classes of computing systems, ranging from low-cost mobile phones to high-end High-Performance Computing (HPC) systems. Designing future multi-core systems is a major challenge [12]. The primary design tool used by computer architects in academia and industry is architectural simulation. Simulating a computer system executing a program is typically several orders of magnitude slower than running the program on a real system. Therefore, new techniques are needed to speed up simulation and allow the exploration of large design spaces in a reasonable amount of time. One way of increasing simulation speed is sampling. Sampling reduces simulation time by simulating only a representative subset of a program in detail. In this thesis, we present a workload analysis of a set of task-based programs. We then use the insights from this study to propose TaskPoint, a sampled simulation methodology for task-based programs. Task-based programming models can reduce the synchronization costs of parallel programs on multi-core systems and are becoming increasingly important. Finally, we present MUSA, a simulation methodology for simulating applications running on thousands of cores on a hybrid, distributed shared-memory system. The simulation time required for simulation with MUSA is comparable to the time needed for native execution of the simulated program on a production HPC system. The techniques developed in the scope of this thesis permit researchers and engineers working in computer architecture to simulate large workloads, which were infeasible to simulate in the past. Our work enables architectural research in the fields of future large-scale shared-memory and hybrid, distributed shared-memory systems.
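The idea behind sampled simulation can be conveyed with a toy example: simulate a few instances of each task type in detail and extrapolate the remainder, as in the sketch below. The task types, timings, and sample budget are made up for illustration and do not reproduce TaskPoint's actual mechanism.

```python
# A toy sketch of sampled simulation: simulate a few instances of each
# task type in detail, then extrapolate the cost of the rest.
import random

random.seed(1)
trace = [("fft", random.gauss(10.0, 0.5)) for _ in range(1000)]
trace += [("reduce", random.gauss(2.0, 0.1)) for _ in range(5000)]
random.shuffle(trace)

SAMPLES_PER_TYPE = 20
detailed: dict[str, list[float]] = {}
counts: dict[str, int] = {}
for ttype, cost in trace:
    counts[ttype] = counts.get(ttype, 0) + 1
    samples = detailed.setdefault(ttype, [])
    if len(samples) < SAMPLES_PER_TYPE:      # "detailed simulation" phase
        samples.append(cost)

estimate = sum(counts[t] * (sum(c) / len(c)) for t, c in detailed.items())
actual = sum(cost for _, cost in trace)
print(f"estimated {estimate:.0f} vs actual {actual:.0f} "
      f"({abs(estimate - actual) / actual:.2%} error)")
```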
Style APA, Harvard, Vancouver, ISO itp.
29

Leung, Andrew W. "Organizing, indexing and searching large-scale file systems /". Diss., Digital Dissertations Database. Restricted to UC campuses, 2009. http://uclibs.org/PID/11984.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
30

Reimann, Carsten. "Model-Based Monitoring in Large-Scale Distributed Systems". Master's thesis, Universitätsbibliothek Chemnitz, 2002. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200200938.

Pełny tekst źródła
Streszczenie:
Monitoring remains an important problem in computer science. This thesis describes which monitoring information is needed to analyze distributed service environments, how to obtain that information, and how to store it in a monitoring database. The resulting model is used to describe a distributed media-content environment, and a simulation system running on the CLIC helps to generate measurements like those found in real systems.
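A minimal sketch of such a monitoring database follows, assuming a simple node/service/metric schema; the thesis defines its own model of which monitor information to store.

```python
# A minimal sketch of a monitoring database for a distributed service
# environment. The schema is an illustrative assumption.
import sqlite3, time

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE measurement (
    node TEXT, service TEXT, metric TEXT,
    value REAL, ts REAL)""")

def record(node, service, metric, value):
    db.execute("INSERT INTO measurement VALUES (?, ?, ?, ?, ?)",
               (node, service, metric, value, time.time()))

record("node01", "media-streamer", "cpu_load", 0.72)
record("node02", "media-streamer", "cpu_load", 0.35)

# average load per service, as an analysis query might ask
for row in db.execute("""SELECT service, AVG(value) FROM measurement
                         WHERE metric = 'cpu_load' GROUP BY service"""):
    print(row)
```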
Style APA, Harvard, Vancouver, ISO itp.
31

Adam, Constantin. "A Middleware for Self-Managing Large-Scale Systems". Doctoral thesis, KTH, Skolan för elektro- och systemteknik (EES), 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4178.

Pełny tekst źródła
Streszczenie:
This thesis investigates designs that enable individual components of a distributed system to work together and coordinate their actions towards a common goal. While the basic motivation for our research is to develop engineering principles for large-scale autonomous systems, we address the problem in the context of resource management in server clusters that provide web services. To this end, we have developed, implemented and evaluated a decentralized design for resource management that follows four principles. First, in order to facilitate scalability, each node has only partial knowledge of the system. Second, each node can adapt and change its role at runtime. Third, each node runs a number of local control mechanisms independently and asynchronously from its peers. Fourth, each node dynamically adapts its local configuration in order to optimize a global utility function. The design includes three fundamental building blocks: overlay construction, request routing and application placement. Overlay construction organizes the cluster nodes into a single dynamic overlay. Request routing directs service requests towards nodes with available resources. Application placement partitions the cluster resources between applications, and dynamically adjusts the allocation in response to changes in external load, node failures, etc. We have evaluated the design using complexity analysis, simulation and prototype implementation. Using complexity analysis and simulation, we have shown that the system is scalable, operates efficiently in steady state, quickly adapts to external events and allows for effective service differentiation by a system administrator. A prototype has been built using accepted technologies (Java, Tomcat) and evaluated using standard benchmarks (TPC-W and RUBiS). The evaluation results show that the behavior of the prototype matches closely that of the simulated design for key metrics related to adaptability and robustness, therefore validating our design and proving its feasibility.
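The utility-driven placement idea can be illustrated with a toy greedy loop that adds the application replica yielding the largest gain in a global utility. The utility function, demands, and capacities below are assumptions for illustration; the thesis's design is decentralized, whereas this sketch computes the utility centrally for brevity.

```python
# A toy sketch of utility-driven application placement: repeatedly add the
# application replica that most improves a global utility. Demands,
# capacities, and the utility definition are illustrative assumptions.
demand = {"shop": 120.0, "search": 80.0, "blog": 20.0}     # req/s per app
placement = {"shop": ["n1"], "search": ["n1"], "blog": ["n2"]}
capacity = {"n1": 100.0, "n2": 100.0, "n3": 100.0}

def utility() -> float:
    """Total served demand: demand splits evenly over an app's replicas,
    and each node splits its capacity evenly over the apps it hosts."""
    served = 0.0
    for app, nodes in placement.items():
        for node in nodes:
            hosted = sum(node in ns for ns in placement.values())
            served += min(demand[app] / len(nodes), capacity[node] / hosted)
    return served

for _ in range(5):                      # greedy improvement steps
    base, best_gain, best_move = utility(), 0.0, None
    for app in placement:
        for node in capacity:
            if node not in placement[app]:
                placement[app].append(node)
                gain = utility() - base
                placement[app].remove(node)
                if gain > best_gain:
                    best_gain, best_move = gain, (app, node)
    if best_move is None:
        break
    placement[best_move[0]].append(best_move[1])

print(placement, f"utility={utility():.0f}")
```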
Style APA, Harvard, Vancouver, ISO itp.
32

Peng, Ivy Bo. "Data Movement on Emerging Large-Scale Parallel Systems". Doctoral thesis, KTH, Beräkningsvetenskap och beräkningsteknik (CST), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-218338.

Pełny tekst źródła
Streszczenie:
Large-scale HPC systems are an important driver for solving computational problems in scientific communities. Next-generation HPC systems will not only grow in scale but also in heterogeneity. This increased system complexity entails more challenges to data movement in HPC applications. Data movement on emerging HPC systems requires asynchronous fine-grained communication and efficient data placement in the main memory. This thesis proposes an innovative programming model and algorithm to prepare HPC applications for the next computing era: (1) a data streaming model that supports emerging data-intensive applications on supercomputers, (2) a decoupling model that improves parallelism and mitigates the impact of imbalance in applications, (3) a new framework and methodology for predicting the impact of large-scale heterogeneous memory systems on HPC applications, and (4) a data placement algorithm that uses a set of rules and a decision tree to determine the data-to-memory mapping in heterogeneous main memory. The proposed approaches in this thesis are evaluated on multiple supercomputers with different processors and interconnect networks. The evaluation uses a diverse set of applications that represent conventional scientific applications and emerging data-analytic workloads on HPC systems. The experimental results on the petascale testbed show that the approaches obtain increasing performance improvements as the system scale increases, a trend that supports them as a valuable contribution towards future HPC systems.
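Contribution (4) can be pictured as a rules-then-tree decision per data object, as in the sketch below. The object attributes, rules, and thresholds are illustrative assumptions; the thesis derives its own rule set and decision tree.

```python
# A minimal sketch of rule-plus-decision-tree data placement for a
# heterogeneous main memory (e.g. HBM + DRAM). Attributes, rules, and
# thresholds are illustrative assumptions.
def place(obj: dict) -> str:
    # Hard rules first: objects that must not occupy scarce fast memory.
    if obj["size_mb"] > 1024:          # too large for the HBM budget
        return "DRAM"
    if obj["accesses_per_kb"] < 1:     # cold data
        return "DRAM"
    # Then a small decision tree on access behaviour.
    if obj["bandwidth_bound"]:
        return "HBM"                   # streaming, bandwidth-sensitive
    if obj["random_access"] and obj["accesses_per_kb"] > 100:
        return "HBM"                   # hot data with heavy reuse
    return "DRAM"

objects = [
    {"name": "grid", "size_mb": 512, "accesses_per_kb": 300,
     "bandwidth_bound": True, "random_access": False},
    {"name": "log", "size_mb": 2048, "accesses_per_kb": 0.2,
     "bandwidth_bound": False, "random_access": False},
]
for obj in objects:
    print(obj["name"], "->", place(obj))
```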
Style APA, Harvard, Vancouver, ISO itp.
33

Bunce, Emma J. "Large-scale current systems in the Jovian magnetosphere". Thesis, University of Leicester, 2001. http://hdl.handle.net/2381/30647.

Pełny tekst źródła
Streszczenie:
The studies contained within this thesis focus on the large-scale azimuthal and radial current systems of Jupiter's middle magnetosphere, i.e. currents with radial ranges of 20-50 RJ. In the first study, using magnetometer data from Pioneer-10 and -11, Voyager-1 and -2, and Ulysses, it is discovered that the azimuthal current in the middle magnetosphere is not axi-symmetric, as had been assumed for the previous twenty-five years, but is stronger on the nightside than on the dayside at a given radial distance. A simple empirical model is formulated which reasonably describes the data in the domain of interest, both in radial distance and local time, and allows direct calculation of the current divergence associated with the asymmetry. In a similar way, in the following chapter the radial currents are computed for the dawn sector of the jovian magnetosphere along various fly-by trajectories. Combining these radial current estimates with the azimuthal current model allows the total divergence of the equatorial current to be calculated. The current densities mapped to the ionosphere are surprisingly large, at ~1 µA m-2. In order to carry the current, the magnetospheric electrons must be strongly accelerated along the field lines into the ionosphere by voltages of the order of 100 kV. The resulting energy flux is enough to produce deep, bright (mega-Rayleigh) aurora and thus provides the first natural explanation of the main jovian auroral oval. In the final study, newly available data from the Galileo orbiter mission are combined with the fly-by data in order to compare them to the model derived in the first study. The model is then re-derived for the entire data set, which significantly improves the associated fractional errors.
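The divergence calculation described above follows from current continuity: in cylindrical coordinates, div I = (1/ρ) ∂(ρ I_ρ)/∂ρ + (1/ρ) ∂I_φ/∂φ, and any non-zero divergence of the height-integrated equatorial current must be fed by field-aligned currents. The sketch below evaluates this numerically for placeholder current profiles with a day-night asymmetry; the functional forms are not the thesis's empirical model.

```python
# A numerical sketch of the current-continuity step: the divergence of the
# height-integrated equatorial currents I_rho and I_phi must be balanced
# by field-aligned currents. The model functions are placeholders.
import numpy as np

rho = np.linspace(20, 50, 300)                  # radial distance [R_J]
phi = np.linspace(0, 2 * np.pi, 360)            # azimuth / local time
R, P = np.meshgrid(rho, phi, indexing="ij")

I_rho = 5.0 / R                                 # placeholder radial current
I_phi = (3.0 / R) * (1 + 0.4 * np.cos(P))       # day-night asymmetry

# div I = (1/rho) d(rho I_rho)/drho + (1/rho) dI_phi/dphi
d_rho = np.gradient(R * I_rho, rho, axis=0) / R
d_phi = np.gradient(I_phi, phi, axis=1) / R
div_I = d_rho + d_phi                           # fed by j_parallel

print("max |div I| over 20-50 R_J:", np.abs(div_I).max())
```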
Style APA, Harvard, Vancouver, ISO itp.
34

Adam, Constantin M. "A middleware for self-managing large-scale systems /". Stockholm : School of Electrical Engineering (Institutionen för Elektrotekniska system), KTH, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4178.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
35

Hendawy, Zeinab Mohamed. "Mathematical algorithms for optimisation of large scale systems". Thesis, City University London, 1989. http://openaccess.city.ac.uk/8248/.

Pełny tekst źródła
Streszczenie:
This research is concerned with the problem of optimisation of steady-state large-scale systems using mathematical models. Algorithms for on-line optimisation of interconnected industrial processes are investigated. The research covers two different kinds of algorithms, distinguished by the structure of the model used and the way real process information is incorporated to compensate for model-reality differences. The first class of algorithms is developed from the price method with global feedback information, which is mainly based on the normal Lagrangian function. Two existing algorithms are examined: the double iterative price correction mechanism and the augmented interaction balance method. Both methods use a double iterative coordination strategy and global feedback measurements from the real process, and are based respectively on the normal and the augmented Lagrangian functions. Hence, the first algorithm can only be recommended for application to convex problems. An algorithm, namely the augmented price correction mechanism, has been developed to extend the applicability of the previous price correction mechanism to non-convex problems. It is also applicable to convex problems, with the advantage of reducing the number of times that information is required from the real process. The second class of algorithms is known as integrated system optimisation and parameter estimation (ISOPE). The model used contains uncertain parameters, and the algorithm solves the optimisation and parameter estimation tasks repeatedly until no further improvement is obtained. ISOPE algorithms are developed in this research to cover problems with output-dependent constraints. Simulation results show the superiority of the double iterative algorithm over the single-loop method in considerably reducing the number of times that information is required from the real process, and hence saving on-line computing time. It is hoped that this work will provide useful information for implementing and further developing on-line steady-state optimisation techniques.
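The basic price-coordination loop underlying the first class of algorithms can be sketched in a few lines: each subsystem optimises locally for a given price, and the price is corrected from the interaction-balance error. The quadratic costs, coupling constraint, and step size below are illustrative assumptions; the thesis's double-iterative and augmented variants elaborate on this single loop.

```python
# A minimal sketch of price coordination (dual decomposition) for two
# interconnected subsystems sharing a resource. Costs, coupling, and
# step size are illustrative assumptions.
import numpy as np

# subsystem i minimises 0.5*a_i*(x_i - c_i)^2 + lam*x_i  =>  closed form
a = np.array([2.0, 1.0])
c = np.array([4.0, 3.0])
TOTAL = 5.0                          # coupling constraint: x_1 + x_2 = TOTAL

lam, step = 0.0, 0.5
for it in range(100):
    x = c - lam / a                  # local optima given the current price
    error = x.sum() - TOTAL          # interaction-balance violation
    if abs(error) < 1e-8:
        break
    lam += step * error              # price correction (subgradient step)

print(f"iterations={it}, price={lam:.4f}, allocation={x}")
```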
Style APA, Harvard, Vancouver, ISO itp.
36

Shepherd, Simon John. "A distributed security architecture for large scale systems". Thesis, University of Plymouth, 1992. http://hdl.handle.net/10026.1/2143.

Pełny tekst źródła
Streszczenie:
This thesis describes the research leading from the conception, through development, to the practical implementation of a comprehensive security architecture for use within, and as a value-added enhancement to, the ISO Open Systems Interconnection (OSI) model. The Comprehensive Security System (CSS) is arranged basically as an Application Layer service but can allow any of the ISO recommended security facilities to be provided at any layer of the model. It is suitable as an 'add-on' service to existing arrangements or can be fully integrated into new applications. For large scale, distributed processing operations, a network of security management centres (SMCs) is suggested that can help to ensure that system misuse is minimised, and that flexible operation is provided in an efficient manner. The background to the OSI standards is covered in detail, followed by an introduction to security in open systems. A survey of existing techniques in formal analysis and verification is then presented. The architecture of the CSS is described in terms of a conceptual model using agents and protocols, followed by an extension of the CSS concept to a large scale network controlled by SMCs. A new approach to formal security analysis is described which is based on two main methodologies. Firstly, every function within the system is built from layers of provably secure sequences of finite state machines, using a recursive function to monitor and constrain the system to the desired state at all times. Secondly, the correctness of the protocols generated by the sequences to exchange security information and control data between agents in a distributed environment is analysed in terms of a modified temporal Hoare logic. This is based on ideas concerning the validity of beliefs about the global state of a system as a result of actions performed by entities within the system, including the notion of timeliness. The two fundamental problems in number theory upon which the assumptions about the security of the finite state machine model rest are described, together with a comprehensive survey of the very latest progress in this area. Having assumed that the two problems will remain computationally intractable in the foreseeable future, the method is then applied to the formal analysis of some of the components of the Comprehensive Security System. A practical implementation of the CSS has been achieved as a demonstration system for a network of IBM Personal Computers connected via an Ethernet LAN, which fully meets the aims and objectives set out in Chapter 1. This implementation is described, and finally some comments are made on the possible future of research into security aspects of distributed systems.
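The notion of constraining a system to desired states with monitored finite state machines can be illustrated with a toy transition table that rejects any event outside the allowed sequences, as below. States, events, and the reset-to-safe-state policy are hypothetical; the thesis builds provably secure layered sequences rather than this single flat machine.

```python
# A toy sketch of a monitored finite state machine: every event is
# checked against an allowed transition table, and anything outside it
# forces the machine back to a safe state. States and events are made up.
ALLOWED = {
    ("idle", "authenticate"): "authed",
    ("authed", "request_key"): "keyed",
    ("keyed", "transfer"): "keyed",
    ("keyed", "logout"): "idle",
}

def run(events, state="idle"):
    for ev in events:
        nxt = ALLOWED.get((state, ev))
        if nxt is None:                     # constrain to desired states
            print(f"violation: {ev!r} in state {state!r}; resetting")
            state = "idle"
        else:
            state = nxt
    return state

print(run(["authenticate", "transfer", "request_key", "transfer", "logout"]))
```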
Style APA, Harvard, Vancouver, ISO itp.
37

Dialani, Vijay K. "Adaptive resource management in large scale distributed systems". Thesis, University of Southampton, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.423041.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
38

Adly, Noha. "Management of replicated data in large scale systems". Thesis, University of Cambridge, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.388345.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
39

Panneer, Selvan Vaina Malar. "Energy efficiency maximisation in large scale MIMO systems". Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/16052.

Pełny tekst źródła
Streszczenie:
The power usage of the communication technology industry and the associated energy-related pollution are becoming major societal and economic concerns. These concerns have stimulated academia and industry to intense activity in the new research area of green cellular networks. Bandwidth Efficiency (BE) is one of the most important metrics for selecting candidate technologies for next-generation wireless communications systems. Nevertheless, an important goal is to design new network architectures and technologies that meet the explosive growth in cellular data demand without increasing power consumption. As a result, Energy Efficiency (EE) has become another significant metric for evaluating the performance of wireless communications systems. MIMO technology has drawn much attention in wireless communication, as it gives substantial increases in link range and throughput without an additional increase in bandwidth or transmit power. Multi-user MIMO (MU-MIMO) refers to the setting in which an evolved Base Station equipped with multiple antennas communicates with several User Terminals (UEs) at the same time. MU-MIMO is capable of improving either the reliability or the BE by improving the diversity or multiplexing gains, respectively. A recently proposed idea in MU-MIMO refers to systems that use hundreds of antennas to serve dozens of UEs simultaneously. This so-called Large-Scale MIMO (LS MIMO) is regarded as a candidate technique for future wireless communication systems. An analysis is conducted to investigate the performance of the uplink and downlink of LS MIMO systems with different linear processing techniques at the base station. The most common precoding and receive combining schemes are considered: minimum mean squared error (MMSE), maximum ratio transmission/combining (MRT/MRC), and zero-forcing (ZF) processing. The fundamental problem answered is how to select the number of BS antennas M, the number of active UEs K, and the transmit power to cover a given area with maximal EE, where EE is defined as the number of bits transferred per Joule of energy. A new power consumption model is proposed to emphasise that the real power scales faster than linearly with M and K. The new power consumption model is utilised for deriving closed-form EE-maximising values of the number of BS antennas, the number of active UEs, and the transmit power, under the assumption that ZF processing is deployed in the uplink and downlink transmissions for analytic convenience. This analysis is then extended to the imperfect CSI case and to symmetric multi-cell scenarios. These expressions provide valuable design insights into the interaction between system parameters, the propagation environment, and the different components of the power consumption model. Closed-form expressions for the optimal number of UEs, number of BS antennas, and transmit power are obtained analytically only for ZF with perfect channel state information (CSI). Numerical results are provided (a) for all the investigated schemes with perfect CSI in a single-cell scenario, and (b) for ZF with imperfect CSI in a multi-cell scenario.
The simulation results show that (a) an LS MIMO system with 100-200 BS antennas is the right size for energy-efficiency maximisation; (b) these antennas should serve a number of active UEs of the same order; (c) since the circuit power increases with the number of BS antennas, the transmit power should increase accordingly; (d) the radiated power per antenna is in the range of 10-100 mW and decreases with the number of BS antennas; (e) ZF processing provides the highest EE in all the scenarios, owing to active interference suppression at affordable complexity. These highly relevant results indicate that LS MIMO is the technique to achieve high EE in future cellular networks.
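The EE trade-off can be reproduced numerically under textbook assumptions: the ZF sum rate with perfect CSI, K log2(1 + p(M-K)/K) with unit noise power, and an affine circuit-power model. The coefficients in the sketch below are illustrative, not the thesis's calibrated values.

```python
# A sketch of the EE trade-off, assuming the standard ZF sum rate with
# perfect CSI and an affine circuit-power model. Coefficients are
# illustrative assumptions.
import numpy as np

def energy_efficiency(M, K, p):
    """bits/Joule for M antennas, K users, total transmit power p (unit noise)."""
    rate = K * np.log2(1 + p * (M - K) / K)          # bits/s/Hz
    power = p / 0.4 + 1.0 + 0.1 * M + 0.3 * K        # PA efficiency + circuits
    return rate / power

best = max((energy_efficiency(M, K, p), M, K, p)
           for M in range(10, 201, 10)
           for K in range(5, 101, 5) if K < M
           for p in np.logspace(-1, 2, 30))
print("EE-optimal (M, K, p):", best[1:], f"EE={best[0]:.2f}")
```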
Style APA, Harvard, Vancouver, ISO itp.
40

Favela, Jesus. "Organizational memory management for large-scale systems development". Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/12238.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
41

Liem, Rhea Patricia. "Surrogate modeling for large-scale black-box systems". Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41559.

Pełny tekst źródła
Streszczenie:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 105-110).
This research introduces a systematic method to reduce the complexity of large-scale black-box systems for which the governing equations are unavailable. For such systems, surrogate models are critical for many applications, such as Monte Carlo simulations; however, existing surrogate modeling methods often are not applicable, particularly when the dimension of the input space is very high. In this research, we develop a systematic approach to represent the high-dimensional input space of a large-scale system by a smaller set of inputs. This collection of representatives is called a multi-agent collective, forming a surrogate model with which an inexpensive computation replaces the original complex task. The mathematical criteria used to derive the collective aim to avoid overlapping of characteristics between representatives, in order to achieve an effective surrogate model and avoid redundancies. The surrogate modeling method is demonstrated on a flight inventory that contains flight data corresponding to 82 aircraft types. Ten aircraft types are selected by the method to represent the full flight inventory for the computation of fuel burn estimates, yielding an error between outputs from the surrogate and full models of just 2.08%. The ten representative aircraft types are selected by first aggregating similar aircraft types together into agents, and then selecting a representative aircraft type for each agent. In assessing the similarity between aircraft types, the characteristic of each aircraft type is determined from available flight data instead of solving the fuel burn computation model, which makes the assessment procedure inexpensive. Aggregation criteria are specified to quantify the similarity between aircraft types, along with a stringency that controls the tradeoff between the two competing objectives in the modeling: the number of representatives and the estimation error. The surrogate modeling results are compared to a model obtained via manual aggregation, that is, aggregation of aircraft types based on engineering judgment. The surrogate model derived using the systematic approach yields fewer representatives in the collective, giving a surrogate model with lower computational cost while achieving better accuracy. Further, the systematic approach eliminates the subjectivity that is inherent in the manual aggregation method. The surrogate model is also applied to other flight inventories, yielding errors of similar magnitude to the case when the reference flight inventory is considered.
by Rhea Patricia Liem.
S.M.
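The aggregation-then-representative-selection step can be sketched with standard clustering: group aircraft types by similarity of characteristics drawn from flight data, then pick the medoid of each group. The features, linkage method, and distance threshold (playing the role of the stringency) below are assumptions for illustration.

```python
# A sketch of forming a multi-agent collective by aggregation: cluster
# aircraft types on flight-data characteristics, then pick one medoid
# per cluster. Features and thresholds are illustrative assumptions.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
features = rng.normal(size=(82, 3))       # e.g. weight, range, burn rate
Z = linkage(features, method="average")
labels = fcluster(Z, t=1.5, criterion="distance")   # "stringency" knob

representatives = []
for lab in np.unique(labels):
    members = np.where(labels == lab)[0]
    sub = features[members]
    medoid = members[cdist(sub, sub).sum(axis=1).argmin()]
    representatives.append(medoid)

print(f"{len(representatives)} representatives for 82 aircraft types")
```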
Style APA, Harvard, Vancouver, ISO itp.
42

Zhang, Richard Yi. "Robust stability analysis for large-scale power systems". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108846.

Pełny tekst źródła
Streszczenie:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 145-154).
Innovations in electric power systems, such as renewable energy, demand-side participation, and electric vehicles, are all expected to increase variability and uncertainty, making stability verification more challenging. This thesis extends the technique of robust stability analysis to large-scale electric power systems under uncertainty. In the first part of this thesis, we examine the use of the technique to solve real problems faced by grid operators. We present two case studies: small-signal stability for distributed renewables on the IEEE 118-bus test system, and large-signal stability for a microgrid system. In each case study, we show that robust stability analysis can be used to compute stability margins for entire collections of uncertain scenarios. In the second part of this thesis, we develop scalable algorithms to solve robust stability analysis problems on large-scale power systems. We use preconditioned iterative methods to solve the Newton direction computation in the interior-point method, in order to avoid the O(n⁶) time complexity associated with a dense-matrix approach. The per-iteration costs of the iterative methods are reduced to O(n³) through a hierarchical block-diagonal-plus-low-rank structure in the data matrices. We provide evidence that the methods converge to an ε-accurate solution in O(1/√ε) iterations, and characterize two broad classes of problems for which the enhanced convergence is guaranteed.
by Richard Yi Zhang.
Ph. D.
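The cost saving from block-diagonal-plus-low-rank structure can be seen in miniature with the Woodbury identity: a diagonal-plus-rank-k system solves in O(nk²) rather than O(n³). The sketch below is a simplified instance of that structural trick, not the thesis's hierarchical preconditioner.

```python
# Solving (D + U V^T) x = b with D diagonal and U, V of rank k << n via
# the Woodbury identity, avoiding a dense O(n^3) factorisation.
import numpy as np

rng = np.random.default_rng(0)
n, k = 2000, 5
d = rng.uniform(1, 2, n)                  # diagonal of D
U = rng.normal(size=(n, k))
V = rng.normal(size=(n, k))
b = rng.normal(size=n)

# Woodbury: (D + U V^T)^-1 b = D^-1 b - D^-1 U (I + V^T D^-1 U)^-1 V^T D^-1 b
Dinv_b = b / d
Dinv_U = U / d[:, None]
core = np.eye(k) + V.T @ Dinv_U           # small k-by-k system
x = Dinv_b - Dinv_U @ np.linalg.solve(core, V.T @ Dinv_b)

residual = (d * x + U @ (V.T @ x)) - b
print("residual norm:", np.linalg.norm(residual))
```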
Style APA, Harvard, Vancouver, ISO itp.
43

Rodriquez-Toral, Miguel Angel. "Synthesis and optimisation of large-scale utility systems". Thesis, University of Edinburgh, 1999. http://hdl.handle.net/1842/12878.

Pełny tekst źródła
Streszczenie:
This research is focused on the simulation, optimisation and synthesis of utility systems, in particular combined heat and power (CHP) systems. These systems involve gas and steam turbines, steam generation at different pressure levels, condensing equipment and auxiliaries. CHP systems are of substantial industrial interest for the efficient supply of heat and power. As they are highly integrated processes, their size and implicit complexity require the use of an equation-oriented (EO) framework including the models developed in this research. An EO mathematical model for the simulation, optimisation and synthesis of CHP systems has been developed. It includes models for the simultaneous solution of all process streams, every major piece of equipment, and investment and operating costs. Several EO simulation examples, ranging from simple unit operations, through a whole real cogeneration plant involving a commercial gas turbine with 1275 variables and equations, up to a synthesis model with 3042 variables for a fixed structure, are used to demonstrate the applicability of the CHP model and the EO framework. A number of energy and economic optimisation problems were solved using an SQP (Sequential Quadratic Programming) method. Both the EO model and the use of the SQP code were fully explored through experiments on a model with two steam turbines. In addition, the following utility systems were optimised: a model of a combined heat and power plant; industrial-size problems including a model of a cogeneration plant currently in operation; and a synthesis model for a fixed structure. An important contribution made to solving EO simulation problems for CHP systems was to obtain converged solutions for plant sections, which were then used as starting guesses for larger plant sections until whole systems are simulated. This strategy was also used to provide a warm starting guess for the efficient solution of large continuous optimisation problems of CHP systems.
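The warm-starting strategy can be illustrated on a toy dispatch problem: converge a reduced subproblem first, then start the full SQP solve from that point. The sketch below uses SciPy's SLSQP as a stand-in SQP solver; the costs and constraints are placeholders for the full equation-oriented CHP model.

```python
# A sketch of SQP with a warm start from a subproblem, on a toy
# two-turbine dispatch problem. Costs and constraints are placeholders.
import numpy as np
from scipy.optimize import minimize

def cost(x):                         # fuel cost of two steam turbines
    return 0.5 * x[0] ** 2 + 0.3 * x[1] ** 2 + 2 * x[0] + x[1]

demand = {"type": "eq", "fun": lambda x: x[0] + x[1] - 10.0}
bounds = [(0.0, 8.0), (0.0, 8.0)]

# Stage 1: converge a reduced problem (turbine 1 alone, demand eliminated)
sub = minimize(lambda x: cost([x[0], 10.0 - x[0]]), x0=[5.0],
               bounds=[bounds[0]], method="SLSQP")

# Stage 2: warm-start the full SQP solve from the subproblem solution
x0 = [sub.x[0], 10.0 - sub.x[0]]
full = minimize(cost, x0, method="SLSQP", bounds=bounds,
                constraints=[demand])
print(full.x, full.fun)
```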
Style APA, Harvard, Vancouver, ISO itp.
44

Huynh, Phuong. "Stability analysis of large-scale power electronics systems". Diss., Virginia Tech, 1994. http://hdl.handle.net/10919/40205.

Pełny tekst źródła
Streszczenie:
A new methodology is proposed to investigate the large-signal stability of interconnected power electronics systems. The approach consists of decoupling the system into a source subsystem and a load subsystem, and stability of the entire system can be analyzed based on investigating the feedback loop formed by the interconnected source/load system. The proposed methodology requires two stages: (1) since the source and the load are unknown nonlinear subsystems, system identification, which consists of isolating each subsystem into a series combination of a linear part and a nonlinear part, must be performed, and (2) stability analysis of the interconnected system is conducted thereafter based on a developed stability criterion suitable for the nonlinear interconnected-block-structure model. Applicability of the methodology is verified through stability analysis of PWM converters and a typical power electronics system.
Ph. D.
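The interconnected source/load feedback idea appears in its classic small-signal form as the minor loop gain T = Z_out,source / Z_in,load; the thesis develops a large-signal, nonlinear criterion, but the linear check below conveys the structure. The impedances are simple placeholders, and |T| < 1 everywhere is used as a sufficient (Middlebrook-style) condition.

```python
# A small-signal sketch of the interconnected source/load feedback loop:
# check the minor loop gain T = Z_out_source / Z_in_load over frequency.
# The impedances are simple placeholders, not a real converter model.
import numpy as np

w = np.logspace(1, 6, 2000)                 # rad/s
s = 1j * w

# placeholder source output impedance (damped LC filter) and resistive load
r, L, C, R_load = 0.1, 1e-3, 1e-4, 5.0
Z_out = (r + s * L) / (1 + s * r * C + s**2 * L * C)   # (r + sL) || 1/(sC)
Z_in = R_load

T = Z_out / Z_in                            # minor loop gain
print("max |T| =", np.abs(T).max())
print("sufficient condition |T| < 1 everywhere:", bool(np.all(np.abs(T) < 1)))
```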
Style APA, Harvard, Vancouver, ISO itp.
45

Sakai, Kazuya. "Security and Privacy in Large-Scale RFID Systems". The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1386006971.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
46

Buccapatnam, Tirumala Swapna. "Control of Large Scale Networked Systems Under Uncertainty". The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1416764356.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
47

Hosny, Sameh Shawky Ibrahim. "LARGE SCALE LINEAR OPTIMIZATION FOR WIRELESS COMMUNICATION SYSTEMS". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1482232039296433.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
48

Li, Sen. "Transactive Control for Large-Scale Cyber-Physical Systems". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1511397616555155.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
49

Zhao, Lin. "Aggregate Modeling of Large-Scale Cyber-Physical Systems". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1512111263124549.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
50

Qian, Chen. "Efficient cardinality counting for large-scale RFID systems /". View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?CSED%202008%20QIAN.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.