Theses on the topic "Dynamics, Distributed Computing"

To see the other types of publications on this topic, follow the link: Dynamics, Distributed Computing.

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the top 50 theses for your research on the topic "Dynamics, Distributed Computing".

Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online, whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Weed, Richard Allen. "Computational strategies for three-dimensional flow simulations on distributed computing systems". Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/12154.

Full text
2

Rizzo, Sara. "Simple Dynamics as Algorithms and Models". Doctoral thesis, Gran Sasso Science Institute, 2021. http://hdl.handle.net/20.500.12571/21452.

Full text
Abstract:
The theory of Distributed Computing copes with systems composed of computational entities able to interact with each other in order to reach a common goal in the most efficient way. Distributed models are used to study many phenomena that come from different disciplines such as computer science, physics, modern social science and biology. Common features of such systems are the lack of central control, a huge number of involved individuals, limited communication and computational power, presence of communication noise and fault propensity. Natural systems are able to solve very challenging tasks, relying on limited communication and computational resources, with indistinguishable entities. Moreover, at the right level of abstraction, natural and artificial systems solve the same problems: consensus, synchronization, fault tolerance and noise overcoming are some of them. Recently, these observations led researchers in the field to focus on the design and the analysis of simple and lightweight distributed protocols. This line of research includes the analysis of those processes that go by the name of Dynamics. Dynamics are simple stochastic processes on anonymous networks that evolve in rounds and in which each node has an initial state and updates it over time according to a function of its state and the states of its neighbors. Measures of interest in this kind of process are the number of rounds needed to achieve the desired configuration, the storage capacity of every node and the size of the exchanged messages. In this thesis, we move a step forward in the analysis of dynamics and their ability to solve community detection and consensus problems. In the first part, we formally prove the effectiveness of the Averaging dynamics in solving the community detection problem on a class of graphs containing a hidden k-partition and characterized by a mild form of regularity.
In the second part, we study the consensus time of some dynamics based on majority rules, introducing different forms of bias. In particular, in the context of opinion dynamics we define a unified framework to investigate different update rules when a bias toward one of the two possible opinions exists. The results show that the consensus time is strongly affected by the underlying network structure and opinion dynamics, and that their interplay may elicit quite different collective behaviors. Finally, we analyze the k-Majority dynamics in a biased communication model where nodes have some probability of seeing a fixed state in their neighbors, as if in the presence of binary asymmetric communication channels. In this setting, we identify sharp phase transitions on the bias and on the initial configuration.
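The round-based update rule described in this abstract is easy to state in code. The following sketch (an illustration written for this page, not code from the thesis; the complete graph, the seed and the 60/40 opinion split are arbitrary choices) simulates the 3-Majority dynamics and counts the rounds needed to reach consensus:

```python
import random

def three_majority_consensus(neighbors, states, rng, max_rounds=10_000):
    """Simulate the 3-Majority dynamics: in every round, each node
    samples three random neighbors and adopts the majority of their
    states (with two opinions and three samples, no tie can occur).
    Returns (final_state, rounds) once all nodes agree, or
    (None, max_rounds) if consensus is not reached in time."""
    nodes = list(neighbors)
    for r in range(1, max_rounds + 1):
        new = {}
        for u in nodes:
            sample = [states[rng.choice(neighbors[u])] for _ in range(3)]
            new[u] = max(set(sample), key=sample.count)
        states = new  # synchronous update: all nodes switch together
        if len(set(states.values())) == 1:
            return states[nodes[0]], r
    return None, max_rounds

# Complete graph on n nodes with a biased initial configuration.
rng = random.Random(42)
n = 200
neighbors = {u: [v for v in range(n) if v != u] for u in range(n)}
states = {u: 0 if u < 120 else 1 for u in range(n)}  # 60% hold opinion 0
winner, rounds = three_majority_consensus(neighbors, states, rng)
```

On a complete graph with a clear initial majority, consensus is typically reached in a handful of rounds; the number of rounds is exactly the kind of quantity such analyses track.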
3

Bangalore, Ashok K. "Computational fluid dynamic studies of high lift rotor systems using distributed computing". Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/12949.

Full text
4

Liu, Xing. "High-performance algorithms and software for large-scale molecular simulation". Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53487.

Full text
Abstract:
Molecular simulation is an indispensable tool in many different disciplines such as physics, biology, chemical engineering, materials science, and drug design. Performing large-scale molecular simulation is of great interest to biologists and chemists, because many important biological and pharmaceutical phenomena can only be observed in very large molecular systems and after sufficiently long dynamics. On the other hand, molecular simulation methods usually have very steep computational costs, which limits current molecular simulation studies to relatively small systems. The gap between the scale of molecular simulation that existing techniques can handle and the scale of interest has become a major barrier to applying molecular simulation to real-world problems. Studying large-scale molecular systems by simulation therefore requires developing highly parallel simulation algorithms and constantly adapting them to rapidly changing high-performance computing architectures. However, many existing algorithms and codes for molecular simulation date from more than a decade ago, when they were designed for sequential computers or early parallel architectures. They may not scale efficiently and do not fully exploit the features of today's hardware. Given the rapid evolution in computer architectures, the time has come to revisit these molecular simulation algorithms and codes. In this thesis, we address the computational challenges of large-scale molecular simulation by presenting both high-performance algorithms and software for two important molecular simulation applications, Hartree-Fock (HF) calculations and hydrodynamics simulations, on highly parallel computer architectures. The algorithms and software presented in this thesis have been used by biologists and chemists to study problems that they were unable to solve with existing codes.
The parallel techniques and methods developed in this work can also be applied to other molecular simulation applications.
5

Ward, Koeck Alan. "Modeling and distributed computing of snow transport and delivery on meso-scale in a complex orography". Doctoral thesis, Universitat Oberta de Catalunya, 2015. http://hdl.handle.net/10803/327598.

Full text
Abstract:
This study describes the working principles and validation of a Computational Fluid Dynamics computer model of snowfall over a complex orography, for optimizing ski slopes or other installations according to local weather patterns. The spatial domain is discretized, focusing on challenging topography that tends to produce deformed mesh volumes. A novel measure of mesh deformation is defined and applied to discuss different strategies of mesh optimization, with the goal of facilitating parallel computer solutions of the Navier-Stokes fluid transport equations. A computer model is designed to solve the incompressible turbulent Navier-Stokes equations. The efficiency of the CFD computational toolkit is discussed, as is the degree of coupling required between the snow and air phases of the fluid during the computer modeling of snowfall. A two-fluid (Euler-Lagrangian) methodology is implemented. Applications of such snowfall models are discussed in relation to ski-slope planning and high-altitude road snow clearing. An application of the model to wind energy production planning is presented.
6

Buch, Mundó Ignasi 1984. "Investigation of protein-ligand interactions using high-throughput all-atom molecular dynamics simulations". Doctoral thesis, Universitat Pompeu Fabra, 2012. http://hdl.handle.net/10803/101407.

Full text
Abstract:
Investigation of protein-ligand interactions has been a long-standing application of molecular dynamics (MD) simulations, given its importance to drug design. However, relevant timescales for biomolecular motions are orders of magnitude longer than commonly accessible simulation times. Adequate sampling of biomolecular phase space has therefore been a major challenge in computational modeling that has limited its applicability. The primary objective of this thesis has been the brute-force simulation of costly protein-ligand binding modeling experiments on a large computing infrastructure. We have built and developed GPUGRID: a peta-scale distributed computing infrastructure for high-throughput MD simulations. We have used GPUGRID for the calculation of protein-ligand binding free energies as well as for the reconstruction of binding processes through unguided ligand-binding simulations. The promising results presented herein may lay the groundwork for future applications of high-throughput MD simulations to drug discovery programs.
7

Gao, Yiran. "Dynamic inter-domain distributed computing". Thesis, Queen Mary, University of London, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.510898.

Full text
8

Kelley, Ian Robert. "Data management in dynamic distributed computing environments". Thesis, Cardiff University, 2012. http://orca.cf.ac.uk/44477/.

Full text
Abstract:
Data management in parallel computing systems is a broad and increasingly important research topic. As network speeds have surged, so too has the movement to transition storage and computation loads to wide-area network resources. The Grid, the Cloud, and Desktop Grids all represent different aspects of this movement towards highly-scalable, distributed, and utility computing. This dissertation contends that a peer-to-peer (P2P) networking paradigm is a natural match for data sharing within and between these heterogeneous network architectures. Peer-to-peer methods such as dynamic discovery, fault-tolerance, scalability, and ad-hoc security infrastructures provide excellent mappings for many of the requirements in today’s distributed computing environment. In recent years, volunteer Desktop Grids have seen a growth in data throughput as application areas expand and new problem sets emerge. These increasing data needs require storage networks that can scale to meet future demand while also facilitating expansion into new data-intensive research areas. Current practices are to mirror data from centralized locations, a technique that is not practical for growing data sets, dynamic projects, or data-intensive applications. The fusion of Desktop and Service Grids provides an ideal use-case to research peer-to-peer data distribution strategies in a hybrid environment. Desktop Grids have a data management gap, while integration with Service Grids raises new challenges with regard to cross-platform design. The work undertaken here is two-fold: first it explores how P2P techniques can be leveraged to meet the data management needs of Desktop Grids, and second, it shows how the same distribution paradigm can provide migration paths for Service Grid data. 
The result of this research is a Peer-to-Peer Architecture for Data-Intensive Cycle Sharing (ADICS) that is capable not only of distributing volunteer computing data, but also of providing a transitional platform and storage space for migrating Service Grid jobs to Desktop Grid environments.
9

Fletcher, Luke. "A Dynamic Networked Browser Environment for Distributed Computing". Honours thesis, University of Tasmania, 2002. https://eprints.utas.edu.au/38/1/Java_Distributed_Net_Thesis.pdf.

Full text
Abstract:
Many organisations have a large number of computers with varying usage patterns. Some of these machines, at different locations, are often free from time to time, doing very little useful computation or none at all. It is at these times that this dynamically changing environment of machines can be used for a more useful task. This project reports the development and feasibility testing of a dynamic distributed computing environment, achieved by making use of ubiquitous web browsers to harness these underutilised computers, thereby taking the idea of distributed computing away from the traditional paradigm of fixed hosts with which it is usually associated.
10

Lepler, Joerg. "Creating dynamic application behavior for distributed performance analysis". Thesis, Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/8201.

Full text
11

Tosi, Riccardo. "Towards stochastic methods in CFD for engineering applications". Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/673389.

Full text
Abstract:
Recent developments in high performance computing capabilities allow solving modern science problems employing sophisticated computational techniques. However, it is necessary to ensure the efficiency of state of the art computational methods to take full advantage of modern technology capabilities. In this thesis we propose uncertainty quantification and high performance computing strategies to solve fluid dynamics systems characterized by uncertain conditions and unknown parameters. We verify that such techniques allow us to take decisions faster and ensure the reliability of simulation results. Different sources of uncertainty can be relevant in computational fluid dynamics applications. For example, we consider the shape and time variability of boundary conditions, as well as the randomness of external forces acting on the system. From a practical point of view, one has to estimate statistics of the flow, and a failure probability convergence criterion must be satisfied by the statistical estimator of interest to assess reliability. We use hierarchical Monte Carlo methods as an uncertainty quantification strategy to solve stochastic systems. Such algorithms present three levels of parallelism: over levels, over realizations per level, and on the solution of each realization. We propose an improvement by adding a new level of parallelism, between batches, where each batch has its own independent hierarchy. These new methods are called asynchronous hierarchical Monte Carlo, and we demonstrate that such techniques take full advantage of the concurrency capabilities of modern high performance computing environments, while preserving the same reliability as state of the art methods. Moreover, we focus on reducing the wall clock time required to compute statistical estimators of chaotic incompressible flows.
Our approach consists of replacing a single long-term simulation with an ensemble of multiple independent realizations, which are run in parallel with different initial conditions. The error analysis of the statistical estimator leads to the identification of two error contributions: the initialization bias and the statistical error. We propose an approach to systematically detect the burn-in time to minimize the initialization bias, accompanied by strategies to reduce the simulation cost. Finally, we propose an integration of Monte Carlo and ensemble averaging methods for reducing the wall clock time required to compute statistical estimators of time-dependent stochastic turbulent flows. A single long-term Monte Carlo realization is replaced by an ensemble of multiple independent realizations, each characterized by the same random event and different initial conditions. We consider different systems relevant in the computational fluid dynamics engineering field, such as realistic wind flow around high-rise buildings or compressible potential flow problems. By solving such numerical examples, we demonstrate the accuracy, efficiency, and effectiveness of our proposals.
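The ensemble-averaging idea in this abstract can be sketched with a toy example. The snippet below is an illustration written for this page, not the thesis code: the chaotic logistic map stands in for a turbulent flow quantity, and the member count, window length and burn-in value are arbitrary. It replaces one long time average by an average over short, independent realizations, discarding an initial burn-in window from each member to curb the initialization bias:

```python
import random
import statistics

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the chaotic logistic map x -> r*x*(1-x) as a toy
    stand-in for a chaotic flow quantity of interest."""
    xs = []
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def ensemble_mean(n_members, steps, burn_in, rng):
    """Replace one long time average by an ensemble of short,
    independent realizations started from random initial conditions.
    The first `burn_in` samples of each member are discarded to reduce
    the initialization bias; the members could run in parallel."""
    means = []
    for _ in range(n_members):
        traj = logistic_trajectory(rng.uniform(0.01, 0.99), steps)
        means.append(statistics.fmean(traj[burn_in:]))
    return statistics.fmean(means)

rng = random.Random(0)
est = ensemble_mean(n_members=32, steps=2000, burn_in=200, rng=rng)
```

Here the long-run mean of the map is known analytically to be 0.5, so the quality of the estimator and the effect of the burn-in window can be checked directly.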
Civil engineering
12

Azevedo, Perdicoulis Teresa-Paula C. "A distributed model for dynamic optimisation of networks". Thesis, University of Salford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.300499.

Full text
13

Kaya, Ozgur. "Efficient Scheduling In Distributed Computing On Grid". Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607928/index.pdf.

Full text
Abstract:
Today, many computing resources distributed geographically are idle much of the time. The aim of grid computing is to collect these resources into a single system, which helps to solve problems that are too complex for a single PC. Scheduling plays a critical role in the efficient and effective management of resources to achieve high performance in a grid computing environment. Due to the heterogeneity and highly dynamic nature of the grid, developing scheduling algorithms for grid computing involves some challenges. In this work, we concentrate on efficient scheduling of distributed tasks on the grid. We propose a novel scheduling heuristic for bag-of-tasks applications. The proposed algorithm primarily makes use of history-based runtime estimation: the history stores information about applications whose runtimes and other specific properties were recorded during previous executions, and scheduling decisions are made according to the similarity between applications. Defining similarity is an important aspect of this approach, apart from the best resource allocation. The aim of this scheduling algorithm (HISA, History Injected Scheduling Algorithm) is to define and find the similarity, and to assign the job to the most suitable resource by making use of that similarity. In our evaluation, we use a Grid simulation tool called GridSim. A number of intensive experiments with various simulation settings have been conducted. Based on the experimental results, the effectiveness of the HISA scheduling heuristic is studied and compared to the other scheduling algorithms embedded in GridSim. The results show that history injection improves the performance of future job submissions on a grid.
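The history-based approach can be sketched as follows. This is a hypothetical illustration, not HISA itself: the feature set, the similarity measure and the completion-time model are invented for the example, whereas the thesis defines its own notions of similarity and resource suitability:

```python
from dataclasses import dataclass

@dataclass
class Record:
    features: dict   # e.g. {"app": "render", "input_mb": 120}
    runtime: float   # runtime observed on a reference resource

def similarity(a, b):
    """Toy similarity: fraction of feature keys with matching values."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def estimate_runtime(history, job, k=3):
    """Predict a job's runtime from the k most similar past records."""
    ranked = sorted(history, key=lambda r: similarity(r.features, job),
                    reverse=True)
    top = ranked[:k]
    return sum(r.runtime for r in top) / len(top)

def schedule(history, job, resources):
    """Assign the job to the resource with the earliest estimated
    completion time: queued work plus predicted runtime scaled by
    the resource's relative speed."""
    est = estimate_runtime(history, job)
    return min(resources, key=lambda r: r["queued"] + est / r["speed"])

history = [
    Record({"app": "render", "input_mb": 100}, 50.0),
    Record({"app": "render", "input_mb": 200}, 95.0),
    Record({"app": "sim", "input_mb": 100}, 300.0),
]
job = {"app": "render", "input_mb": 100}
resources = [{"name": "A", "queued": 40.0, "speed": 1.0},
             {"name": "B", "queued": 0.0, "speed": 0.5}]
best = schedule(history, job, resources)
```

Even in this toy form it shows the two ingredients the abstract names: a similarity definition over recorded executions, and a resource-allocation rule that consumes the resulting runtime estimate.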
14

Dramlitsch, Thomas. "Distributed computations in a dynamic, heterogeneous Grid environment". PhD thesis, Universität Potsdam, 2002. http://opus.kobv.de/ubp/volltexte/2005/79/.

Full text
Abstract:
The ever denser and faster networking of computers and computing centres over high-speed networks enables a new kind of distributed scientific computing, in which geographically far-flung computing capacities can be combined into a single whole. The virtual supercomputer that emerges in this way, itself composed of several large machines, can be used to compute problems for which the individual machines are too small. The problems that cannot be solved numerically with today's computing capacities span all areas of modern science, from astrophysics, molecular physics, bioinformatics and meteorology to number theory and fluid dynamics, to name just a few.

Depending on the type of problem and the solution method, such "meta-computations" are more or less difficult. In general, such computations become harder and less efficient the more communication there is between the individual processes (or processors). The reason is that bandwidths are two to four orders of magnitude higher, and latencies correspondingly lower, between two processors on the same supercomputer or cluster than between processors hundreds of kilometres apart.

Nevertheless, a time is now dawning in which it is possible to run even communication-intensive programs on such virtual supercomputers. A large class of communication- and computation-intensive programs is the one concerned with solving differential equations by means of finite differences. It is exactly this class of programs, and its operation on a virtual supercomputer, that the present dissertation treats. Methods for carrying out such distributed computations more efficiently are developed, analysed and implemented. The focus is on analysing existing, classical parallelization algorithms and extending them so that they exploit available information about machines and networks (e.g. provided by the Globus Toolkit) for more efficient parallelization. As far as we know, such additional information is hardly used in relevant programs, since the majority of all parallelization algorithms were implicitly designed for execution on supercomputers or clusters.
In order to face the rapidly increasing need for computational resources of various scientific and engineering applications, one has to think of new ways to make more efficient use of the world's current computational resources. In this respect, the growing speed of wide area networks has made a new kind of distributed computing possible: Metacomputing or (distributed) Grid computing. This is a rather new and uncharted field in computational science. The rapidly increasing speed of networks even outperforms the average increase of processor speed: processor speeds double on average every 18 months, whereas network bandwidths double every 9 months. Due to this development of local and wide area networks, Grid computing will certainly play a key role in the future of parallel computing.

This type of distributed computing, however, differs from traditional parallel computing in many ways, since it has to deal with many problems that do not occur in classical parallel computing. Those problems include, for example, heterogeneity, authentication and slow networks, to mention only a few. Some of those problems, e.g. the allocation of distributed resources along with the provision of information about these resources to the application, have already been addressed by the Globus software.

Unfortunately, as far as we know, hardly any application or middleware software takes advantage of this information, since most parallelizing algorithms for finite differencing codes are implicitly designed for execution on a single supercomputer or cluster. We show that although it is possible to apply classical parallelizing algorithms in a Grid environment, in most cases the observed efficiency of the executed code is very poor.

In this work we close this gap. In this thesis, we will
- show that an execution of classical parallel codes in Grid environments is possible but very slow
- analyze this situation of bad performance, nail down bottlenecks in communication, remove unnecessary overhead and other reasons for low performance
- develop new and advanced parallelization algorithms that are aware of the Grid environment, in order to generalize the traditional parallelization schemes
- implement and test these new methods, replacing the classical ones and comparing against them
- introduce dynamic strategies that automatically adapt the running code to the nature of the underlying Grid environment.

The more performance one achieves for a single application by manually tuning it for a Grid environment, the lower the chance that those changes are widely applicable to other programs. In our analysis, as well as in our implementation, we tried to keep the balance between high performance and generality. None of our changes directly affects code at the application level, which makes our algorithms applicable to a whole class of real-world applications.

The implementation of our work is done within the Cactus framework using the Globus toolkit, since we think that these are the most reliable and advanced programming frameworks for supporting computations in Grid environments. On the other hand, however, we tried to be as general as possible, i.e. all methods and algorithms discussed in this thesis are independent of Cactus or Globus.
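The central idea above, adapting a classical parallelization scheme to a heterogeneous Grid by partitioning the computational domain in proportion to measured node performance instead of uniformly, can be sketched as follows. This is a hypothetical illustration, not the thesis's actual Cactus/Globus code; the 1D slab decomposition and the node-speed figures are assumptions:

```python
def partition_domain(n_points, node_speeds):
    """Split a 1D finite-difference grid among heterogeneous nodes,
    giving each node a slab proportional to its measured speed."""
    total = sum(node_speeds)
    sizes = [int(n_points * s / total) for s in node_speeds]
    # Distribute rounding leftovers to the fastest nodes first.
    leftover = n_points - sum(sizes)
    for i in sorted(range(len(sizes)), key=lambda i: -node_speeds[i])[:leftover]:
        sizes[i] += 1
    # Convert slab sizes to (start, end) index ranges.
    bounds, start = [], 0
    for sz in sizes:
        bounds.append((start, start + sz))
        start += sz
    return bounds

# A uniform split would give each of 3 nodes 333-334 points;
# speed-aware splitting hands the fast node a larger slab.
print(partition_domain(1000, [4.0, 1.0, 1.0]))  # → [(0, 667), (667, 834), (834, 1000)]
```

With uniform splitting, the two slow nodes would dominate each ghost-zone exchange step; weighting slab sizes by speed is the simplest form of the resource-aware generalization the thesis pursues.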
Styles APA, Harvard, Vancouver, ISO, etc.
15

Yu, Lihua. « Optimization of multi-scale decision-oriented dynamic systems and distributed computing ». Diss., The University of Arizona, 2004. http://hdl.handle.net/10150/280505.

Texte intégral
Résumé :
In this dissertation, a stochastic programming model is presented for multi-scale decision-oriented dynamic systems (DODS), which are discrete-time systems in which decisions are made according to alternative discrete-time sequences that depend upon the organizational layer within a hierarchical system. A multi-scale DODS consists of multiple modules, each of which makes decisions on a time-scale that matches its specific task. For instance, in a large production planning system, the aggregate planning module may make decisions on a quarterly basis, whereas weekly and daily planning may use short-term scheduling models. In order to avoid mismatches between these schedules, it is important to integrate the short-term and long-term models. In studying models that accommodate multiple time-scales, one of the challenges that must be overcome is the incorporation of uncertainty. For instance, aggregate production planning is carried out several months prior to obtaining accurate demand estimates. In order to make decisions that are cognizant of uncertainty, we propose a stochastic programming model for the multi-scale DODS. Furthermore, we propose a modular algorithm motivated by the column generation decomposition strategy. The convergence of this modular algorithm is also demonstrated. Our experimental results demonstrate that the modular algorithm is robust in solving large-scale multi-scale DODS problems under uncertainty. Another main issue addressed in this dissertation is the application of the above modeling method and solution technique to decision aids for scheduling and hedging in a deregulated electricity market (DASH). The DASH model for power portfolio optimization provides a tool which helps decision-makers coordinate production decisions with opportunities in the wholesale power market. The methodology is based on a multi-scale DODS.
This model selects portfolio positions for electricity and fuel forwards, while remaining cognizant of spot market prices and generation costs. When compared with a commonly used fixed-mix policy, our experiments demonstrate that the DASH model provides significant advantages over fixed-mix policies. Finally, a multi-level distributed computing system is designed that implements the nested column generation decomposition approach for multi-scale decision-oriented dynamic systems. The implementation of a three-level distributed computing system is discussed in detail. The computational experiments are based on a large-scale real-world problem arising in power portfolio optimization. The deterministic equivalent LP for this instance with 200 scenarios has over one million constraints. Our computational results illustrate the effectiveness of this distributed computing approach.
Styles APA, Harvard, Vancouver, ISO, etc.
16

Husain, Rashid, et Syed Muhammad Husnain Kazmi. « Comparative Analysis of Static Recovery Schemes for Distributed Computing ». Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4805.

Texte intégral
Résumé :
The primary objective of this thesis is to evaluate how grid computing works with its infrastructure. It also studies the cases in which a dynamic scheduler is preferable, how static recovery schemes can play an effective role in large distributed systems where load balancing is a key element, and how optimality can be achieved for a maximum number of crashed computers using dynamic or static recovery schemes. This thesis consists of two parts: the construction of Golomb and Trapezium modules, and a performance comparison of the Golomb and Trapezium recovery schemes with a dynamic recovery scheme. In the first part we construct two modules that generate the recovery list of n computers, one for Golomb and one for Trapezium. In the second part we build three schedulers, two for the static recovery schemes and one for the dynamic recovery scheme. We compare the performance of the Golomb and Trapezium static recovery schemes, and then compare them against the dynamic recovery scheme using GridSim.
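The thesis's actual Golomb and Trapezium modules are not specified in the abstract; purely as a hypothetical illustration of what a *static* recovery scheme looks like, the marks of a Golomb ruler (all pairwise differences distinct) can serve as precomputed backup offsets, so every computer's fallback list is fixed in advance and fallback distances never coincide:

```python
# Marks of an optimal Golomb ruler of order 5; all pairwise differences
# are distinct. Used here as backup offsets -- a hypothetical sketch,
# not the scheme actually evaluated in the thesis.
GOLOMB_MARKS = [0, 1, 4, 9, 11]

def recovery_list(node, n):
    """Static recovery list for `node` in a system of n computers:
    fallback targets sit at Golomb-ruler offsets, wrapping around."""
    return [(node + off) % n for off in GOLOMB_MARKS[1:]]

# Each node's list is known before any crash occurs (static scheme).
for node in range(4):
    print(node, recovery_list(node, 12))
```

The contrast with a dynamic scheme is that these lists never change at runtime, which is exactly what makes a GridSim-style comparison of recovery latency and load spread meaningful.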
Styles APA, Harvard, Vancouver, ISO, etc.
17

Hernandez, Jesus Israel. « Reactive scheduling of DAG applications on heterogeneous and dynamic distributed computing systems ». Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/2336.

Texte intégral
Résumé :
Emerging technologies enable a set of distributed resources across a network to be linked together and used in a coordinated fashion to execute a particular parallel application. Such applications are often abstracted as directed acyclic graphs (DAGs), in which vertices represent application tasks and edges represent data dependencies between tasks. Effective scheduling mechanisms for DAG applications are essential to exploit the tremendous potential of computational resources. The core issue is that the availability and performance of resources, which are already by their nature heterogeneous, can be expected to vary dynamically, even during the course of an execution. In this thesis, we first consider the problem of scheduling DAG task graphs onto heterogeneous resources with changeable capabilities. We propose a list-scheduling heuristic approach, the Global Task Positioning (GTP) scheduling method, which addresses the problem by allowing rescheduling and migration of tasks in response to significant variations in resource characteristics. We observed from experiments with GTP that in an execution with relatively frequent migration it may be that, over time, the results of some task have been copied to several other sites, and so a subsequent migrated task may have several possible sources for each of its inputs. Some of these copies may now be more quickly accessible than the original, due to dynamic variations in communication capabilities. To exploit this observation, we extended our model with a Copying Management (CM) function, resulting in a new version, the Global Task Positioning with copying facilities (GTP/c) system. The idea is to reuse such copies in subsequent migration of placed tasks, in order to reduce the impact of migration cost on makespan. Finally, we believe that fault tolerance is an important issue in heterogeneous and dynamic computational environments, as the availability of resources cannot be guaranteed.
To address the problem of processor failure, we propose a rewinding mechanism which rewinds the progress of the application to a previous state, thereby preserving the execution in spite of the failed processor(s). We evaluate our mechanisms through simulation, since this allows us to generate repeatable patterns of resource performance variation. We use a standard benchmark set of DAGs, comparing performance against that of competing algorithms from the scheduling literature.
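GTP itself is not reproduced in the abstract; as a generic illustration of the list-scheduling family it belongs to, the classic upward-rank priority (as used in HEFT) orders DAG tasks by the length of their longest remaining path, and tasks are then mapped in that order. A minimal sketch with invented task costs:

```python
def upward_rank(dag, cost):
    """dag: task -> list of successor tasks; cost: mean execution time.
    rank_u(t) = cost[t] + max over successors s of rank_u(s)."""
    memo = {}
    def rank(t):
        if t not in memo:
            succ = dag.get(t, [])
            memo[t] = cost[t] + (max(rank(s) for s in succ) if succ else 0)
        return memo[t]
    return {t: rank(t) for t in cost}

# Toy DAG: A feeds B and C, both feed D.
dag = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
cost = {"A": 2, "B": 3, "C": 1, "D": 2}
ranks = upward_rank(dag, cost)
# Tasks are scheduled in decreasing rank order.
print(sorted(ranks, key=ranks.get, reverse=True))  # → ['A', 'B', 'C', 'D']
```

In a rescheduling scheme like GTP, such priorities would be recomputed when resource characteristics change significantly, and already-placed tasks may migrate.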
Styles APA, Harvard, Vancouver, ISO, etc.
18

Neal, Stephen. « A language for the dynamic verification of design patterns in distributed computing ». Thesis, University of Kent, 2001. https://kar.kent.ac.uk/13532/.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
19

Ramesh, Vasanth Kumar. « A game theoretic framework for dynamic task scheduling in distributed heterogeneous computing systems ». [Tampa, Fla.] : University of South Florida, 2005. http://purl.fcla.edu/fcla/etd/SFE0001115.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
20

de, Carvalho Tiago Filipe Rodrigues. « Integrated Approach to Dynamic and Distributed Cloud Data Center Management ». Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/739.

Texte intégral
Résumé :
Management solutions for current and future Infrastructure-as-a-Service (IaaS) Data Centers (DCs) face complex challenges. First, DCs are now very large infrastructures holding hundreds of thousands, if not millions, of servers and applications. Second, DCs are highly heterogeneous. DC infrastructures consist of servers and network devices with different capabilities, from various vendors and different generations. Cloud applications are owned by different tenants and have different characteristics and requirements. Third, most DC elements are highly dynamic. Applications can change over time. During their lifetime, their logical architectures evolve and change according to workload and resource requirements. Failures and bursty resource demand can lead to unstable states affecting a large number of services. Global and centralized approaches limit scalability and are not suitable for large dynamic DC environments with multiple tenants with different application requirements. We propose a novel, fully distributed and dynamic management paradigm for highly diverse and volatile DC environments. We develop LAMA, a novel framework for managing large-scale cloud infrastructures based on a multi-agent system (MAS). Provider agents collaborate to advertise and manage available resources, while app agents provide integrated and customized application management. Distributing management tasks allows LAMA to scale naturally, while the integrated approach improves its efficiency. The proximity to the application and knowledge of the DC environment allow agents to quickly react to changes in performance and to pre-plan for potential failures. We implement and deploy LAMA in a testbed server cluster. We demonstrate how LAMA improves the scalability of management tasks such as provisioning and monitoring. We evaluate LAMA in light of state-of-the-art open source frameworks. LAMA enables customized dynamic management strategies for multi-tier applications.
These strategies can be configured to respond to failures and workload changes within the limits of the desired SLA for each application.
Styles APA, Harvard, Vancouver, ISO, etc.
21

Svärd, Petter. « Dynamic Cloud Resource Management : Scheduling, Migration and Server Disaggregation ». Doctoral thesis, Umeå universitet, Institutionen för datavetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-87904.

Texte intégral
Résumé :
A key aspect of cloud computing is the promise of infinite, scalable resources, and that cloud services should scale up and down on demand. This thesis investigates methods for dynamic resource allocation and management of services in cloud datacenters, introducing new approaches as well as improvements to established technologies. Virtualization is a key technology for cloud computing, as it allows several operating system instances to run on the same Physical Machine (PM), and cloud services normally consist of a number of Virtual Machines (VMs) that are hosted on PMs. In this thesis, a novel virtualization approach is presented. Instead of running each PM in isolation, resources from multiple PMs in the datacenter are disaggregated and exposed to the VMs as pools of CPU, I/O and memory resources. VMs are provisioned by using the right amount of resources from each pool, thereby enabling both larger VMs than any single PM can host and VMs with tailor-made specifications for their application. Another important aspect of virtualization is live migration of VMs, the concept of moving VMs between PMs without interruption in service. Live migration allows for better PM utilization and is also useful for administrative purposes. In the thesis, two improvements to the standard live migration algorithm are presented: delta compression and page transfer reordering. The improvements can reduce migration downtime, i.e., the time that the VM is unavailable, as well as the total migration time. Postcopy migration, where the VM is resumed on the destination before the memory content is transferred, is also studied. Both userspace and in-kernel postcopy algorithms are evaluated in an in-depth study of live migration principles and performance. Efficient mapping of VMs onto PMs is a key problem for cloud providers, as PM utilization directly impacts revenue. When services are accepted into a datacenter, a decision is made on which PM should host the service VMs.
This thesis presents a general approach for service scheduling that allows the same scheduling software to be used across multiple cloud architectures. A number of scheduling algorithms to optimize objectives like revenue or utilization are also studied. Finally, an approach for continuous datacenter consolidation is presented. As VM workloads fluctuate and server availability varies, any initial mapping is bound to become suboptimal over time. The continuous datacenter consolidation approach adjusts this VM-to-PM mapping during operation, based on combinations of management actions such as suspending/resuming PMs, live migrating VMs, and suspending/resuming VMs. Proof-of-concept software and a set of algorithms that allow cloud providers to continuously optimize their server resources are presented in the thesis.
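The delta-compression idea mentioned above — when a page is re-dirtied during pre-copy migration, send only its difference from the copy already transferred — can be sketched with XOR plus run-length encoding of the zero runs. This is a simplified illustration of the general technique, not the thesis's actual implementation:

```python
def xor_delta(old: bytes, new: bytes) -> bytes:
    """XOR a re-dirtied page against the version already sent; regions
    that did not change become zero bytes, which compress very well."""
    return bytes(a ^ b for a, b in zip(old, new))

def rle_zeros(delta: bytes) -> list:
    """Run-length encode: (0, run_length) for zero runs, raw byte otherwise."""
    out, i = [], 0
    while i < len(delta):
        if delta[i] == 0:
            j = i
            while j < len(delta) and delta[j] == 0:
                j += 1
            out.append((0, j - i))
            i = j
        else:
            out.append(delta[i])
            i += 1
    return out

old = bytes(4096)                        # 4 KiB page as first transferred
new = bytearray(old); new[100] = 0xFF    # guest re-dirties a single byte
encoded = rle_zeros(xor_delta(old, bytes(new)))
print(encoded)  # → [(0, 100), 255, (0, 3995)]
```

Instead of resending the full 4 KiB page, only a few bytes cross the wire, which is how delta compression shrinks both downtime and total migration time for write-intensive workloads.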
Styles APA, Harvard, Vancouver, ISO, etc.
22

Helal, Manal, Computer Science & Engineering, Faculty of Engineering, UNSW. « Indexing and partitioning schemes for distributed tensor computing with application to multiple sequence alignment ». Awarded by: University of New South Wales. Computer Science & Engineering, 2009. http://handle.unsw.edu.au/1959.4/44781.

Texte intégral
Résumé :
This thesis investigates indexing and partitioning schemes for high dimensional scientific computational problems. Building on the foundation offered by Mathematics of Arrays (MoA) for tensor-based computation, the ultimate contribution of the thesis is a unified partitioning scheme that is invariant to the dataset's dimension and shape. Consequently, portability is ensured between different high performance machines, cluster architectures, and potentially computational grids. The Multiple Sequence Alignment (MSA) problem in computational biology has an optimal dynamic programming based solution, but it becomes computationally infeasible as its dimensionality (the number of sequences) increases. Even sub-optimal approximations may be unmanageable for more than eight sequences. Furthermore, no existing MSA algorithms have been formulated in a manner invariant over the number of sequences. This thesis presents an optimal distributed MSA method based on MoA. The latter offers a set of constructs that help represent multidimensional arrays in memory in a linear, concise and efficient way. Using MoA allows the partitioning of the dynamic programming algorithm to be expressed independently of dimension. MSA is the highest dimensional scientific problem considered for MoA-based partitioning to date. Two partitioning schemes are presented: the first is a master/slave approach which is based on both master/slave scheduling and slave/slave coupling. The second approach is a peer-to-peer design, in which the scheduling and dependency communication are calculated independently by each process, with no need for a master scheduler. A search space reduction technique is introduced to cater for the exponential expansion as the problem dimensionality increases. This technique relies on defining a hyper-diagonal through the tensor space, and choosing a band of neighbouring partitions around the diagonal to score.
In contrast, other sub-optimal methods in the literature only consider projections on the surface of the hyper-cube. The resulting massively parallel design produces a scalable solution that has been implemented on high performance machines and cluster architectures. Experimental results for these implementations are presented for both simulated and real datasets. Comparisons between the reduced search space technique of this thesis with other sub-optimal methods for the MSA problem are presented.
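The band-around-the-hyper-diagonal pruning described above can be sketched with a simple membership test: keep a k-dimensional partition index only if its coordinates stay close to each other, i.e. close to the diagonal of the tensor space. The distance measure below is an illustrative assumption; the thesis's exact band definition may differ:

```python
from itertools import product

def in_diagonal_band(index, band):
    """True if a k-dimensional partition index lies within `band` of the
    hyper-diagonal (0,...,0)-(m,...,m): all coordinates nearly equal."""
    return max(index) - min(index) <= band

# 3 sequences, 5 partitions per axis: score only cells near the diagonal
# instead of the full 5^3 dynamic-programming partition grid.
kept = [idx for idx in product(range(5), repeat=3) if in_diagonal_band(idx, 1)]
print(len(kept), "of", 5 ** 3, "partitions scored")  # → 29 of 125
```

The saving grows with dimensionality: the full grid is exponential in the number of sequences, while a fixed-width band around the diagonal grows far more slowly.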
Styles APA, Harvard, Vancouver, ISO, etc.
23

Glacet, Christian. « Algorithmes de routage : de la réduction des coûts de communication à la dynamique ». Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00951393.

Texte intégral
Résumé :
Answering routing queries requires that the network entities, called routers, maintain up-to-date knowledge of the network topology; this knowledge is called a routing table. The network is modeled as a graph in which the nodes represent routers and the edges represent the communication links between them. This thesis studies the computation of routing tables in a distributed model. In this model, computations are carried out by a set of processes placed on the nodes. Each process aims to compute the routing table of the node on which it resides. To perform this computation, the processes must communicate with each other. In large networks, and in a distributed setting, keeping routing tables up to date can be costly in terms of communication. One of the main themes addressed is the reduction of communication costs during this computation. One of the proposed solutions consists in reducing the size of the routing tables, thereby reducing communication costs. This classical strategy in the centralized model is known as compact routing. In particular, this thesis presents a distributed compact routing algorithm that significantly reduces communication costs in networks such as the Internet, i.e. the network of autonomous systems, as well as in scale-free networks. This document also contains an experimental study of several distributed compact routing algorithms. Finally, problems related to network dynamics are also addressed. More precisely, the remainder of the study concerns a self-stabilizing algorithm for computing a shortest-path tree, as well as the impact of node or edge deletions on the routing tables stored at the routers.
Styles APA, Harvard, Vancouver, ISO, etc.
24

De, Grande Robson E. « Dynamic Load Balancing Schemes for Large-scale HLA-based Simulations ». Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23110.

Texte intégral
Résumé :
Dynamic balancing of computation and communication load is vital for the execution stability and performance of distributed, parallel simulations deployed on the shared, unreliable resources of large-scale environments. High Level Architecture (HLA) based simulations can experience a decrease in performance due to imbalances that are produced initially and/or during run-time. These imbalances are generated by the dynamic load changes of distributed simulations or by unknown, non-managed background processes resulting from the non-dedication of shared resources. Due to the dynamic execution characteristics of the elements that compose distributed simulation applications, the computational load and interaction dependencies of each simulation entity change during run-time. These dynamic changes lead to an irregular load and communication distribution, which increases resource overhead and execution delays. A static partitioning of load is limited to deterministic applications and is incapable of predicting the dynamic changes caused by distributed applications or by external background processes. Given the importance of dynamically balancing load for distributed simulations, many balancing approaches have been proposed, each offering a sub-optimal balancing solution, but they are limited to certain simulation aspects, specific to particular applications, or unaware of HLA-based simulation characteristics. Therefore, schemes for balancing the communication and computational load during the execution of distributed simulations are devised, adopting a hierarchical architecture. First, in order to enable the development of such balancing schemes, a migration technique is employed to perform reliable and low-latency simulation load transfers.
Then, a centralized balancing scheme is designed; this scheme employs local and cluster monitoring mechanisms in order to observe the distributed load changes and identify imbalances, and it uses load reallocation policies to determine a distribution of load and minimize imbalances. As a measure to overcome the drawbacks of this scheme, such as bottlenecks, overheads, global synchronization, and a single point of failure, a distributed redistribution algorithm is designed. Extensions of the distributed balancing scheme are also developed to improve the detection of and the reaction to load imbalances. These extensions introduce communication delay detection, migration latency awareness, self-adaptation, and load oscillation prediction into the load redistribution algorithm. The developed balancing systems successfully improved the use of shared resources and increased the performance of distributed simulations.
Styles APA, Harvard, Vancouver, ISO, etc.
25

Hari, Krishnan Prem Kumar. « Design and Analysis of a Dynamic SpaceWire Routing Protocol for Reconfigurable and Distributed On-Board Computing Systems ». Thesis, Luleå tekniska universitet, Rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-76534.

Texte intégral
Résumé :
Future spacecraft will require more computational and processing power to keep up with the growing demands in requirements and complexity. ScOSA is the next-generation on-board computer developed by the German Aerospace Centre (DLR). The main motivation behind ScOSA is to replace the conventional on-board computer with distributed and reconfigurable computing nodes, which provide higher performance, reliability, availability and stability by combining COTS components with reliable, space-qualified computing processors. In the current ScOSA system, reconfiguration and routing of data between nodes are based on a static decision graph. The SpaceWire protocol is used to communicate between nodes and to provide reliability. The focus of this thesis is to design and implement a dynamic routing protocol for ScOSA which can in future be used not only for communicating between the nodes but also for reconfiguration. SpaceWire IPC is a customized protocol developed by DLR to provide communication between the nodes in a distributed network and to support monitoring, management and reconfiguration services. The dynamic routing protocol proposed in this thesis is primarily derived from the monitoring mechanism used in SpaceWire IPC. A PULL-type monitoring mechanism is modelled and simulated using OMNeT++. The results obtained provide a qualitative outlook on the implemented dynamic routing protocol.
Styles APA, Harvard, Vancouver, ISO, etc.
26

Balasubramaniam, Mahadevan. « Performance analysis and evaluation of dynamic loop scheduling techniques in a competitive runtime environment for distributed memory architectures ». Master's thesis, Mississippi State : Mississippi State University, 2003. http://library.msstate.edu/etd/show.asp?etd=etd-04022003-154254.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
27

Kotto, Kombi Roland. « Distributed query processing over fluctuating streams ». Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEI050/document.

Texte intégral
Résumé :
In a Big Data context, stream processing has become a very active research domain. In order to manage ephemeral data (Velocity) arriving at high rates (Volume), specific solutions, denoted data stream management systems (DSMSs), have been developed. DSMSs take as input queries, called continuous queries, defined on a set of data streams. A continuous query generates new results as long as new data arrive in input. In many application domains, data streams have input rates and distributions of values which change over time. These variations may significantly impact the processing requirements of each continuous query. This thesis takes place in the ANR project Socioplug (ANR-13-INFR-0003). In this context, we consider a collaborative platform for stream processing. Each user can submit multiple continuous queries and contributes to the execution support of the platform. However, as each processing unit supporting treatments has limited resources in terms of CPU and memory, a significant increase in input rate may cause the congestion of the system. The problem is then how to dynamically adjust resource usage to the processing requirements of each continuous query. It raises several challenges: i) how to detect a need for reconfiguration? ii) when to reconfigure the system to avoid its congestion at runtime? In this work, we are interested in the different processing steps involved in the treatment of a continuous query over a distributed infrastructure. From this global analysis, we extract mechanisms enabling dynamic adaptation of resource usage for each continuous query. We focus on automatic parallelization, or auto-parallelization, of the operators composing the execution plan of a continuous query. We suggest an original approach based on the monitoring of operators and an estimation of processing requirements in the near future.
Thus, we can increase (scale-out) or decrease (scale-in) the parallelism degree of operators in a proactive manner, so that resource usage fits processing requirements dynamically. Compared to a static configuration defined by an expert, we show that it is possible to avoid the congestion of the system in many cases, or to delay it in the most critical cases. Moreover, we show that resource usage can be reduced significantly while delivering equivalent throughput and result quality. We also suggest combining this approach with complementary mechanisms for dynamic adaptation of continuous queries at runtime. These different approaches have been implemented within a widely used DSMS and have been tested over multiple reproducible micro-benchmarks.
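The proactive scale-out/scale-in decision described above — estimate the near-future input rate, then resize an operator's parallelism degree to match — can be sketched as follows. The naive linear forecast and the per-instance capacity figure are assumptions for illustration, not the thesis's actual estimator:

```python
import math

def forecast_rate(history):
    """Naive linear extrapolation of the input rate one step ahead."""
    if len(history) < 2:
        return history[-1]
    return max(0.0, history[-1] + (history[-1] - history[-2]))

def target_degree(history, per_instance_capacity, current_degree):
    """Parallelism degree needed so capacity covers the predicted rate."""
    predicted = forecast_rate(history)
    needed = max(1, math.ceil(predicted / per_instance_capacity))
    if needed > current_degree:
        return needed, "scale-out"
    if needed < current_degree:
        return needed, "scale-in"
    return needed, "steady"

# Rate climbs from 800 to 1000 tuples/s; each instance handles 300 tuples/s.
print(target_degree([800.0, 1000.0], 300.0, 3))  # → (4, 'scale-out')
```

Because the decision is taken on the *predicted* rate rather than the observed one, the operator is resized before its input queues overflow, which is the essence of avoiding congestion proactively.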
Styles APA, Harvard, Vancouver, ISO, etc.
28

Subbiah, Arun. « Design and evaluation of a distributed diagnosis algorithm for arbitrary network topologies in dynamic fault environments ». Thesis, Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/13273.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
29

Edirisinghe, Pathirannehelage Neranjan S. « Charge Transfer in Deoxyribonucleic Acid (DNA) : Static Disorder, Dynamic Fluctuations and Complex Kinetic ». Digital Archive @ GSU, 2011. http://digitalarchive.gsu.edu/phy_astr_diss/45.

Texte intégral
Résumé :
Loosely bonded DNA bases can tolerate large structural fluctuations, and these fluctuations form a dissipative environment for a charge traveling through the DNA. The nonlinear, stochastic nature of the structural fluctuations gives rise to rich charge dynamics in DNA, which we study by solving a nonlinear, stochastic, coupled system of differential equations. Charge transfer between a donor and an acceptor in DNA occurs via different mechanisms depending on the distance between them: it changes from a tunneling regime to a polaron-assisted hopping regime as the donor–acceptor separation grows. We also found that charge transport depends strongly on the feasibility of polaron formation, and hence exhibits a complex dependence on temperature and on the charge–vibration coupling strength. Mismatched base pairs, such as the different conformations of the G·A mispair, cause only minor structural changes in the host DNA molecule, thereby making mispair recognition an arduous task. Electron transport in DNA, which depends strongly on the hopping transfer integrals between nearest base pairs, which in turn are affected by the presence of a mispair, might be an attractive probe in this regard. I report here on our investigations, via the I–V characteristics, of the effect of a mispair on the electrical properties of homogeneous and generic DNA molecules. The I–V characteristics of DNA were studied numerically within a double-stranded tight-binding model whose parameters, such as the transfer integrals and on-site energies, are determined from first-principles calculations. The changes in the electrical current through the DNA chain due to the presence of a mispair depend on the conformation of the G·A mispair and are appreciable for DNA consisting of up to 90 base pairs. For homogeneous DNA sequences the current through DNA is suppressed, and the strongest suppression occurs for the G(anti)·A(syn) conformation of the G·A mispair.
For inhomogeneous (generic) DNA molecules, the mispair can result in either a suppression or an enhancement of the current, depending on the type of mispair and the actual DNA sequence.
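The double-stranded tight-binding transport calculation described above can be sketched in a few lines. The sketch below is a generic ladder-model Landauer calculation with wide-band leads; the site energies, hopping integrals, and lead coupling are illustrative placeholders, not the first-principles parameters used in the thesis.

```python
import numpy as np

def dna_ladder_hamiltonian(n_pairs, eps1=0.0, eps2=0.3, t_intra=0.5,
                           t_inter=0.2):
    """Tight-binding Hamiltonian for a two-strand (ladder) DNA model.

    eps1/eps2 are on-site energies on the two strands; t_intra hops along
    each strand, t_inter across a base pair. All values are illustrative.
    """
    n = 2 * n_pairs                      # two sites per base pair
    h = np.zeros((n, n))
    for i in range(n_pairs):
        a, b = 2 * i, 2 * i + 1          # sites on strand 1 and strand 2
        h[a, a], h[b, b] = eps1, eps2
        h[a, b] = h[b, a] = t_inter      # inter-strand (base-pairing) hop
        if i + 1 < n_pairs:
            h[a, a + 2] = h[a + 2, a] = t_intra  # strand-1 backbone hop
            h[b, b + 2] = h[b + 2, b] = t_intra  # strand-2 backbone hop
    return h

def transmission(h, energy, gamma=0.1):
    """Landauer transmission T(E), wide-band leads on the first and last
    strand-1 sites."""
    n = h.shape[0]
    sigma_l = np.zeros((n, n), complex)
    sigma_r = np.zeros((n, n), complex)
    sigma_l[0, 0] = -0.5j * gamma
    sigma_r[n - 2, n - 2] = -0.5j * gamma    # last strand-1 site
    g = np.linalg.inv((energy + 1e-12j) * np.eye(n) - h - sigma_l - sigma_r)
    gam_l, gam_r = -2 * sigma_l.imag, -2 * sigma_r.imag
    return float(np.trace(gam_l @ g @ gam_r @ g.conj().T).real)
```

Raising one on-site energy, a crude stand-in for a mispair, perturbs the transmission, mirroring the current changes discussed above.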
Styles APA, Harvard, Vancouver, ISO, etc.
30

Vijayakumar, Smita. « A Framework for Providing Automatic Resource and Accuracy Management in a Cloud Environment ». The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1274194090.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
31

Villebonnet, Violaine. « Scheduling and Dynamic Provisioning for Energy Proportional Heterogeneous Infrastructures ». Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEN057/document.

Texte intégral
Résumé :
La consommation énergétique des centres de calculs et de données, aussi appelés « data centers », représentait 2% de la consommation mondiale d'électricité en 2012. Leur nombre est en augmentation et suit l'évolution croissante des objets connectés, services, applications, et des données collectées. Ces infrastructures, très consommatrices en énergie, sont souvent sur-dimensionnées et les serveurs en permanence allumés. Quand la charge de travail est faible, l'électricité consommée par les serveurs inutilisés est gaspillée, et un serveur inactif peut consommer jusqu'à la moitié de sa consommation maximale. Cette thèse s'attaque à ce problème en concevant un data center ayant une consommation énergétique proportionnelle à sa charge. Nous proposons un data center hétérogène, nommé BML pour « Big, Medium, Little », composé de plusieurs types de machines : des processeurs très basse consommation et des serveurs classiques. L'idée est de profiter de leurs différentes caractéristiques de performance, consommation, et réactivité d'allumage, pour adapter dynamiquement la composition de l'infrastructure aux évolutions de charge. Nous décrivons une méthode générique pour calculer les combinaisons de machines les plus énergétiquement efficaces à partir de données de profilage de performance et d'énergie acquises expérimentalement en considérant une application cible, ayant une charge variable au cours du temps, dans notre cas un serveur web. Nous avons développé deux algorithmes prenant des décisions de reconfiguration de l'infrastructure et de placement des instances de l'application en fonction de la charge future. Les différentes temporalités des actions de reconfiguration ainsi que leurs coûts énergétiques sont pris en compte dans le processus de décision. Nous montrons par simulations que nous atteignons une consommation proportionnelle à la charge, et faisons d'importantes économies d'énergie par rapport aux gestions classiques des data centers.
The increasing number of data centers raises serious concerns regarding their energy consumption. These infrastructures are often over-provisioned and contain servers that are not fully utilized; the problem is that inactive servers can consume as much as 50% of their peak power. This thesis proposes a novel approach for building data centers whose energy consumption is proportional to the actual load. We propose an original infrastructure named BML, for "Big, Medium, Little", composed of heterogeneous computing resources: from low-power processors to classical servers. The idea is to take advantage of their different characteristics in terms of energy consumption, performance, and switch-on reactivity to adjust the composition of the infrastructure according to load evolutions. We define a generic methodology to compute the most energy-proportional combinations of machines based on hardware profiling data. We focus on web applications whose load varies over time and design a scheduler that dynamically reconfigures the infrastructure, through application migrations and machine switch-ons and switch-offs, to minimize the infrastructure's energy consumption according to the current application requirements. We have developed two dynamic provisioning algorithms which take into account the time and energy overheads of the different reconfiguration actions in the decision process. We demonstrate through simulations based on experimentally acquired hardware profiles that we achieve important energy savings compared to classical data center infrastructures and management.
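The generic methodology for finding energy-proportional machine combinations can be illustrated with a brute-force search over hypothetical profiling data. The capacities and power draws below are invented for the example; the BML prototype relies on experimentally acquired profiles.

```python
from itertools import product

# Hypothetical profiling data: requests/s capacity and power draw (W)
# per machine type. Real values would come from experimental profiling.
PROFILES = {
    "big":    {"capacity": 1000, "power": 200.0},
    "medium": {"capacity": 400,  "power": 60.0},
    "little": {"capacity": 100,  "power": 8.0},
}

def best_combination(load, max_per_type=6):
    """Enumerate machine mixes and return the cheapest one (in watts)
    whose aggregate capacity covers `load` requests/s."""
    best, best_power = None, float("inf")
    types = list(PROFILES)
    for counts in product(range(max_per_type + 1), repeat=len(types)):
        cap = sum(c * PROFILES[t]["capacity"] for c, t in zip(counts, types))
        if cap < load:
            continue  # this mix cannot serve the load
        power = sum(c * PROFILES[t]["power"] for c, t in zip(counts, types))
        if power < best_power:
            best, best_power = dict(zip(types, counts)), power
    return best, best_power
```

Evaluating such a table for each forecast load level yields the energy-proportional reconfiguration plan the abstract describes.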
Styles APA, Harvard, Vancouver, ISO, etc.
32

Tesser, Rafael Keller. « A simulation workflow to evaluate the performance of dynamic load balancing with over decomposition for iterative parallel applications ». Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/180129.

Texte intégral
Résumé :
Nesta tese é apresentado um novo workflow de simulação para avaliar o desempenho do balanceamento de carga dinâmico baseado em sobre-decomposição aplicado a aplicações paralelas iterativas. Seus objetivos são realizar essa avaliação com modificações mínimas da aplicação e a baixo custo em termos de tempo e de sua necessidade de recursos computacionais. Muitas aplicações paralelas sofrem com desbalanceamento de carga dinâmico (temporal) que não pode ser tratado a nível de aplicação. Este pode ser causado por características intrínsecas da aplicação ou por fatores externos de hardware ou software. Como demonstrado nesta tese, tal desbalanceamento é encontrado mesmo em aplicações cujo código não aparenta qualquer dinamismo. Portanto, faz-se necessário utilizar um mecanismo de balanceamento de carga dinâmico a nível de runtime. Este trabalho foca no balanceamento de carga dinâmico baseado em sobre-decomposição. No entanto, avaliar e ajustar o desempenho de tal técnica pode ser custoso. Isso geralmente requer modificações na aplicação e uma grande quantidade de execuções para obter resultados estatisticamente significativos com diferentes combinações de parâmetros de balanceamento de carga. Além disso, para que essas medidas sejam úteis, são usualmente necessárias grandes alocações de recursos em um sistema de produção. Simulated Adaptive MPI (SAMPI), nosso workflow de simulação, emprega uma combinação de emulação sequencial e replay de rastros para reduzir os custos dessa avaliação. Tanto emulação sequencial como replay de rastros requerem um único nó computacional. Além disso, o replay demora apenas uma pequena fração do tempo de uma execução paralela real da aplicação. Adicionalmente à simulação de balanceamento de carga, foram desenvolvidas técnicas de agregação espacial e rescaling a nível de aplicação, as quais aceleram o processo de emulação.
Para demonstrar os potenciais benefícios do balanceamento de carga dinâmico com sobre-decomposição, foram avaliados os ganhos de desempenho empregando essa técnica a uma aplicação iterativa paralela da área de geofísica (Ondes3D). Adaptive MPI (AMPI) foi utilizado para prover o suporte a balanceamento de carga dinâmico, resultando em ganhos de desempenho de até 36.58% em 288 cores de um cluster. Essa avaliação também é usada para ilustrar as dificuldades encontradas nesse processo, assim justificando o uso de simulação para facilitá-la. Para implementar o workflow SAMPI, foi utilizada a interface SMPI do simulador SimGrid, tanto no modo de emulação, como no de replay de rastros. Para validar esse simulador, foram comparadas execuções simuladas (SAMPI) e reais (AMPI) da aplicação Ondes3D. As simulações apresentaram uma evolução do balanceamento de carga bastante similar às execuções reais. Adicionalmente, SAMPI estimou com sucesso a melhor heurística de balanceamento de carga para os cenários testados. Além dessa validação, nesta tese é demonstrado o uso de SAMPI para exploração de parâmetros de balanceamento de carga e para planejamento de capacidade computacional. Quanto ao desempenho da simulação, estimamos que o workflow completo é capaz de simular a execução do Ondes3D com 24 combinações de parâmetros de balanceamento de carga em 5 horas para o nosso cenário de terremoto mais pesado e 3 horas para o mais leve.
In this thesis we present a novel simulation workflow to evaluate the performance of dynamic load balancing with over-decomposition applied to iterative parallel applications. Its goals are to perform such an evaluation with minimal application modification and at a low cost in terms of time and resource requirements. Many parallel applications suffer from dynamic (temporal) load imbalance that cannot be treated at the application level. It may be caused by intrinsic characteristics of the application or by external software and hardware factors. As demonstrated in this thesis, such dynamic imbalance can be found even in applications whose codes do not hint at any dynamism. Therefore, we need to rely on runtime dynamic load balancing mechanisms, such as dynamic load balancing based on over-decomposition. The problem is that evaluating and tuning the performance of such a technique can be costly. This usually entails modifications to the application and a large number of executions to get statistically sound performance measurements with different load balancing parameter combinations. Moreover, useful and accurate measurements often require big resource allocations on a production cluster. Our simulation workflow, dubbed Simulated Adaptive MPI (SAMPI), employs a combined sequential-emulation and trace-replay simulation approach to reduce the cost of such an evaluation. Both sequential emulation and trace replay require a single compute node. Additionally, the trace-replay simulation lasts a small fraction of the real-life parallel execution time of the application. Besides the basic SAMPI simulation, we developed spatial aggregation and application-level rescaling techniques to speed up the emulation process. To demonstrate the real-life performance benefits of dynamic load balancing with over-decomposition, we evaluated the performance gains obtained by employing this technique on an iterative parallel geophysics application called Ondes3D.
Dynamic load balancing support was provided by Adaptive MPI (AMPI). This resulted in up to 36.58% performance improvement on 288 cores of a cluster. This real-life evaluation also illustrates the difficulties found in this process, thus justifying the use of simulation. To implement the SAMPI workflow, we relied on SimGrid's Simulated MPI (SMPI) interface in both emulation and trace-replay modes. To validate our simulator, we compared simulated (SAMPI) and real-life (AMPI) executions of Ondes3D. The simulations presented a load balance evolution very similar to the real executions and were also successful in choosing the best load balancing heuristic for each scenario. Besides the validation, we demonstrate the use of SAMPI for load balancing parameter exploration and for computational capacity planning. As for the performance of the simulation itself, we roughly estimate that our full workflow can simulate the execution of Ondes3D with 24 different load balancing parameter combinations in 5 hours for our heavier earthquake scenario and in 3 hours for the lighter one.
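The kind of over-decomposition-based load balancing whose tuning the workflow targets can be illustrated with a simple greedy heuristic: the over-decomposed chunks are placed on ranks longest-first. This is a generic stand-in, not one of the AMPI strategies evaluated in the thesis.

```python
import heapq

def greedy_rebalance(chunk_loads, n_ranks):
    """Assign over-decomposed chunks to ranks with a longest-processing-
    time-first greedy heuristic. Returns the placement and the imbalance
    ratio (max rank load over mean rank load; 1.0 is perfect balance)."""
    heap = [(0.0, r) for r in range(n_ranks)]   # (current load, rank)
    heapq.heapify(heap)
    placement = {}
    for chunk, load in sorted(chunk_loads.items(), key=lambda kv: -kv[1]):
        total, rank = heapq.heappop(heap)       # least-loaded rank
        placement[chunk] = rank
        heapq.heappush(heap, (total + load, rank))
    totals = [0.0] * n_ranks
    for chunk, rank in placement.items():
        totals[rank] += chunk_loads[chunk]
    return placement, max(totals) / (sum(totals) / n_ranks)
```

Over-decomposition helps exactly because many small chunks give such heuristics room to even out per-rank totals.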
Styles APA, Harvard, Vancouver, ISO, etc.
33

Bartoň, Radek. « Modernizace GIS systému GRASS ». Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235885.

Texte intégral
Résumé :
The geographical information system GRASS has become a standard in the field of geographical phenomenon modeling over its 26-year lifetime. However, its internal structure still follows practices from the time of its creation. This thesis aims to design a possible shape for a modernization of its internals, using a component architecture and object-oriented design patterns, with distributed computing and dynamic language support in mind. The redesigned system should remain identical from the user's point of view. The design is validated by a prototype library implementation called the GAL Framework.
Styles APA, Harvard, Vancouver, ISO, etc.
34

Kwon, Young Woo. « Effective Fusion and Separation of Distribution, Fault-Tolerance, and Energy-Efficiency Concerns ». Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/49386.

Texte intégral
Résumé :
As software applications are becoming increasingly distributed and mobile, their design and implementation are characterized by distributed software architectures, the possibility of faults, and the need for energy awareness. Thus, software developers should be able to simultaneously reason about and handle the concerns of distribution, fault-tolerance, and energy-efficiency. Being closely intertwined, these concerns can introduce significant complexity into the design and implementation of modern software. In other words, to develop reliable and energy-efficient applications, software developers must understand how distribution, fault-tolerance, and energy-efficiency interplay with each other and how to implement these concerns while keeping the complexity in check. This dissertation addresses five technical issues that stand in the way of engineering reliable and energy-efficient software: (1) how can developers select and parameterize middleware to achieve the requisite levels of performance, reliability, and energy-efficiency? (2) how can one streamline the process of implementing and reusing fault tolerance functionality in distributed applications? (3) can automated techniques be developed to help transition centralized applications to using cloud-based services efficiently and reliably? (4) how can one leverage cloud-based resources to improve the energy-efficiency of mobile applications? (5) how can middleware be adapted to improve the energy-efficiency of distributed mobile applications operated over heterogeneous mobile networks? To address these issues, this research studies the concerns of distribution, fault-tolerance, and energy-efficiency as well as their interaction. It also develops novel approaches, techniques, and tools that effectively fuse and separate these concerns as required by particular software development scenarios.
The specific innovations include (1) a systematic assessment of the performance, conciseness, complexity, reliability, and energy consumption of middleware mechanisms for accessing remote functionality, (2) a declarative approach to hardening distributed applications with resiliency against partial failure, (3) cloud refactoring, a set of automated program transformations for transitioning to using cloud-based services efficiently and reliably, (4) a cloud offloading approach that improves the energy-efficiency of mobile applications without compromising their reliability, (5) a middleware mechanism that optimizes energy consumption by adapting execution patterns dynamically in response to fluctuations in network conditions.
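The energy trade-off behind cloud offloading (issue (4) above) is commonly expressed as a break-even test between local-execution energy and communication-plus-wait energy. The model below is the textbook formulation with an assumed idle power draw, not the dissertation's exact criterion.

```python
def should_offload(cycles, local_power_w, local_speed_hz,
                   data_bytes, radio_power_w, bandwidth_bps,
                   cloud_speedup=10.0):
    """Offload when the energy to ship the input data plus idle-waiting
    for the (faster) remote execution is below local-execution energy.
    The 10% idle-power assumption is illustrative."""
    e_local = local_power_w * cycles / local_speed_hz
    transfer_s = data_bytes * 8 / bandwidth_bps
    remote_s = cycles / (local_speed_hz * cloud_speedup)
    idle_power_w = 0.1 * local_power_w          # assumed idle draw
    e_offload = radio_power_w * transfer_s + idle_power_w * remote_s
    return e_offload < e_local
```

As the model makes plain, compute-heavy tasks with small inputs favor offloading, while data-heavy tasks over slow links favor local execution, which is why the dissertation's middleware adapts to fluctuating network conditions.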
Ph. D.
Styles APA, Harvard, Vancouver, ISO, etc.
35

Xiong, Pengcheng. « Dynamic monitoring, modeling and management of performance and resources for applications in cloud ». Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45779.

Texte intégral
Résumé :
Emerging trends in Cloud computing bring numerous benefits, such as higher performance, fast and flexible provisioning of applications and capacities, lower infrastructure costs, and almost unlimited scalability. However, the increasing complexity of automated performance and resource management for applications in Cloud computing presents novel challenges that demand enhancements to classical control-based approaches. An important challenge that Cloud service providers often face is a resource sharing dilemma under workload variation. Cloud service providers pursue higher resource utilization, because the higher the utilization, the lower the hardware, operating and maintenance costs. On the other hand, resource utilization cannot be too high, or the service provider's revenue could be jeopardized by the inability to meet application-level service-level objectives (SLOs). A crucial research question is how to generate as much revenue as possible by satisfying service-level agreements while reducing costs as much as possible, in order to maximize the profit for Cloud service providers. To this end, classical control-based approaches, which can be grouped into three major categories, i.e., admission control, queueing and scheduling, and resource allocation, show great potential to address the resource sharing dilemma. However, it is a challenging task to apply classical control-based approaches directly to computer systems, where first-principle models are generally not available. It becomes even more difficult due to the dynamics seen in real computer systems, including workload variations, multi-tier dependencies, and resource bottleneck shifts.
Fundamentally, the main contributions of this thesis are efforts to enhance classical control-based approaches by leveraging other techniques to address the increasing complexity of automated performance and resource management in the Cloud, through dynamic monitoring, modeling and management of performance and resources. More specifically, (1) an admission control approach is enhanced by leveraging decision theory to achieve the most profitable service-level compliance; (2) a critical resource identification approach is enhanced by leveraging statistical machine learning to automatically and adaptively identify critical resources; and (3) a resource allocation approach is enhanced by leveraging hierarchical resource management to achieve the highest resource utilization. Concretely, the enhanced control-based approaches are implemented in a collection of real control systems: ActiveSLA, vPerfGuard and ERController. These control systems are applied to different real applications, such as OLTP and OLAP database applications and distributed multi-tier web applications, with different workload intensities, types and mixes, in different Cloud environments. All the experimental results show that the prototype control systems outperform existing classical control-based approaches. Finally, this thesis opens new avenues for addressing the increasing complexity of automated performance and resource management through the enhancement of classical control-based approaches in Cloud environments. Future work will follow these avenues to address the new challenges that arise with the advent of new hardware technologies, new software frameworks and new computing paradigms.
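The profit-aware admission control idea (contribution (1) above) reduces, in its simplest decision-theoretic form, to comparing expected reward against expected SLO penalty. The sketch below illustrates that principle only; it is not the ActiveSLA implementation.

```python
def admit(p_meet_slo, reward, penalty):
    """Decision-theoretic admission test: admit a request iff its
    expected profit is positive, given a predicted probability
    `p_meet_slo` of meeting the service-level objective, the `reward`
    earned on success, and the `penalty` paid on an SLO violation."""
    expected_profit = p_meet_slo * reward - (1.0 - p_meet_slo) * penalty
    return expected_profit > 0.0, expected_profit
```

Under this rule, the same request may be admitted at low load (high predicted SLO compliance) and rejected at high load, which is exactly the revenue-versus-utilization trade-off described above.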
Styles APA, Harvard, Vancouver, ISO, etc.
36

Sun, Yi. « High Performance Simulation of DEVS Based Large Scale Cellular Space Models ». Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/cs_diss/40.

Texte intégral
Résumé :
Cellular space modeling is becoming an increasingly important paradigm for modeling complex systems with spatio-temporal behaviors. The growing demand for cellular space models has led researchers to use different modeling formalisms, among which the Discrete Event System Specification (DEVS) is widely used due to its formal modeling and simulation framework. The increasing complexity of the systems to be modeled calls for cellular space models with large numbers of cells to capture the systems' spatio-temporal behavior, so improving simulation performance becomes crucial for simulating large-scale cellular space models. In this dissertation, we propose a framework for improving simulation performance for large-scale DEVS-based cellular space models. The framework has a layered structure, comprising modeling, simulation, and network layers corresponding to the DEVS-based modeling and simulation architecture. Based on this framework, we developed methods at each layer to overcome performance issues in simulating large-scale cellular space models. Specifically, to increase runtime and memory efficiency when simulating large numbers of cells, we applied Dynamic Structure DEVS (DSDEVS) to cellular space modeling and carried out comprehensive performance measurements. DSDEVS improves simulation performance by making the simulation focus only on the active models, and is thus more efficient than loading the entire cellular space. To reduce the number of simulation cycles caused by extensive message passing among cells, we developed a pre-schedule modeling approach that exploits the model behavior to improve simulation performance. At the network layer, we developed a modified time-warp algorithm that supports parallel simulation of DEVS-based cellular space models. The developed methods have been applied to large-scale wildfire spread simulations based on the DEVS-FIRE simulation environment and have achieved significant performance results.
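The dynamic-structure idea of simulating only active cells can be illustrated with a toy event-driven spread model in which quiescent cells never enter the event list. This is a didactic sketch, far simpler than the DEVS-FIRE environment.

```python
import heapq

def spread(grid_size, ignition, burn_time=1.0, until=10.0):
    """Toy discrete-event cell-space simulation: only ignited (active)
    cells carry events, echoing DSDEVS's focus on active models.
    Returns each reached cell's ignition time."""
    events = [(0.0, ignition)]            # (time, cell) priority queue
    ignited_at = {ignition: 0.0}
    while events:
        t, (x, y) = heapq.heappop(events)
        if t > until:
            break
        # Ignite the four quiescent neighbors one burn_time later.
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < grid_size and 0 <= ny < grid_size \
                    and (nx, ny) not in ignited_at:
                ignited_at[(nx, ny)] = t + burn_time
                heapq.heappush(events, (t + burn_time, (nx, ny)))
    return ignited_at
```

Note that the event list never holds the whole grid, only the front of active cells; that is the memory and runtime advantage the dissertation measures at scale.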
Styles APA, Harvard, Vancouver, ISO, etc.
37

Dodonov, Evgueni. « Uma abordagem de predição da dinâmica comportamental de processos para prover autonomia a ambientes distribuídos ». Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-05082009-205709/.

Texte intégral
Résumé :
A evolução de sistemas distribuídos resultou em aumento significativo de complexidade para manutenção e gerenciamento, tornando pouco eficientes técnicas convencionais baseadas em intervenções manuais. Isso motivou pesquisas que deram origem ao paradigma de computação autônoma (Autonomic Computing), que provê aspectos de auto-configuração, auto-recuperação, auto-otimização e auto-proteção a fim de tornar sistemas auto-gerenciáveis. Nesse contexto, esta tese teve como objetivo prover autonomia a ambientes distribuídos, sem a necessidade de mudar o paradigma de programação e as aplicações de usuários. Para isso, propôs-se uma abordagem que emprega técnicas para compreensão e predição de dinâmicas comportamentais de processos, utilizando abordagens de sistemas dinâmicos, inteligência artificial e teoria do caos. Os estudos realizados no decorrer desta pesquisa demonstraram que, ao predizer padrões comportamentais, pode-se otimizar diversos aspectos de computação distribuída, suportando tomadas de decisão autônomas pelos ambientes. Para validar a abordagem proposta, foi desenvolvida uma política de escalonamento distribuído, denominada PredRoute, a qual utiliza o conhecimento sobre o comportamento de processos para otimizar, transparentemente, a alocação de recursos. Experimentos realizados demonstraram que essa política aumenta o desempenho em até 4 ordens de grandeza e apresenta baixo custo computacional, o que permite a sua adoção para escalonamento online de processos
The evolution of distributed systems resulted in a significant growth in management and support complexity, which exposed the inefficiency of conventional management techniques based on manual interventions. This has motivated research on the concept of Autonomic Computing, which provides aspects of self-configuration, self-healing, self-optimization and self-protection, aiming at developing computer systems capable of self-management. In this context, this thesis was conceived with the goal of providing autonomy to distributed systems without changing the programming paradigm or user applications. In order to reach this goal, we proposed an approach which employs techniques capable of modelling and predicting the dynamics of application behavior, using concepts introduced in dynamical systems, artificial intelligence, and chaos theory. The obtained results demonstrated that, by predicting behavioral patterns, it is possible to optimize several aspects of distributed computing, providing support for autonomic computing capabilities in distributed environments. In order to validate the proposed approach, a distributed scheduling policy was developed, named PredRoute, which uses the knowledge about process behavior to transparently optimize resource allocation. Experimental results demonstrated that this policy can improve system performance by up to four orders of magnitude while requiring a considerably low computational cost, which suggests its adoption for online process scheduling in distributed environments.
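A behavior-prediction-driven scheduler in the spirit of PredRoute can be sketched as follows. The exponential-smoothing predictor is a deliberately simple stand-in for the dynamical-systems and chaos-theory models the thesis actually employs, and the routing rule is illustrative.

```python
def ema_predict(history, alpha=0.5):
    """Exponentially weighted prediction of a process's next resource
    demand from its observed history (simplistic stand-in model)."""
    pred = history[0]
    for x in history[1:]:
        pred = alpha * x + (1 - alpha) * pred
    return pred

def pred_route(process_histories, hosts):
    """Greedily place each process on the host with the lowest
    predicted aggregate load (illustrative PredRoute-like policy)."""
    load = {h: 0.0 for h in hosts}
    placement = {}
    for name, hist in sorted(process_histories.items()):
        target = min(hosts, key=lambda h: load[h])
        placement[name] = target
        load[target] += ema_predict(hist)
    return placement
```

The point of predicting, rather than measuring, load is that placement decisions can be made before the imbalance materializes, which is what keeps the policy cheap enough for online scheduling.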
Styles APA, Harvard, Vancouver, ISO, etc.
38

Meslmawy, Mahdi Abed Salman. « Efficient ressources management in a ditributed computer system, modeled as a dynamic complex system ». Thesis, Le Havre, 2015. http://www.theses.fr/2015LEHA0007/document.

Texte intégral
Résumé :
Les grilles et les clouds sont deux types de systèmes informatiques distribués (en anglais DCS) aujourd'hui largement répandus. Ces DCS sont des systèmes complexes au sens où leur comportement global émergent résulte de l'interaction décentralisée de leurs composants, et n'est pas guidé directement de manière centralisée. Dans notre étude, nous présentons un modèle de système complexe qui gère de la manière la plus efficace possible les ressources d'un DCS. Les entités d'un DCS réagissent à l'instabilité du système et s'ajustent aux conditions environnementales pour optimiser la performance du système. La structure des réseaux d'interaction qui permettent un accès rapide et sécurisé aux ressources disponibles est étudiée, et des améliorations sont proposées.
Grids and clouds are two currently widespread types of distributed computing systems, or DCSs. DCSs are complex systems in the sense that their emergent global behavior results from the decentralized interaction of their parts and is not guided directly from a central point. In our study, we present a complex system model that efficiently manages the resources of a DCS. The entities of the DCS react to system instability and adjust to environmental conditions in order to optimize system performance. The structure of the interaction networks that allow fast and reliable access to the available resources is studied, and improvements are proposed.
Styles APA, Harvard, Vancouver, ISO, etc.
39

Neto, Benedito José de Almeida. « SYSSU-DTS : um sistema de suporte à computação ubíqua baseado em espaço de tuplas distribuído ». Universidade Federal do Ceará, 2013. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=12677.

Texte intégral
Résumé :
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
não há
A evolução das tecnologias móveis favorece o surgimento de sistemas capazes de antever as necessidades do usuário e se adaptar às variações de seu contexto de forma imperceptível. Tais sistemas, denominados sistemas ubíquos, enfrentam o desafio da adaptação dinâmica em um cenário altamente distribuído, heterogêneo e volátil, uma vez que pode se tornar difícil coletar e processar informações contextuais oriundas de fontes desconhecidas e distribuídas. O problema em questão é o gerenciamento de dados contextuais em cenários sujeitos a mobilidade e conexões intermitentes entre dispositivos móveis e servidores. A fim de facilitar o desenvolvimento de sistemas ubíquos, este trabalho estende um sistema de suporte existente, chamado SysSU (LIMA et al., 2011), que foi baseado em espaços de tuplas centralizado. Com o objetivo de gerenciar informações de contexto distribuídas, é adotada uma abordagem de espaço de tuplas descentralizada, oferecendo aos componentes dos sistemas ubíquos a capacidade de interação e cooperação em situações de total descentralização. Sendo assim, esta dissertação propõe o SysSU-DTS (System Support for Ubiquity - Distribute Tuple Space), um sistema de suporte que fornece a funcionalidade de coordenação de sistemas ubíquos em ambientes abertos, onde nenhuma suposição sobre os recursos disponíveis deve ser feita. O SysSU-DTS é focado em sistemas ubíquos baseado em dispositivos móveis, como smartphones, tablets e ultrabooks, que podem se comunicar através de redes móveis Ad hoc (MANET - Mobile Ad hoc Network). O SysSU-DTS representa informações contextuais por meio de tuplas e permite o acesso transparente a informações de contexto disponíveis, estejam elas localizadas dentro do dispositivo móvel, em um servidor ou em outro dispositivo móvel próximo.
A partir do acesso a informações de contexto oriundas de diferentes provedores, as aplicações ubíquas e sensíveis ao contexto que adotem o suporte do SysSU-DTS podem ter uma visão do contexto global das entidades envolvidas no sistema. Além disso, o SysSU-DTS implementa um mecanismo de escopo que permite a formação de subconjuntos de informações contextuais disponíveis, evitando gerenciamento de informações desnecessárias. São apresentados resultados experimentais obtidos em uma avaliação de desempenho realizada em um testbed composto por smartphones e tablets. Esta avaliação demonstra a viabilidade prática da abordagem proposta e como o SysSU-DTS promove a distribuição de informações de contexto adaptando-se dinamicamente a provedores de contexto locais, infra-estruturados e distribuídos em redes Ad hoc.
The evolution of mobile technologies allows the emergence of ubiquitous systems, able to anticipate the user's needs and to seamlessly adapt to context changes. These systems face the problem of dynamic adaptation in a highly distributed, heterogeneous and volatile environment, since it may be difficult to collect and process context information from unknown, distributed sources. The problem faced is the management of contextual data in scenarios with mobility and intermittent connections between mobile devices and servers. In order to facilitate the development of such systems, this work extends an existing support system based on centralized tuple spaces, called SysSU (LIMA et al., 2011), aiming at the management of distributed information. Hence, a decentralized tuple space approach is adopted, offering to the components of ubiquitous systems the capability of interaction and cooperation in scenarios of total decentralization. Thus, this work introduces SysSU-DTS (System Support for Ubiquity - Distribute Tuple Space), a support system that provides functionality for coordinating ubiquitous systems in open environments, where no assumptions about available resources should be made. It focuses on ubiquitous systems based on mobile devices such as smartphones, tablets and ultrabooks, which can communicate through a Mobile Ad hoc Network (MANET). SysSU-DTS represents context information as tuples and allows transparent access to spread context, as follows: (i) local access, which reads an internal device tuple space; (ii) infrastructured access, which reads tuple spaces located on a server over an infrastructured network; or (iii) Ad hoc access, which interacts directly with tuple spaces located in nearby devices via the formation of an Ad hoc network. From the access to different context providers, ubiquitous and context-aware applications using SysSU-DTS's support can gain insight into the global context of the system's entities.
In addition, SysSU-DTS implements a scope mechanism that allows the formation of subsets of the available contextual information. This mechanism restricts access to contextual tuples to members of the same scope, avoiding unnecessary information management. This dissertation reports experimental results obtained in a performance evaluation using a testbed of smartphones and tablets. The evaluation shows the practical feasibility of our approach and points out how SysSU-DTS supports context data distribution while dynamically adapting to local, infrastructured, and Ad hoc distributed context providers.
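A minimal tuple space with the scope mechanism described above might look as follows. This is an illustrative sketch of the coordination model, not the SysSU-DTS API.

```python
class ScopedTupleSpace:
    """Minimal tuple-space sketch with scopes: reads only match tuples
    published in the caller's scope, which is how the scope mechanism
    keeps unrelated contextual information out of each other's way."""
    def __init__(self):
        self._tuples = []                     # list of (scope, tuple)

    def out(self, scope, tup):
        """Publish a tuple into a scope."""
        self._tuples.append((scope, tup))

    def rd(self, scope, pattern):
        """Non-destructive read; None in the pattern is a wildcard.
        Returns the first matching tuple in the scope, or None."""
        for s, tup in self._tuples:
            if s == scope and len(tup) == len(pattern) and all(
                    p is None or p == v for p, v in zip(pattern, tup)):
                return tup
        return None
```

In the full system, the same operations would transparently span local, server-hosted and nearby-device tuple spaces; here a single in-memory list stands in for all three.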
Styles APA, Harvard, Vancouver, ISO, etc.
40

Cyriac, Aiswarya. « Verification of communicating recursive programs via split-width ». PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2014. http://tel.archives-ouvertes.fr/tel-01015561.

Texte intégral
Résumé :
This thesis investigates automata-theoretic techniques for the verification of physically distributed machines communicating via unbounded reliable channels. Each of these machines may run several recursive programs (multi-threading). A recursive program may also use several unbounded stack and queue data-structures for its local-computation needs. Such real-world systems are so powerful that all verification problems become undecidable. We introduce and study a new parameter called split-width for the under-approximate analysis of such systems. Split-width is the minimum number of splits required in the behaviour graphs to obtain disjoint parts which can be reasoned about independently. Thus it provides a divide-and-conquer approach for their analysis. With the parameter split-width, we obtain optimal decision procedures for various verification problems on these systems like reachability, inclusion, etc. and also for satisfiability and model checking against various logical formalisms such as monadic second-order logic, propositional dynamic logic and temporal logics. It is shown that behaviours of a system have bounded split-width if and only if they have bounded clique-width. Thus, by Courcelle's results on uniformly bounded-degree graphs, split-width is not only sufficient but also necessary to get decidability for MSO satisfiability checking. We then study the feasibility of distributed controllers for our generic distributed systems. We propose several controllers, some finite state and some deterministic, which ensure that the behaviours of the system have bounded split-width. Such a distributedly controlled system yields decidability for the various verification problems by inheriting the optimal decision procedures for split-width. These also extend or complement many known decidable subclasses of systems studied previously.
APA, Harvard, Vancouver, ISO, etc. styles
41

Shoker, Ali. « Byzantine fault tolerance from static selection to dynamic switching ». Toulouse 3, 2012. http://thesesups.ups-tlse.fr/1924/.

Full text
Abstract:
Byzantine Fault Tolerance (BFT) is becoming crucial with the revolution of online applications and the increasing number of innovations in computer technologies. Although dozens of BFT protocols have been introduced in the previous decade, their adoption by practitioners remains disappointing. To some extent, this indicates that existing protocols are perhaps not yet convincing or satisfactory enough. The problem is that researchers are still trying to establish 'the best protocol' using traditional methods, e.g., by designing new protocols. However, theoretical and experimental analyses demonstrate that it is hard to achieve one-size-fits-all BFT protocols. We believe that smarter tactics, like 'fastening fragile sticks together with a rope to obtain a solid stick', are necessary to circumvent the issue. In this thesis, we introduce the first BFT selection model and algorithm that automate and simplify the election of the 'preferred' BFT protocol among a set of candidates. The selection mechanism operates in three modes: Static, Dynamic, and Heuristic. For the two latter modes, we present a novel BFT system, called Adapt, that reacts to potential changes in the system conditions and switches dynamically between existing BFT protocols, i.e., seeking adaptation. The Static mode allows BFT users to choose a single BFT protocol only once. This is quite useful in Web Services and Clouds, where BFT can be sold as a service (and signed into the SLA contract). This mode is basically designed for systems whose state does not fluctuate too much. In this mode, an evaluation process is in charge of matching the user preferences against the profiles of the nominated BFT protocols, considering both reliability and performance. The elected protocol is the one that achieves the highest evaluation score. The mechanism is well automated via mathematical matrices, and produces selections that are reasonable and close to reality.
Some systems, however, may experience fluttering conditions, like variable contention or message payloads. In this case, the Static mode will not be efficient, since a chosen protocol might not fit the new conditions; the Dynamic mode solves this issue. Adapt combines a collection of BFT protocols and switches between them, thus adapting to changes of the underlying system state. Consequently, the 'preferred' protocol is always polled for each system state. This yields an optimal quality of service, i.e., reliability and performance. Adapt monitors the system state through its Event System, and uses a Support Vector Regression method to conduct run-time predictions of the protocols' performance (e.g., throughput, latency, etc.). Adapt also operates in a Heuristic mode: using predefined heuristics, this mode adjusts the user preferences to improve the selection process. The evaluation of our approach shows that selecting the 'preferred' protocol is automated and close to reality in the Static mode. In the Dynamic mode, Adapt always achieves the optimal performance among the available protocols. The evaluation also demonstrates that the overall system performance can be improved significantly, and explores cases where switching between protocols is not worthwhile. This is made possible by conducting predictions with high accuracy, which can exceed 98% in many cases. Finally, the thesis shows that Adapt can be made smarter through the use of heuristics.
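As a rough illustration of the Static mode's evaluation step, the selection can be pictured as a weighted match between user preferences and per-protocol profiles, with the highest-scoring protocol elected. The protocol names, criteria and scores below are invented for the example and are not taken from Adapt:

```python
# Hypothetical sketch of a static BFT protocol election: weight each
# protocol's profile by the user's preference weights and pick the
# highest-scoring one. All numeric values are illustrative.

def elect_protocol(preferences, profiles):
    """preferences: criterion -> weight; profiles: name -> (criterion -> score).
    Returns the protocol name with the best weighted score."""
    def score(profile):
        return sum(preferences[c] * profile.get(c, 0.0) for c in preferences)
    return max(profiles, key=lambda name: score(profiles[name]))

profiles = {
    "PBFT":    {"throughput": 0.6, "latency": 0.7, "fault_scalability": 0.8},
    "Zyzzyva": {"throughput": 0.9, "latency": 0.9, "fault_scalability": 0.4},
    "Q/U":     {"throughput": 0.7, "latency": 0.8, "fault_scalability": 0.5},
}
# A user who values robustness over raw speed:
prefs = {"throughput": 0.2, "latency": 0.2, "fault_scalability": 0.6}
print(elect_protocol(prefs, profiles))  # → PBFT
```

A matrix formulation, as in the thesis, would stack the profile rows into a matrix and multiply by the preference vector; the argmax of the resulting score vector is the elected protocol.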
APA, Harvard, Vancouver, ISO, etc. styles
42

Lagarde, Matthieu, Philippe Gaussier and Pierre Andry. « Apprentissage de nouveaux comportements : vers le développement épigénétique d'un robot autonome ». Phd thesis, Université de Cergy Pontoise, 2010. http://tel.archives-ouvertes.fr/tel-00749761.

Full text
Abstract:
The problem of learning behaviors on an autonomous robot raises many questions related to motor control, behavior encoding, behavioral strategies and action selection. A developmental approach is of particular interest in autonomous robotics: the robot's behavior rests on low-level mechanisms whose interactions allow more complex behaviors to emerge. The robot has no a priori information about its physical characteristics or its environment; it must learn its own sensorimotor dynamics. I began my thesis with the study of a low-level imitation model. From a developmental point of view, imitation is present from birth and accompanies, in many forms, the development of the young child. It serves a learning function, proving an asset in terms of behavior-acquisition time, as well as a communication function, helping to initiate and maintain natural, non-verbal interactions. Moreover, even without any real intention to imitate, observing another agent provides enough information to reproduce the task. My work therefore first consisted in applying and testing a developmental model that allows low-level imitation behaviors to emerge on an autonomous robot. This model is built as a homeostat that tends to balance, through action, its coarse perceptual information (motion detection, color detection, joint angles of a robot arm). Thus, when a human moves a hand in the robot's visual field, the ambiguity of the robot's perception makes it confuse the human hand with the end of its own arm.
From the resulting error, a behavior of immediate imitation of the human's gestures emerges through the action of the homeostat. Of course, such a model requires the robot to be able to first associate the visual positions of its effector with the proprioceptive information of its motors. Thanks to the imitation behavior, the robot performs movements that it can then learn in order to build more complex behaviors. How, then, to go from a simple movement to a more complex gesture that may involve an object or a place? I propose an architecture that allows a robot to learn a behavior in the form of complex temporal sequences of movements (with repeated elements). Two different models for sequence learning were developed and tested. The first learns online the timing of simple temporal sequences. As this model cannot learn complex sequences, the second model tested relies on the properties of a reservoir of dynamics and learns complex sequences online. From this work, an architecture learning the timing of a complex sequence was proposed. Tests in simulation and on the robot showed the need to add a resynchronization mechanism that recovers the right hidden states, so that a complex sequence can be started from an intermediate state. Third, my work consisted in studying how two sensorimotor strategies can coexist within a navigation task. The first strategy encodes the behavior from spatial information, while the second uses temporal information. The two architectures were tested independently on the same task, then merged and executed in parallel. The fusion of the responses delivered by the two strategies was achieved using dynamic neural fields.
A "chunking" mechanism representing the robot's instantaneous state (the current place together with the current action) resynchronizes the dynamics of the temporal sequences. In parallel, a number of programming and design problems for the neural networks appeared. Indeed, our networks can contain several hundred thousand neurons, and it becomes difficult to run them on a single computing unit. How can neural architectures be designed under constraints of computation distribution, network communication and real time? Another part of my work consisted in providing tools for the modeling, communication and real-time execution of distributed architectures. Finally, within the European project Feelix Growing, I also took part in integrating my work with that of the LASA laboratory at EPFL, for learning complex behaviors combining navigation, gesture and object. In conclusion, this thesis allowed me to develop new models for learning behaviors, in time and in space, as well as new tools for managing very large neural networks, and to discuss, through the limitations of the current system, the important elements of an action-selection system.
APA, Harvard, Vancouver, ISO, etc. styles
43

Mendonça, Rafael Mathias de. « Algoritmos distribuídos para alocação dinâmica de tarefas em enxame de robôs ». Universidade do Estado do Rio de Janeiro, 2014. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=8140.

Full text
Abstract:
Swarm Intelligence was proposed based on the observation of the social behavior of insect, bird and fish species. The central idea of this collective behavior is to perform a complex task by decomposing it into many simple tasks that can easily be carried out by the individuals of the swarm. The coordinated execution of these simple tasks, while adhering to a pre-defined proportion of execution, achieves the original complex task. The task-allocation problem arises from the need to assign tasks to individuals in a coordinated fashion, allowing good management of the swarm. Task allocation is a dynamic process, as it must be continuously adjusted in response to changes in the environment, in the swarm configuration and/or in the swarm's performance. Swarm robotics emerges from this context of collective cooperation, extended to real robots. In this approach, complex problems are solved by performing complex tasks with swarms of simple robots that have limited processing and communication capabilities. To achieve flexibility and reliability, the allocation should emerge as the result of a distributed process. With the decentralization of the problem and the growing number of robots in the swarm, the allocation process acquires a high complexity. The task-allocation problem can thus be characterized as an optimization process that assigns tasks to robots so that the desired proportion is met once the optimization finds the desired solution. In this dissertation, we propose two algorithms that follow distinct approaches to the dynamic task-allocation problem, one local and the other global. The algorithm for dynamic task allocation with a local approach (ADTL) updates each robot's task assignment based on a deterministic assessment of the robot's current knowledge about the tasks allocated to the other robots of the swarm.
The algorithm for dynamic task allocation with a global approach (ADTG) updates the swarm's task allocation through an optimization process inspired by PSO (Particle Swarm Optimization). In ADTG, each robot holds a candidate solution for the swarm's allocation, which is continuously updated through the exchange of information between the robots. The allocations are evaluated for their fitness in meeting the goal proportion. When the fittest allocation in the swarm is identified, all robots of the swarm are allocated to the tasks defined by this allocation. The proposed algorithms were implemented on swarms with different arrangements of real robots, and the results obtained attest to their efficiency and efficacy.
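The ADTG idea can be sketched in a few lines: each robot carries a candidate swarm-wide allocation, fitness measures how closely a candidate's task proportions match the goal proportion, and the fittest candidate found is adopted by the whole swarm. The update rule below (random perturbation) is a crude stand-in for the actual PSO update, and all names and parameters are our assumptions, not the thesis's formulation:

```python
import random
from collections import Counter

def fitness(allocation, target, n_tasks):
    """Negative total deviation between actual and goal task proportions
    (0.0 means the goal proportion is matched exactly)."""
    counts = Counter(allocation)
    n = len(allocation)
    return -sum(abs(counts.get(t, 0) / n - target[t]) for t in range(n_tasks))

def adtg_round(candidates, target, n_tasks, mutation=0.1):
    """One round: every robot perturbs its candidate allocation, then the
    fittest candidate of the round is returned for swarm-wide adoption."""
    for cand in candidates:
        for i in range(len(cand)):
            if random.random() < mutation:
                cand[i] = random.randrange(n_tasks)
    return max(candidates, key=lambda c: fitness(c, target, n_tasks))

random.seed(0)
n_robots, n_tasks = 12, 3
target = {0: 0.5, 1: 0.25, 2: 0.25}   # goal proportion of robots per task
candidates = [[random.randrange(n_tasks) for _ in range(n_robots)]
              for _ in range(n_robots)]
best = list(candidates[0])
for _ in range(200):
    cand = adtg_round(candidates, target, n_tasks)
    if fitness(cand, target, n_tasks) > fitness(best, target, n_tasks):
        best = list(cand)
print(round(fitness(best, target, n_tasks), 3))
```

With 12 robots and the 50/25/25 target, a perfect allocation (6, 3, 3 robots per task) has fitness 0.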
APA, Harvard, Vancouver, ISO, etc. styles
44

Mastio, Matthieu. « Modèles de distribution pour la simulation de trafic multi-agent ». Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1147/document.

Full text
Abstract:
Nowadays, analyzing and predicting the behavior of transport networks are crucial elements for the implementation of territorial management policies. Computer simulation of road traffic is a powerful tool for testing management strategies before deploying them in an operational context. Simulating city-wide traffic, however, requires significant computing power, exceeding the capacity of a single computer. This thesis studies methods to perform large-scale multi-agent traffic simulations. We propose solutions for distributing the execution of such simulations over a large number of computing cores. One of them distributes the agents directly over the available cores, while the second partitions the environment in which the agents evolve. Graph-partitioning methods are studied for this purpose, and we propose a partitioning procedure specially adapted to multi-agent traffic simulation. A dynamic load-balancing algorithm is also developed to optimize the performance of the distributed microscopic simulation. The proposed solutions have been tested on a real network representing the Paris-Saclay area. They are generic and can be applied to most existing simulators. The results show that distributing the agents greatly improves the performance of the macroscopic simulation, whereas partitioning the environment is better suited to microscopic simulation. Our load-balancing algorithm also significantly improves the efficiency of the environment-based distribution.
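The load-balancing goal behind such a distribution can be illustrated with a deliberately simple stand-in (the thesis's procedure builds on graph partitioning, which this sketch ignores): spread road segments, weighted by the number of agents they currently carry, as evenly as possible over the cores. Segment names and loads are invented:

```python
import heapq

def balance(segment_loads, n_cores):
    """Greedy longest-processing-time assignment: heaviest segments first,
    always onto the currently lightest core.
    Returns a dict core -> list of segment ids."""
    heap = [(0, core) for core in range(n_cores)]   # (total load, core id)
    heapq.heapify(heap)
    assignment = {core: [] for core in range(n_cores)}
    for seg, load in sorted(segment_loads.items(), key=lambda kv: -kv[1]):
        total, core = heapq.heappop(heap)
        assignment[core].append(seg)
        heapq.heappush(heap, (total + load, core))
    return assignment

# Hypothetical road segments with their current agent counts:
loads = {"A": 120, "B": 80, "C": 75, "D": 40, "E": 40, "F": 25}
print(balance(loads, 2))
```

In a traffic simulator this would run periodically, since agent counts per segment drift as vehicles move; a real partitioner must also keep adjacent segments together to limit inter-core communication, which is exactly where graph partitioning comes in.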
APA, Harvard, Vancouver, ISO, etc. styles
45

Ruiz, Anthony. « Simulations Numériques Instationnaires de la Combustion Turbulente et Transcritique dans les Moteurs Cryotechniques ». Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2012. http://tel.archives-ouvertes.fr/tel-00691975.

Full text
Abstract:
Over the last 50 years, most design parameters of cryogenic rocket engines have been tuned without a detailed understanding of flame dynamics, owing to the limits of experimental diagnostics and computing capabilities. The objective of this thesis is to carry out high-fidelity unsteady numerical simulations of transcritical reacting flows, to enable a better understanding of flame dynamics in cryogenic engines and ultimately guide their improvement. First, real-gas thermodynamics and its impact on numerical schemes are presented. Since Large-Eddy Simulation (LES) involves filtered equations, the filtering effects induced by real-gas thermodynamics are then highlighted in a typical transcritical configuration, and an artificial diffusion operator specific to real gases is proposed to smooth transcritical gradients in LES. Second, a fundamental study of turbulent mixing and combustion in the near-injector region of cryogenic engines is conducted using Direct Numerical Simulation (DNS). In the non-reacting case, vortex shedding in the wake of the injector lip plays a major role in turbulent mixing and causes the formation of comb-like structures already observed experimentally under similar conditions. In the reacting case, the flame remains attached to the injector lip, without local extinction, and the comb-like structures disappear. The flame structure is analyzed and different combustion modes are identified. Finally, a study of a transcritical H2/O2 jet flame, anchored on a coaxial injector with and without inner recess, is conducted. The numerical results are first validated against experimental data for the injector without recess. The recessed configuration is then compared with the reference solution without recess and with experimental data, to observe the effects of this design parameter on combustion efficiency.
APA, Harvard, Vancouver, ISO, etc. styles
46

Jimborean, Alexandra. « Adapting the polytope model for dynamic and speculative parallelization ». Phd thesis, Université de Strasbourg, 2012. http://tel.archives-ouvertes.fr/tel-00733850.

Full text
Abstract:
In this thesis, we present a Thread-Level Speculation (TLS) framework whose main feature is to speculatively parallelize a sequential loop nest in various ways, to maximize performance. We perform code transformations by applying the polyhedral model that we adapted for speculative and runtime code parallelization. For this purpose, we designed a parallel code pattern which is patched by our runtime system according to the profiling information collected on some execution samples. We show on several benchmarks that our framework yields good performance on codes which could not be handled efficiently by previously proposed TLS systems.
APA, Harvard, Vancouver, ISO, etc. styles
47

Lachat, Cédric. « Conception et validation d'algorithmes de remaillage parallèles à mémoire distribuée basés sur un remailleur séquentiel ». Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00932602.

Full text
Abstract:
The objective of this thesis was to propose, and then validate experimentally, a set of algorithmic methods enabling the parallel remeshing of distributed meshes, building on a pre-existing sequential remeshing method. This objective was reached in stages: definition of data structures and communication schemes suited to distributed meshes, allowing low-cost movement of the interfaces between subdomains across the processors of a distributed-memory architecture; use of dynamic load-balancing algorithms suited to parallel remeshing techniques; and design of parallel algorithms that split the global parallel remeshing problem into several sequential subtasks able to run concurrently on the processors of the parallel machine. These contributions were implemented in the PaMPA parallel library, building on the MMG3D (sequential remeshing of tetrahedral meshes) and PT-Scotch (parallel graph repartitioning) software building blocks. The PaMPA library thus offers the following features: transparent communication between neighboring processors of the values carried by nodes, elements, etc.; remeshing, according to user-supplied criteria, of portions of the distributed mesh, with constant quality whether the elements to be remeshed belong to a single processor or are spread over several of them; and partitioning and redistribution of the mesh load to preserve the efficiency of simulations after remeshing.
APA, Harvard, Vancouver, ISO, etc. styles
48

Taton, Christophe. « Vers l’auto-optimisation dans les systèmes autonomes ». Grenoble INPG, 2008. http://www.theses.fr/2008INPG0183.

Full text
Abstract:
La complexité croissante des systèmes informatiques rend l'administration des systèmes de plus en plus fastidieuse. Une approche à ce problème vise à construire des systèmes autonomes capables de prendre en charge eux-mêmes leur administration et de réagir aux changements de leur état et de leur environnement. Dans le contexte actuel de l'énergie rare et chère, l'optimisation des systèmes informatiques est un domaine d'administration fondamental pour améliorer leurs performances et réduire leur empreinte énergétique. Gros consommateurs d'énergie, les systèmes actuels sont statiquement configurés et réagissent assez mal aux évolutions de leur environnement, et notamment aux variations des charges de travail auxquelles ils sont soumis. L'auto-optimisation offre une réponse prometteuse à ces différents besoins en dotant les systèmes de la faculté d'améliorer leurs performances de manière autonome. Cette thèse se consacre à l'étude des algorithmes et mécanismes permettant de mettre en œuvre des systèmes autonomes auto-optimisés. Nous étudions plus particulièrement les algorithmes d'auto-optimisation fondés sur l'approvisionnement dynamique des systèmes afin d'en améliorer les performances et de maximiser le rendement des ressources. Dans le cadre du prototype Jade, plate-forme d'administration autonome à base de composants, nous proposons des algorithmes qui améliorent au mieux les performances des systèmes administrés par des adaptations des systèmes en réponse à des variations progressives ou brutales des charges auxquelles ils sont soumis. Nous montrons l'efficacité de ces algorithmes sur des services Internet et des services à messages soumis à des charges variables. Enfin, dans le but de garantir des performances optimales, nous proposons également une politique d'optimisation qui repose sur une modélisation des systèmes administrés servant à l'élaboration de configurations optimales. 
Cette politique fait l'objet d'une évaluation sur un service de surveillance d'une infrastructure distribuée. L'implantation de politiques d'administration autonome fait apparaître un certain nombre de défis en induisant diverses contraintes : le système doit être capable d'adaptation dynamique, de s'observer et de se manipuler. En réponse à ces besoins, nous nous appuyons sur le langage Oz et sa plateforme distribuée Mozart pour implanter FructOz, un canevas spécialisé dans la construction et la manipulation de systèmes à architectures distribuées dynamiques complexes, et LactOz, une bibliothèque d'interrogation des architectures dynamiques. En combinant FructOz et LactOz, on montre comment implanter des systèmes dynamiques complexes impliquant des déploiements distribués avec un haut niveau de paramétrage et de synchronisation
The increasing complexity of computer systems makes their administration even more tedious and error-prone. A general approach to this problem consists in building autonomic systems that are able to manage themselves and to handle changes of their state and their environment. While energy becomes even more scarce and expensive, the optimization of computer systems is an essential management field to improve their performance and to reduce their energetic footprint. As huge energy consumers, current computer systems are usually statically configured and behave badly in response to changes of their environment, and especially to changes of their workload. Self-optimization appears as a promising approach to these problems as it endows these systems with the ability to improve their own performance in an autonomous manner. This thesis focuses on algorithms and techniques to implement self-optimized autonomic systems. We specifically study self-optimization algorithms that rely on dynamic system provisioning in order to improve their performance and their resources’ efficiency. In the context of the Jade prototype of a component-based autonomic management platform, we propose best-effort algorithms that improve the performance of the managed systems through dynamic adaptations of the systems in response to gradual or sudden changes of their workload. We show the efficiency of these algorithms on Internet services and on messages services submitted to changing workloads. Finally, in order to guarantee optimal performance, we propose an optimization policy relying on the modelling of the managed system so as to generate optimal configurations. This policy is evaluated on a monitoring service for distributed systems. The implementation of autonomic management policies raised a number of challenges: the system is required to support dynamic adaptions, to observe itself and to take actions on itself. 
We address these needs with the Oz programming language and its distributed platform Mozart to implement the FructOz framework, dedicated to the construction and handling of complex dynamic and distributed architecture-based systems, and the LactOz library, specialized in the querying and browsing of dynamic architectures. Combining FructOz and LactOz, we show how to build complex dynamic systems involving distributed deployments with a high degree of parameterization and synchronization.
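The abstract above describes best-effort self-optimization through dynamic provisioning: adding or removing resources as the workload changes. The following is a minimal sketch of such a threshold rule; the function name, thresholds, and the utilization metric are illustrative assumptions, not the actual algorithms or API of the Jade platform.

```python
def reconfigure(replicas, load, target_low=0.3, target_high=0.7,
                min_replicas=1, max_replicas=16):
    """Best-effort sizing rule (hypothetical): keep the average per-replica
    utilization inside [target_low, target_high] by growing or shrinking
    the pool of replicas one step at a time."""
    utilization = load / replicas
    if utilization > target_high and replicas < max_replicas:
        return replicas + 1   # overloaded: provision one more replica
    if utilization < target_low and replicas > min_replicas:
        return replicas - 1   # underloaded: release one replica
    return replicas           # within the target band: no change
```

For example, with 4 replicas and a load of 3.2 (utilization 0.8), the rule would provision a fifth replica, while a load of 0.8 (utilization 0.2) would release one; calling the rule periodically yields the gradual adaptation the abstract describes.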
Styles APA, Harvard, Vancouver, ISO, etc.
49

Guyeux, Christophe. « Désordre des itérations chaotiques et leur utilité en sécurité informatique ». Besançon, 2010. http://www.theses.fr/2010BESA2019.

Texte intégral
Résumé :
Chaotic iterations, a tool from discrete mathematics, are studied for the first time as a source of divergence and disorder. After using discrete mathematics to derive situations of non-convergence, these iterations are modeled as a dynamical system and studied topologically within the mathematical theory of chaos. We prove that the adjective "chaotic" is well chosen: these iterations are chaotic in the senses of Devaney, Li-Yorke, expansivity, topological entropy, the Lyapunov exponent, and so on. Since these properties were established for a topology other than the order topology, the consequences of this choice are discussed. We then show that these chaotic iterations can be ported as-is to a computer, without loss of properties, and that the finiteness of computers can be circumvented to obtain programs whose behavior is provably chaotic in the sense of Devaney, etc. This approach is followed to build a digital watermarking algorithm and a hash function that are chaotic in the strongest possible sense. In each case, the interest of working within the mathematical theory of chaos is justified, the properties to satisfy are chosen according to the intended objectives, and the resulting object is evaluated. A notion of security for steganography is introduced, to fill the lack of a tool for estimating the resistance of an information-hiding scheme against certain categories of attacks. Finally, two solutions to the problem of secure data aggregation in wireless sensor networks are proposed.
For the first time, the divergence and disorder properties of "chaotic iterations", a tool taken from the discrete mathematics domain, are studied. After having used discrete mathematics to deduce situations of non-convergence, these iterations are modeled as a dynamical system and are topologically studied within the framework of the mathematical theory of chaos. We prove that their adjective "chaotic" is well chosen: these iterations are chaotic according to the definitions of Devaney, Li-Yorke, expansivity, topological entropy, the Lyapunov exponent, and so on. These properties have been established for a topology different from the order topology, so the consequences of this choice are discussed. We show that these chaotic iterations can be computed without any loss of properties, and that it is possible to circumvent the problem of the finiteness of computers to obtain programs that are proven to be chaotic according to Devaney, etc. The procedure proposed in this document is followed to generate a digital watermarking algorithm and a hash function, which are chaotic according to the strongest possible sense. In each case, the advantages of being chaotic as defined in the mathematical theory of chaos are justified, the properties to check are chosen depending on the objectives to reach, and the programs are evaluated. A novel notion of security for steganography is introduced, to address the lack of tools for estimating the strength of an information hiding scheme against certain types of attacks. Finally, two solutions to the problem of secure data aggregation in wireless sensor networks are proposed.
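In the formulation common to this literature, a chaotic iteration updates only a strategy-selected subset of components at each step, freezing the others. The sketch below illustrates that shape; the vectorial negation, the one-component-per-step strategy, and all names are illustrative assumptions, not the thesis's actual constructions.

```python
import random

def chaotic_iterations(f, x0, strategy):
    """Generic chaotic iterations: at step t, only the components listed in
    strategy[t] are updated with f; the other components are left unchanged.
    Returns the full trajectory as a list of tuples."""
    x = list(x0)
    trace = [tuple(x)]
    for chosen in strategy:
        fx = f(x)                 # evaluate f on the current state
        for i in chosen:
            x[i] = fx[i]          # update only the selected components
        trace.append(tuple(x))
    return trace

def neg(x):
    """Vectorial negation on Boolean vectors, a classic example map."""
    return [1 - b for b in x]

# Example: a random strategy updating one component per step.
random.seed(0)
n = 4
strategy = [[random.randrange(n)] for _ in range(8)]
trace = chaotic_iterations(neg, [0, 0, 0, 0], strategy)
```

With the negation map and one component per step, each step flips exactly one bit, so the orbit never converges; the chaos results summarized above concern such systems under an adequate topology on (strategy, state) pairs.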
Styles APA, Harvard, Vancouver, ISO, etc.
50

NATALE, EMANUELE. « On the computational power of simple dynamics ». Doctoral thesis, 2017. http://hdl.handle.net/11573/934053.

Texte intégral
Résumé :
This work presents a set of analytical results regarding some elementary randomized protocols, called dynamics, for solving some fundamental computational problems. New techniques for analyzing the processes that arise from such dynamics are presented, together with concrete examples on how to build efficient and robust distributed algorithms for some important tasks using these processes as a black-box. More specifically, we analyze several dynamics such as the 3-Majority, the Averaging and the Undecided-State ones, and we show how to use them to solve fundamental problems such as plurality consensus, community detection (including the reconstruction problem in the stochastic block model), and bit dissemination (rumor spreading). We focus mainly on unstructured and random interaction models, and we also deal with scenarios in which the communication is affected by noise or when a self-stabilizing protocol is required.
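One of the dynamics named above, the 3-Majority, admits a very short description: in each round, every node samples three nodes uniformly at random and adopts the majority opinion among the samples. The sketch below simulates it on the complete graph; the tie-breaking rule (keep the current opinion when all three samples differ) is one common variant, assumed here for illustration.

```python
import random
from collections import Counter

def three_majority_round(opinions):
    """One synchronous round of the 3-Majority dynamics on the complete graph:
    each node samples three opinions uniformly at random (with replacement)
    and adopts the value seen at least twice, if any."""
    n = len(opinions)
    new = list(opinions)
    for i in range(n):
        sample = [random.choice(opinions) for _ in range(3)]
        value, count = Counter(sample).most_common(1)[0]
        if count >= 2:
            new[i] = value
        # otherwise node i keeps its current opinion (one tie-breaking variant)
    return new

def run_until_consensus(opinions, max_rounds=10_000):
    """Iterate rounds until all nodes agree; return (winner, rounds used)."""
    for t in range(max_rounds):
        if len(set(opinions)) == 1:
            return opinions[0], t
        opinions = three_majority_round(opinions)
    return None, max_rounds
```

Starting from a biased configuration (say 140 nodes holding opinion 0, 40 holding 1, and 20 holding 2), the process typically reaches consensus in a handful of rounds, and with high probability on the initial plurality value, which is the plurality-consensus behavior analyzed in the thesis.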
Styles APA, Harvard, Vancouver, ISO, etc.
