
Dissertations / Theses on the topic 'Load graph'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 37 dissertations / theses for your research on the topic 'Load graph.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Barat, Remi. "Load Balancing of Multi-physics Simulation by Multi-criteria Graph Partitioning." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0961/document.

Full text
Abstract:
Multiphysics simulations couple several computation phases. When they are run in parallel on distributed-memory architectures, minimizing the simulation time requires, in most cases, balancing the workload across computation units for each computation phase. Moreover, the data distribution must minimize the induced communication. This problem can be modeled as a multi-criteria graph partitioning problem. We associate with each vertex of the graph a vector of weights, whose components, called “criteria”, model the workload of the vertex for each computation phase. The edges between vertices indicate data dependencies, and can be given a weight representing the communication volume transferred between the two vertices. The goal is to find a partition of the vertices that both balances the weights of each part for each criterion and minimizes the “edgecut”, that is, the sum of the weights of the edges cut by the partition. The maximum allowed imbalance is provided by the user, and we search for a partition that minimizes the edgecut among all the partitions whose imbalance for each criterion is smaller than this threshold. This problem being NP-hard in the general case, this thesis aims at devising and implementing heuristics that allow us to compute such partitions efficiently. Indeed, existing tools often return partitions whose imbalance is higher than the prescribed tolerance. Our study of the solution space, that is, the set of all the partitions respecting the balance constraints, reveals that, in practice, this space is extremely large. Moreover, we prove in the mono-criterion case that a bound on the normalized vertex weights guarantees the existence of a solution and the connectivity of the solution space. Based on these theoretical results, we propose improvements to the multilevel algorithm. Existing tools implement many variations of this algorithm.
By studying their source code, we highlight these variations and their consequences in light of our analysis of the solution space. Furthermore, we define and implement two initial partitioning algorithms that focus on returning a feasible solution. Starting from a potentially imbalanced partition, they successively move vertices from one part to another. The first algorithm performs any move that reduces the imbalance, while the second performs, at each step, the move that reduces the imbalance the most. We present an original data structure that allows us to optimize the choice of the vertex to move, and that leads to partitions whose imbalance is smaller on average than that produced by existing methods. We describe the experimentation framework, named Crack, that we implemented in order to compare the various algorithms under study. This comparison is performed by partitioning a set of instances including an industrial test case and several fictitious cases. We define a method for generating realistic weight distributions corresponding to “particle-in-cell”-like simulations. Our results demonstrate the necessity of constraining the vertex weights during the coarsening phase of the multilevel algorithm. Moreover, we show the impact of the vertex ordering, which should depend on the graph topology, on the efficiency of the “Heavy-Edge” matching scheme. The various algorithms that we consider are implemented in an open-source graph partitioning software package called Scotch. In our experiments, Scotch and Crack returned a balanced partition for each execution, whereas MeTiS, currently the most widely used partitioning tool, fails a large fraction of the time. Additionally, the edgecut of the solutions returned by Scotch and Crack is equivalent to or better than the edgecut of the solutions returned by MeTiS.
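The objective described in this abstract — balance a vector of vertex weights per part while keeping the edgecut small — can be sketched in a few lines. This is a toy instance with hypothetical weights and a hand-picked partition, not the thesis code:

```python
# Sketch of the multi-criteria partitioning objective (illustrative only).
# Each vertex carries a weight vector (one component per computation phase);
# each edge carries a communication weight.

def edgecut(edges, part):
    """Sum of weights of edges whose endpoints lie in different parts."""
    return sum(w for (u, v, w) in edges if part[u] != part[v])

def imbalance(vertex_weights, part, criterion, nparts):
    """Relative imbalance for one criterion: max part load / average load - 1."""
    loads = [0.0] * nparts
    for v, weights in vertex_weights.items():
        loads[part[v]] += weights[criterion]
    avg = sum(loads) / nparts
    return max(loads) / avg - 1.0

# Tiny example: 4 vertices, 2 criteria, 2 parts.
vertex_weights = {0: (1, 2), 1: (2, 1), 2: (1, 2), 3: (2, 1)}
edges = [(0, 1, 5), (1, 2, 1), (2, 3, 5), (0, 3, 1)]
part = {0: 0, 1: 0, 2: 1, 3: 1}

print(edgecut(edges, part))                   # cut edges (1,2) and (0,3) -> 2
print(imbalance(vertex_weights, part, 0, 2))  # both criteria balanced -> 0.0
print(imbalance(vertex_weights, part, 1, 2))  # -> 0.0
```

A partitioner accepts a candidate only if `imbalance` stays below the user tolerance for every criterion, and among those candidates prefers the smallest `edgecut`.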
APA, Harvard, Vancouver, ISO, and other styles
2

Predari, Maria. "Load balancing for parallel coupled simulations." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0369/document.

Full text
Abstract:
Load balancing is an important step conditioning the performance of parallel applications. The goal is to distribute roughly equal amounts of computational load across a number of processors, while minimising interprocessor communication. A common approach to modeling the problem is based on graph structures and graph partitioning algorithms. Moreover, new challenges involve the simulation of more complex physical phenomena, where different parts of the computational domain exhibit different physical behavior. Such simulations follow the paradigm of multi-physics or multi-scale modeling approaches. Combining such different models in massively parallel computations is still a challenge for reaching high performance. Additionally, traditional load balancing algorithms are often inadequate, and more sophisticated solutions should be explored. In this thesis, we propose new graph partitioning algorithms that balance the load of such simulations, referred to as co-partitioning. We formulate this problem using graph partitioning with initially fixed vertices, which we believe efficiently represents the additional constraints of coupled simulations. We have therefore developed a direct algorithm for graph partitioning that successfully handles problems with fixed vertices. The algorithm is implemented inside the Scotch partitioner, and a series of experiments was carried out on the DIMACS graph collection. Moreover, we proposed three co-partitioning algorithms that respect the constraints of the respective coupled codes. We finally validated our algorithms through an experimental study comparing our methods with current strategies on artificial cases and on real-life coupled simulations.
APA, Harvard, Vancouver, ISO, and other styles
3

Sun, Jiawen. "The GraphGrind framework : fast graph analytics on large shared-memory systems." Thesis, Queen's University Belfast, 2018. https://pure.qub.ac.uk/portal/en/theses/the-graphgrind-framework-fast-graph-analytics-on-large-sharedmemory-systems(e1eb006f-3a68-4d05-91fe-961d04b42694).html.

Full text
Abstract:
As shared-memory systems now support terabyte-sized main memory, they provide an opportunity to perform efficient graph analytics on a single machine. Graph analytics is characterised by frequent synchronisation, which is addressed in part by shared-memory systems. However, performance is limited by load imbalance and poor memory locality, which originate in the irregular structure of small-world graphs. This dissertation demonstrates how graph partitioning can be used to optimise (i) load balance, (ii) Non-Uniform Memory Access (NUMA) locality and (iii) temporal locality of graph analytics in shared-memory systems. The developed techniques are implemented in GraphGrind, a new shared-memory graph analytics framework. First, this dissertation shows that heuristic edge-balanced partitioning results in an imbalance in the number of vertices per partition. Thus, load imbalance exists between partitions, either for loops iterating over vertices or for loops iterating over edges. To address this issue, this dissertation introduces a classification of algorithms that distinguishes whether they algorithmically benefit from edge-balanced or vertex-balanced partitioning. This classification supports the adaptation of partitions to the characteristics of graph algorithms. Evaluation shows that GraphGrind outperforms state-of-the-art shared-memory graph analytics frameworks, including Ligra by 1.46x on average and Polymer by 1.16x on average, across a variety of graph algorithms and datasets. Secondly, this dissertation demonstrates that increasing the number of graph partitions is effective for improving temporal locality, due to smaller working sets. However, increasing the number of partitions results in vertex replication in some graph data structures.
This dissertation adopts a graph layout that is immune to vertex replication, and designs an automatic graph traversal algorithm that extends the previously established traversal heuristics to a 3-way graph layout choice. This new algorithm furthermore depends upon the classification of graph algorithms introduced in the first part of the work. These techniques achieve an average speedup of 1.79x over Ligra and 1.42x over Polymer. Finally, this dissertation presents a graph ordering algorithm that challenges the widely accepted heuristic of balancing the number of edges per partition while minimising edge or vertex cut. This algorithm balances the number of edges per partition as well as the number of unique destinations of those edges. It balances edges and vertices for graphs with a power-law degree distribution. Moreover, this dissertation shows that the performance of graph ordering depends upon the characteristics of graph analytics frameworks, such as NUMA-awareness. This graph ordering algorithm achieves an average speedup of 1.87x over Ligra and 1.51x over Polymer.
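The contrast between edge-balanced and vertex-balanced partitioning that motivates the classification above can be illustrated with a toy sketch (a hypothetical degree sequence, not GraphGrind code): on a skewed, power-law-like degree distribution, giving each part an equal number of vertices leaves the edge load badly unbalanced, and balancing edges skews the vertex counts instead.

```python
def split_by_vertices(degrees, nparts):
    """Contiguous split giving each part an equal number of vertices."""
    size = len(degrees) // nparts
    return [degrees[i * size:(i + 1) * size] for i in range(nparts)]

def split_by_edges(degrees, nparts):
    """Greedy contiguous split aiming at an equal number of edge endpoints."""
    target = sum(degrees) / nparts
    parts, current, acc = [], [], 0
    for d in degrees:
        if acc >= target and len(parts) < nparts - 1:
            parts.append(current)
            current, acc = [], 0
        current.append(d)
        acc += d
    parts.append(current)
    return parts

# Skewed degree sequence: one hub, many low-degree vertices.
degrees = [100, 1, 1, 1, 1, 1, 1, 1]
v_parts = split_by_vertices(degrees, 2)
e_parts = split_by_edges(degrees, 2)
print([sum(p) for p in v_parts])  # vertex-balanced: edge loads 103 vs 4
print([len(p) for p in e_parts])  # edge-balanced: vertex counts 1 vs 7
```

A vertex-centric loop favours the first split and an edge-centric loop the second, which is why the dissertation classifies algorithms before choosing a partitioning.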
APA, Harvard, Vancouver, ISO, and other styles
4

Deveci, Mehmet. "Load-Balancing and Task Mapping for Exascale Systems." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1429199721.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Botadra, Harnish. "iC2mpi a platform for parallel execution of graph-structured iterative computations /." unrestricted, 2006. http://etd.gsu.edu/theses/available/etd-07252006-165725/.

Full text
Abstract:
Thesis (M.S.)--Georgia State University, 2006.
Title from title screen. Sushil Prasad, committee chair. Electronic text (106 p. : charts) : digital, PDF file. Description based on contents viewed June 11, 2007. Includes bibliographical references (p. 61-53).
APA, Harvard, Vancouver, ISO, and other styles
6

Yildiz, Ali. "Resource-aware Load Balancing System With Artificial Neural Networks." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607613/index.pdf.

Full text
Abstract:
As distributed systems become popular, efficient load balancing systems that take better decisions must be designed. The most important reasons that necessitate load balancing in a distributed system are heterogeneous hosts having different computing powers, external loads, and tasks running on different hosts but communicating with each other. In this thesis, a load balancing approach called RALBANN, developed using graph partitioning and artificial neural networks (ANNs), is described. The aim of RALBANN is to integrate the successful load balancing decisions of graph partitioning algorithms with the efficient decision-making mechanism of ANNs. The results showed that using ANNs for load balancing can be very beneficial. If trained enough, ANNs may balance the load as well as graph partitioning algorithms, but more efficiently.
APA, Harvard, Vancouver, ISO, and other styles
7

Zheng, Chunfang. "GRAPHICAL MODELING AND SIMULATION OF A HYBRID HETEROGENEOUS AND DYNAMIC SINGLE-CHIP MULTIPROCESSOR ARCHITECTURE." UKnowledge, 2004. http://uknowledge.uky.edu/gradschool_theses/249.

Full text
Abstract:
A single-chip, hybrid, heterogeneous, and dynamic shared-memory multiprocessor architecture is being developed which may be used for real-time and non-real-time applications. This architecture can execute any application described by a dataflow (process flow) graph of any topology; it can also dynamically reconfigure its structure at the node and processor architecture levels and reallocate its resources to maximize performance and to increase reliability and fault tolerance. Dynamic change in the architecture is triggered by changes in parameters such as application input data rates, process execution times, and process request rates. The architecture is a Hybrid Data/Command Driven Architecture (HDCA). It operates as a dataflow architecture, but at the process level rather than the instruction level. This thesis focuses on the development, testing, and evaluation of new graphical software (hdca), which first performs a static resource allocation for the architecture to meet the timing requirements of an application, and then simulates the architecture executing the application using the statically assigned resources and parameters. While simulating the architecture executing an application, the software graphically and dynamically displays parameters and mechanisms important to the architecture's operation and performance. The new graphical software is able to show the system- and node-level dynamic capability of the HDCA. The newly developed software can model a fixed or varying input data rate. The model also allows fault-tolerance analysis of the architecture.
APA, Harvard, Vancouver, ISO, and other styles
8

Gillet, Noel. "Optimisation de requêtes sur des données massives dans un environnement distribué." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0553/document.

Full text
Abstract:
Distributed data stores are massively used in the current context of Big Data. In addition to providing data management features, these systems have to deal with an increasing number of queries sent by distant users in order to perform data mining or data visualization operations. One of the main challenges is to evenly distribute the workload of queries between the nodes that compose these systems, in order to minimize the treatment time. In this thesis, we tackle the problem of query allocation in a distributed environment. We consider that data are replicated and that a query can be handled only by a node storing a copy of the relevant data. First, near-optimal algorithmic proposals are given when communications between nodes are asynchronous. We also consider that some nodes can be faulty. Second, we study in more depth the impact of data replication on query treatment. In particular, we present an algorithm that manages data replication based on the demand for these data. Combined with our allocation algorithm, it guarantees a near-optimal allocation. Finally, we focus on the impact of data replication when queries are received as a stream by the system. We carry out an experimental evaluation using the distributed database Apache Cassandra. The experiments confirm the benefit of our algorithmic proposals in improving query treatment, compared to the native allocation scheme in Cassandra.
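The allocation constraint described above — a query may only be served by a node holding a replica of the requested datum — can be sketched with a simple greedy rule (illustrative data only, not the thesis algorithms): send each query to the least-loaded node among those storing the requested item.

```python
def allocate(queries, replicas, nnodes):
    """Greedy allocation: each query goes to the least-loaded replica holder."""
    load = [0] * nnodes
    assignment = []
    for item in queries:
        candidates = replicas[item]              # nodes holding a copy of `item`
        node = min(candidates, key=lambda n: load[n])
        load[node] += 1
        assignment.append(node)
    return assignment, load

# 3 nodes; each item replicated on 2 of them.
replicas = {"a": [0, 1], "b": [1, 2], "c": [0, 2]}
queries = ["a", "a", "b", "b", "c", "c"]
assignment, load = allocate(queries, replicas, 3)
print(load)  # [2, 2, 2] -- evenly spread despite the replica constraint
```

With a skewed replica placement this greedy rule can still end up unbalanced, which is why the thesis also adapts the replication itself to the demand.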
APA, Harvard, Vancouver, ISO, and other styles
9

Tbaileh, Ahmad Anan. "Robust Non-Matrix Based Power Flow Algorithm for Solving Integrated Transmission and Distribution Systems." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/89362.

Full text
Abstract:
This work presents an alternative approach to power system computations, Graph Trace Analysis (GTA), and applies GTA to the power flow problem. A novel power flow algorithm is presented, where GTA traces are used to implement a modified Gauss-Seidel algorithm coupled with a continuation method. GTA is derived from the Generic Programming Paradigm of computer science. It uses topology iterators to move through components in a model and perform calculations. Two advantages that GTA brings are the separation of system equations from component equations and the ability to distribute calculations across processors. The implementation of KVL and KCL in GTA is described. The GTA-based power flow algorithm is shown to solve IEEE standard transmission models, IEEE standard distribution models, and integrated transmission and distribution models (hybrid models) constructed from modifying IEEE standard models. The GTA power flow is shown to solve a set of robustness testing circuits, and solutions are compared with other power flow algorithms. This comparison illustrates convergence characteristics of different power flow algorithms in the presence of voltage stability concerns. It is also demonstrated that the GTA power flow solves integrated transmission and distribution system models. Advantages that the GTA power flow brings are the ability to solve realistic, complex circuit models that pose problems to many traditional algorithms; the ability to solve circuits that are operating far from nominal conditions; and the ability to solve transmission and distribution networks together in the same model.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
10

Cheng, Danling. "Integrated System Model Reliability Evaluation and Prediction for Electrical Power Systems: Graph Trace Analysis Based Solutions." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/28944.

Full text
Abstract:
A new approach to the evaluation of the reliability of electrical systems is presented. In this approach, a Graph Trace Analysis based approach is applied to integrated system models and reliability analysis. The analysis zones are extended from the traditional power system functional zones. The systems are modeled using containers with iterators, where the iterators manage graph edges and are used to process through the topology of the graph. The analysis provides a means of computationally handling dependent outages and cascading failures. The effects of adverse weather, time-varying loads, equipment age, installation environment, and operating conditions are considered. Sequential Monte Carlo simulation is used to evaluate the reliability changes for different system configurations, including distributed generation and transmission lines. Historical weather records and loading are used to update the component failure rates on-the-fly. Simulation results are compared against historical reliability field measurements. Given a large and complex plant to operate, a real-time understanding of the networks and their situational reliability is important to operational decision support. This dissertation also introduces the use of an Integrated System Model in helping operators to minimize real-time problems. A real-time simulation architecture is described, which predicts where problems may occur, how serious they may be, and what the possible root cause is.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
11

Silva, Rafael Ennes. "Escalonamento estático de programas-MPI." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2006. http://hdl.handle.net/10183/11472.

Full text
Abstract:
A good performance of a parallel application is obtained according to how the parallelization techniques are applied. To make use of these techniques, it is necessary to find an appropriate way to extract the parallelism. This extraction can be done through a representative graph of the application. In this work, graph partitioning methods are applied to optimize the communication between processes that belong to a parallel computation. In this context, the process allocation aims to minimize the amount of communication between processors. This technique is frequently adopted in High Performance Computing (HPC). However, the graph construction is generally embedded in the program, whose private data structures are employed in building the graph. The proposal is to use tools directly on MPI programs, employing only the standard resources of the MPI 1.2 norm. The goal is to provide a portable library (b-MPI) for the static scheduling of MPI programs. The static scheduling performed by the library is done through the mapping of processes. This mapping seeks to cluster the processes that exchange a lot of information on the same machine, which decreases the data volume travelling through the network. The mapping is done statically after a previous execution of the MPI program. The target applications for b-MPI are those that keep the same communication pattern across successive executions. The library was validated using the Fast Fourier Transform available in the FFTW package, the solution of a heat transfer problem through the additive Schwarz method and multigrid, and the LU factorization implemented in the HPL benchmark. The results show that b-MPI can be used to distribute the processes efficiently, minimizing the volume of messages travelling through the network.
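The mapping idea described above — co-locate the processes that exchange the most data — can be sketched with a greedy pairing heuristic. The traffic matrix, machine sizes, and function names below are hypothetical; b-MPI's actual algorithm may differ:

```python
def map_processes(comm, nmachines, slots):
    """Greedy mapping: walk process pairs by decreasing traffic and try to
    co-locate each pair on a machine that still has free slots."""
    pairs = sorted(comm, key=comm.get, reverse=True)
    machine_of = {}
    count = [0] * nmachines
    for (p, q) in pairs:
        for proc in (p, q):
            if proc not in machine_of:
                # prefer the partner's machine if it has a free slot
                partner = q if proc == p else p
                target = machine_of.get(partner)
                if target is None or count[target] >= slots:
                    target = min(range(nmachines), key=lambda m: count[m])
                machine_of[proc] = target
                count[target] += 1
    return machine_of

# 4 processes, 2 machines with 2 slots each; pairwise traffic volumes.
comm = {(0, 1): 90, (2, 3): 80, (0, 2): 5, (1, 3): 1}
mapping = map_processes(comm, 2, 2)
print(mapping)  # the heavy pairs (0,1) and (2,3) each share a machine
```

Only the light edges (0,2) and (1,3) cross machines here, so most of the traffic stays off the network, which is the effect the library aims for.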
APA, Harvard, Vancouver, ISO, and other styles
12

Helders, Fredrik. "Visualizing Carrier Aggregation Combinations." Thesis, Linköpings universitet, Kommunikationssystem, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160132.

Full text
Abstract:
As wireless communication is becoming an increasingly important part of our everyday lives, the amount of transmitted data is constantly growing, creating a demand for ever-increasing data rates. One of the technologies used for boosting data rates is carrier aggregation, which allows wireless units to combine multiple connections to the cellular network. However, there is a limited number of possible combinations defined, meaning that there is a need to search for the best combination in any given setup. This thesis introduces software capable of organizing the defined combinations into tree structures, simplifying the search for optimal combinations as well as allowing for visualization of the possible connections. In the thesis, a proposed method of creating these trees is presented, together with suggestions on how to visualize important combination characteristics. Studies have also been made of different tree traversal algorithms, showing that there is little need for searching through all possible combinations, and that a greedy approach has high performance while substantially limiting the search complexity.
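The comparison between exhaustive and greedy traversal reported above can be sketched on a generic scored tree (hypothetical node scores, not the actual carrier aggregation combination data):

```python
def best_leaf_exhaustive(tree, node):
    """Best leaf score reachable from `node`, visiting every branch."""
    children = tree.get(node)
    if not children:
        return node[1]                      # leaf: (name, score)
    return max(best_leaf_exhaustive(tree, c) for c in children)

def best_leaf_greedy(tree, node):
    """Descend by always taking the highest-scoring child."""
    children = tree.get(node)
    if not children:
        return node[1]
    return best_leaf_greedy(tree, max(children, key=lambda c: c[1]))

# Nodes are (name, score) pairs; the dict maps a node to its children.
tree = {
    ("root", 0): [("a", 3), ("b", 2)],
    ("a", 3):    [("a1", 4), ("a2", 7)],
    ("b", 2):    [("b1", 6)],
}
print(best_leaf_exhaustive(tree, ("root", 0)))  # 7, after visiting all leaves
print(best_leaf_greedy(tree, ("root", 0)))      # 7, visiting one path only
```

Here the greedy descent happens to reach the optimum while visiting a single path; the thesis's finding is that in practice the greedy traversal performs close to exhaustive search at a fraction of the search cost.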
13

Fourestier, Sébastien. "Redistribution dynamique parallèle efficace de la charge pour les problèmes numériques de très grande taille." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00873816.

Full text
Abstract:
This thesis addresses the problem of efficient parallel dynamic load redistribution for very large-scale numerical problems. We first present a state of the art of algorithms for solving the partitioning, repartitioning, static mapping and remapping problems. Our first contribution studies, in a sequential setting, the algorithmic features desirable for parallel repartitioning methods. We present our contribution to the design of a k-way multilevel scheme for the sequential computation of repartitionings. The most demanding part of this adaptation concerns the uncoarsening phase; one of our major contributions was to draw on influence methods in order to adapt a diffusion-based refinement algorithm to the repartitioning problem. Our second contribution concerns the implementation of these methods on parallel machines. Adapting the parallel multilevel scheme required evolving the algorithms and data structures used for partitioning. This work is accompanied by an experimental analysis, made possible by the implementation of the considered algorithms within the Scotch library.
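The diffusion-based refinement mentioned in the abstract can be pictured with a generic sketch (this is not the Scotch implementation): each step, neighbouring processors exchange a fraction of their load difference until the distribution flattens.

```python
# Minimal, generic diffusion load-balancing sketch on a processor graph:
# load flows along each edge proportionally to the load difference.

def diffuse(load, edges, alpha=0.5, steps=50):
    load = list(load)
    for _ in range(steps):
        delta = [0.0] * len(load)
        for i, j in edges:
            flow = alpha * (load[i] - load[j]) / 2.0
            delta[i] -= flow   # sender loses the flow
            delta[j] += flow   # receiver gains it
        load = [l + d for l, d in zip(load, delta)]
    return load

# Four processors in a line, all load initially on processor 0.
balanced = diffuse([8.0, 0.0, 0.0, 0.0], edges=[(0, 1), (1, 2), (2, 3)])
assert all(abs(l - 2.0) < 0.1 for l in balanced)
```

Total load is conserved at every step, and the iteration converges toward the uniform distribution; real repartitioners additionally have to keep the migrated vertices contiguous and cheap to move.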
14

Al-Ameri, Shehab Ahmed. "A framework for assessing robustness of water networks and computational evaluation of resilience." Thesis, Cranfield University, 2016. http://dspace.lib.cranfield.ac.uk/handle/1826/12334.

Full text
Abstract:
Arid regions tend to take careful measures to ensure water supplies are secured to consumers, to help provide the basis for further development. The distribution network is the most expensive part of the water supply infrastructure and it must maintain performance during unexpected incidents. Many aspects of performance have previously been discussed separately, including reliability, vulnerability, flexibility and resilience. This study aimed to develop a framework to bring together these aspects as found in the literature and industry practice, and bridge the gap between them. Semi-structured interviews with water industry experts were used to examine the presence and understanding of robustness factors. Thematic analysis was applied to investigate these and inform a conceptual framework including the component and topological levels. Robustness was described by incorporating network reliability and resiliency. The research focused on resiliency as a network-level concept derived from flexibility and vulnerability. To utilise this new framework, the study explored graph theory to formulate metrics for flexibility and vulnerability that combine network topology and hydraulics. The flexibility metric combines hydraulic edge betweenness centrality, representing hydraulic connectivity, and hydraulic edge load, measuring utilised capacity. Vulnerability captures the impact of failures on the ability of the network to supply consumers, and their sensitivity to disruptions, by utilising node characteristics, such as demand, population and alternative supplies. These measures together cover both edge (pipe) centric and node (demand) centric perspectives. The resiliency assessment was applied to several literature benchmark networks prior to using a real case network. The results show the benefits of combining hydraulics with topology in robustness analysis. 
The assessment helps to identify components or sections of importance for future expansion plans or maintenance purposes. The study provides a novel viewpoint overarching the gap between literature and practice, incorporating different critical factors for robust performance.
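The flexibility metric described above combines an edge's centrality with its utilised capacity. A much-simplified sketch of that idea follows; the graph, utilisation figures, and the exact scoring formula are invented for illustration and hydraulics are ignored.

```python
from itertools import permutations
from collections import deque

# Illustrative sketch (not the thesis' exact formulation): score each edge by
# its shortest-path betweenness times its spare capacity.

def shortest_paths(adj, s, t):
    """All shortest paths from s to t by breadth-first search over paths."""
    paths, best = [], None
    q = deque([[s]])
    while q:
        path = q.popleft()
        if best is not None and len(path) > best:
            continue                      # longer than a known shortest path
        node = path[-1]
        if node == t:
            best = len(path)
            paths.append(path)
            continue
        for n in adj[node]:
            if n not in path:
                q.append(path + [n])
    return paths

def edge_betweenness(adj):
    """Fraction of shortest paths crossing each edge, over all ordered pairs."""
    count = {}
    for s, t in permutations(adj, 2):
        paths = shortest_paths(adj, s, t)
        for p in paths:
            for e in zip(p, p[1:]):
                e = tuple(sorted(e))
                count[e] = count.get(e, 0) + 1.0 / len(paths)
    return count

# Square network; utilisation (used fraction of capacity) is invented.
adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
util = {("A", "B"): 0.9, ("A", "C"): 0.2, ("B", "D"): 0.9, ("C", "D"): 0.2}
bc = edge_betweenness(adj)
flex = {e: bc[e] * (1.0 - util[e]) for e in util}

# Lightly loaded edges on equally central routes score as most flexible.
assert flex[("A", "C")] > flex[("A", "B")]
```

The thesis replaces plain hop counts with hydraulic quantities, which is what lets the metric reflect both topology and flow.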
15

Vuchener, Clement. "Equilibrage de charges dynamique avec un nombre variable de processeurs basé sur des méthodes de partitionnement de graphe." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0012/document.

Full text
Abstract:
Load balancing is an important step conditioning the performance of parallel programs. If the workload varies drastically during the simulation, the load must be redistributed regularly among the processors. Dynamic load balancing is a well-studied subject, but most studies are limited to an initially fixed number of processors. Adjusting the number of processors at runtime makes it possible to preserve the efficiency of the parallel code, or to keep the simulation running when the memory of the current resources is exceeded. In this thesis, we propose methods based on graph repartitioning to rebalance the load while changing the number of processors; we call this problem "M x N repartitioning". These methods are split into two main steps. First, we study the migration phase and build a "good" migration matrix minimizing several metrics, such as the migration volume and the number of exchanged messages. Second, we use graph partitioning heuristics to compute a new distribution optimizing the migration according to the results of the previous step. We also propose a direct k-way partitioning algorithm that improves our biased partitioning. Finally, an experimental study validates our algorithms against state-of-the-art partitioning tools.
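The migration matrix at the heart of the abstract can be sketched concretely. The example below is invented (it is not the thesis' construction algorithm): given an old M-part and a new N-part assignment of weighted vertices, the M x N matrix directly exposes the migration volume and message count, under the modeling choice that part i of the old distribution stays on the processor owning part i of the new one.

```python
# Build the M x N migration matrix from old/new partition assignments.

def migration_matrix(old, new, weight, m, n):
    mat = [[0] * n for _ in range(m)]
    for v, w in weight.items():
        mat[old[v]][new[v]] += w
    return mat

old = {0: 0, 1: 0, 2: 1, 3: 1}
new = {0: 0, 1: 1, 2: 1, 3: 2}          # repartition from M=2 to N=3 parts
weight = {0: 4, 1: 2, 2: 3, 3: 5}
mat = migration_matrix(old, new, weight, 2, 3)

# Off-diagonal entries are data that must move between processors.
volume = sum(mat[i][j] for i in range(2) for j in range(3) if i != j)
messages = sum(1 for i in range(2) for j in range(3) if i != j and mat[i][j])
assert volume == 7 and messages == 2
```

Minimizing these two quantities pulls in different directions (few large messages versus little total data), which is why the thesis treats the matrix construction as an optimization step of its own.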
16

Sattar, Naw Safrin. "Scalable Community Detection using Distributed Louvain Algorithm." ScholarWorks@UNO, 2019. https://scholarworks.uno.edu/td/2640.

Full text
Abstract:
Community detection (or clustering) in large-scale graphs is an important problem in graph mining. Communities reveal interesting characteristics of a network. Louvain is an efficient sequential algorithm but fails to scale to emerging large-scale data. Developing distributed-memory parallel algorithms is challenging because of inter-process communication and load-balancing issues. In this work, we design a shared-memory-based algorithm using OpenMP, which shows a 4-fold speedup but is limited to the available physical cores. Our second algorithm is an MPI-based parallel algorithm that scales to a moderate number of processors. We also implement a hybrid algorithm combining both. Finally, we incorporate dynamic load balancing in our final algorithm, DPLAL (Distributed Parallel Louvain Algorithm with Load-balancing). DPLAL overcomes the performance bottleneck of the previous algorithms, showing around a 12-fold speedup when scaling to a larger number of processors. Overall, we present the challenges, our solutions, and the empirical performance of our algorithms on several large real-world networks.
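Louvain greedily optimises the modularity score, which the following sketch computes directly from its definition (the parallelisation issues the thesis addresses are not shown; the graph and communities are toy data).

```python
# Modularity Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)

def modularity(adj, community):
    m = sum(len(v) for v in adj.values()) / 2          # number of edges
    deg = {u: len(adj[u]) for u in adj}
    q = 0.0
    for u in adj:
        for v in adj:
            if community[u] == community[v]:
                a = 1.0 if v in adj[u] else 0.0
                q += a - deg[u] * deg[v] / (2 * m)
    return q / (2 * m)

# Two triangles joined by one edge: the natural two-community split
# scores higher than putting every vertex in one community.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4, 5], 4: [3, 5], 5: [3, 4]}
two = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
one = {v: 0 for v in adj}
assert modularity(adj, two) > modularity(adj, one)
```

Putting the whole graph in one community always yields Q = 0, so any positive score signals genuine community structure; distributed Louvain variants must evaluate the same gains while vertices live on different processes.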
17

Hong, Changwan. "Code Optimization on GPUs." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1557123832601533.

Full text
18

Mastio, Matthieu. "Modèles de distribution pour la simulation de trafic multi-agent." Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1147/document.

Full text
Abstract:
Nowadays, analysis and prediction of transport network behaviour are crucial for the implementation of territorial management policies. Computer simulation of road traffic is a powerful tool for testing management strategies before deploying them in an operational context. Simulating city-wide traffic, however, requires significant computing power, exceeding the capacity of a single computer. This thesis studies methods for performing large-scale multi-agent traffic simulations. We propose solutions allowing such simulations to be distributed over a large number of computing cores. One of them distributes the agents directly over the available cores, while the second splits the environment on which the agents evolve. Graph partitioning methods are studied for this purpose, and we propose a partitioning procedure specially adapted to multi-agent traffic simulation. A dynamic load-balancing algorithm is also developed to optimize the performance of the distributed microscopic simulation. The proposed solutions have been tested on a real network representing the Paris-Saclay area. They are generic and can be applied to most existing simulators. The results show that distributing the agents greatly improves the performance of the macroscopic simulation, whereas distributing the environment is better suited to microscopic simulation. Our load-balancing algorithm also significantly improves the efficiency of the environment-based distribution.
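A dynamic load-balancing step of the kind the abstract mentions can be sketched in its simplest form (this is a generic illustration, not the thesis' algorithm): work units are reassigned from the busiest to the idlest core until the imbalance drops below a tolerance.

```python
# Repeatedly move one unit of work from the most loaded core to the least
# loaded one until the spread is within the tolerance.

def rebalance(loads, tol=1):
    loads = list(loads)
    while max(loads) - min(loads) > tol:
        loads[loads.index(max(loads))] -= 1
        loads[loads.index(min(loads))] += 1
    return loads

assert rebalance([10, 2, 3, 1]) == [4, 4, 4, 4]
```

In a traffic simulator the "units" would be agents or environment cells, and moving them has a migration cost, which is why the thesis ties rebalancing to the graph partition rather than shuffling work freely.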
19

Jule, Alan. "Etude des codes en graphes pour le stockage de données." Thesis, Cergy-Pontoise, 2014. http://www.theses.fr/2014CERG0739.

Full text
Abstract:
For two decades, the technological revolution has been above all digital, driving strong growth in the amount of data to be stored. The cost per byte reveals that the evolution of hardware storage solutions cannot follow this expansion, so data storage solutions need deep improvement. This is feasible by increasing the size of storage networks and by reducing the number of backup copies in data centres. In this thesis, we introduce a new algorithm that combines sparse graph code construction and node allocation. This algorithm can reach the high performance of MDS codes in terms of the ratio R between the number of parity disks and the number of simultaneous failures that can be repaired without loss, while benefiting from the low encoding and reconstruction complexity of sparse graph codes. We also study Spatially-Coupled LDPC (SC-LDPC) codes, which are known to have optimal asymptotic performance over the binary erasure channel, in order to anticipate the behaviour of their decoding for distributed storage applications.
Choosing an erasure code usually requires compromising between different parameters. To inform this choice and complete the state of the art, we include two comparative theoretical studies. The first deals with the computational complexity of data updates in an established dynamic network, determining whether the linear codes used are update-efficient. The second examines the impact on the network load of changing the parameters of the erasure code in use; this operation can be performed when the status of a file changes (for example, from hot to cold) or when the size of the network is modified by adding disks. All these studies, combined with the new construction and allocation algorithm for sparse graph codes, could lead to the construction of flexible, dynamic storage networks with low-complexity encoding and decoding algorithms.
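The erasure-coding principle underlying the abstract can be shown with the simplest possible code: a single XOR parity block (RAID-5-like, R = 1, far simpler than the sparse graph codes studied in the thesis), where any one lost block out of k data blocks plus the parity block can be rebuilt.

```python
from functools import reduce

def parity(blocks):
    """XOR equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"abcd", b"efgh", b"ijkl"]
p = parity(data)

# Lose block 1, then rebuild it from the survivors and the parity block.
survivors = [data[0], data[2], p]
rebuilt = parity(survivors)
assert rebuilt == b"efgh"
```

XOR works because the lost block cancels out of the combined parity; MDS and sparse graph codes generalise this to tolerate several simultaneous failures at different cost/complexity trade-offs.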
20

Abu-Aisheh, Zeina. "Approches anytime et distribuées pour l'appariment de graphes." Thesis, Tours, 2016. http://www.theses.fr/2016TOUR4024/document.

Full text
Abstract:
Due to the inherent genericity of graph-based representations, and thanks to the improvement of computer capacities, structural representations have become more and more popular in the field of Pattern Recognition (PR). In a graph-based representation, vertices and their attributes describe objects (or parts of them) while edges represent the interrelationships between the objects. Representing objects by graphs turns the problem of object comparison into graph matching (GM), where correspondences between the vertices and edges of two graphs have to be found. Over the last decade, researchers working on graph matching have paid particular attention to the graph edit distance (GED), notably for its ability to handle many different types of graphs; GED has thus been applied to problems ranging from molecule recognition to image classification.
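For intuition, graph edit distance can be computed naively on two tiny graphs of equal size with unit costs by trying every vertex mapping; real GED solvers, including anytime approaches like the thesis', avoid this factorial search. The graphs and labels below are invented.

```python
from itertools import permutations

def ged(adj1, lab1, adj2, lab2):
    """Exact unit-cost GED for equal-size graphs via brute-force mappings."""
    n = len(lab1)
    best = float("inf")
    for perm in permutations(range(n)):
        cost = sum(lab1[i] != lab2[perm[i]] for i in range(n))  # substitutions
        for i in range(n):
            for j in range(i + 1, n):
                # mismatched edge -> one insertion or deletion
                cost += adj1[i][j] != adj2[perm[i]][perm[j]]
        best = min(best, cost)
    return best

# A labelled triangle vs. a path: one label change plus one edge deletion.
adj1 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
adj2 = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]
assert ged(adj1, ["C", "C", "O"], adj2, ["C", "C", "C"]) == 2
```

Since the number of mappings grows as n!, practical solvers explore a search tree of partial assignments with bounds, which is exactly where anytime and distributed strategies come in.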
21

Катюха, Ігор Анатолійович. "Прогнозні моделі електричних навантажень розподільчих мереж в умовах невизначеності вихідної інформації." Thesis, Таврійський державний агротехнологічний університет, 2017. http://repository.kpi.kharkov.ua/handle/KhPI-Press/31033.

Full text
Abstract:
Thesis for the degree of Candidate of Technical Sciences, specialty 05.14.02 «Electric power stations, networks and systems» (141 Electrical energetics, electrical engineering and electromechanics). – Tavricheskiy State Agrotechnology University, MES of Ukraine, Melitopol; National Technical University «Kharkiv Polytechnic Institute», MES of Ukraine, Kharkiv, 2017. The dissertation is devoted to solving the topical scientific and applied problem of developing modern scientific and methodological tools for forecasting electricity consumption under uncertainty, taking into account the information-support capabilities of automated systems for commercial electricity metering and the features of individual consumers, with the aim of increasing the efficiency of electricity use and saving electric energy. The method of fuzzy regression analysis is improved for building long-term forecasting models of electrical loads in distribution networks. Two performance criteria of fuzzy models, the degree of compatibility and the degree of fuzziness, are given equal weight when building the forecasting models. A method is developed for correcting long-term forecasting models for short-term forecasting. An approach is proposed for constructing the form of forecasting models for any type of load. An analytical relationship is established between the fuzzy forecast-accuracy indicators and the mean absolute percentage error. The developed technique was tested by building forecasting models of the electrical load of a number of consumers with different types of load curves. The main results of the dissertation have found practical application in the form of a hardware and software complex for automating long-term and operational forecasting of consumers' electrical loads, which can be integrated into an automated electricity metering system and used in the operational control of distribution network modes.
As a result of the research, the following scientific results were obtained: the method for obtaining forecasting models of electrical loads is improved, distinguished by a compatibility criterion for fuzzy regression based on the intersection of fuzzy numbers, which makes it possible to resolve the uncertainty of the input data and improve the quality of load forecasts; a method for unifying the form of forecasting models is proposed for the first time, distinguished by identifying functional sections in the daily electricity-consumption curve and applying fuzzy regression analysis to them separately, which makes it possible to obtain the form of forecasting models for any load; a method for increasing the adequacy of models obtained by fuzzy regression analysis is proposed for the first time, distinguished by giving equal weight to the degree of compatibility and the degree of fuzziness of the description, making load forecasts more accurate; and an analytical method for assessing the efficiency of electricity-consumption forecasts in electrical networks is defined for the first time, allowing fuzzy regression forecast models to be compared with models obtained by other methods.
The practical significance of the results for the electric power industry lies in the developed method of forecasting electricity consumption based on fuzzy regression analysis, which includes the principles for constructing forecasting models, the algorithmic support, and software implemented in a form convenient for integration into automated commercial electricity metering systems. The results have been implemented and confirmed by the corresponding acts: at PE «Molokozavod-OLKOM», for the analysis of electricity consumption and operational control aimed at reducing energy losses in the in-plant power supply network; in the educational process, where the main results are included in the courses «Practical engineering training» and «Engineering activity» for specialty 8.10010101 – Power engineering of agricultural production; and at the Pryazovske REM of OJSC «Zaporizhiaoblenergo», where the hardware and software complex is used for long-term and short-term forecasting of electrical loads and operational control of distribution network modes. The main scientific results are published in 15 printed works, including 9 articles in professional scientific journals of Ukraine (5 in journals indexed in international scientometric databases, 1 in a Scopus-indexed journal) and 6 in conference and seminar proceedings. The dissertation consists of an introduction, abstract, four chapters, conclusions, a list of references and appendices. Its total volume is 190 pages, including 18 figures, 20 tables, a list of 115 references on 12 pages, and appendices on 19 pages.
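The mean absolute percentage error (MAPE), which the thesis relates analytically to its fuzzy accuracy indicators, is straightforward to compute; the load values below are invented for illustration.

```python
# MAPE = (100/n) * sum(|actual - forecast| / actual)

def mape(actual, forecast):
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

actual   = [120.0, 100.0, 80.0]   # measured hourly loads (kW, invented)
forecast = [114.0, 105.0, 80.0]   # model output (invented)
assert abs(mape(actual, forecast) - 10.0 / 3.0) < 1e-9
```

Because MAPE divides by the actual value, it is undefined for zero loads and penalises errors on light-load hours more heavily, which is one reason fuzzy indicators are attractive as a complementary accuracy measure.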
22

Perarnau, Swann. "Environnements pour l'analyse expérimentale d'applications de calcul haute performance." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00650047.

Full text
Abstract:
High-performance computing (HPC) machines are steadily growing in complexity. Nowadays, each compute node may consist of several chips or several cores sharing various memory caches hierarchically. Whether to understand the performance obtained by an application on these architectures or to develop new algorithms and validate their performance, an experimentation phase is often necessary. In this thesis, we focus on two forms of experimental analysis: execution on real machines and simulation of algorithms on random input data. In either case, controlling the parameters of the environment (hardware or input data) allows a better analysis of the performance of the application under study. We thus propose two methods for controlling an application's use of a machine's hardware resources: one for the allocated processor time and the other for the amount of memory cache available. These two methods notably allow us to study how an application's behaviour changes with the amount of allocated resources. Based on a modification of the operating system's behaviour, we implemented these methods for a Linux system and demonstrated their usefulness in the analysis of several parallel applications. On the simulation side, we studied the problem of randomly generating directed acyclic graphs (DAGs) for the simulation of scheduling algorithms. Although many generation algorithms exist in this field, most publications rely on ad-hoc, poorly validated implementations of them. To remedy this problem, we propose a generation environment comprising the majority of the methods encountered in the literature.
To validate this environment, we carried out large analysis campaigns using Grid'5000, notably regarding the known statistical properties of certain methods. We also show that the performance of an algorithm is strongly influenced by the chosen input-generation method, to the point of observing inversion phenomena: changing the generation algorithm reverses the outcome of a comparison between two schedulers.
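One common family of random-DAG generators (a layer-by-layer, Erdős–Rényi-style variant among the kinds of methods such an environment would cover; the parameters below are invented) can be sketched as follows. Edges only go from one layer to the next, so the result is acyclic by construction.

```python
import random

def random_layered_dag(layers, width, p, seed=0):
    """Random DAG: `layers` layers of `width` nodes; each forward edge
    between consecutive layers is kept with probability p."""
    rng = random.Random(seed)       # seeded for reproducible experiments
    nodes = [[(l, i) for i in range(width)] for l in range(layers)]
    edges = [(u, v)
             for l in range(layers - 1)
             for u in nodes[l]
             for v in nodes[l + 1]
             if rng.random() < p]
    return [n for layer in nodes for n in layer], edges

nodes, edges = random_layered_dag(layers=3, width=4, p=0.5)
# Every edge goes from layer l to layer l+1, hence no cycles.
assert all(u[0] + 1 == v[0] for u, v in edges)
assert len(nodes) == 12
```

Different generators bias structural properties (depth, width, edge density) differently, which is precisely the source of the scheduler-comparison inversions the thesis reports.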
23

Катюха, Ігор Анатолійович. "Прогнозні моделі електричних навантажень розподільчих мереж в умовах невизначеності вихідної інформації." Thesis, НТУ "ХПІ", 2017. http://repository.kpi.kharkov.ua/handle/KhPI-Press/31032.

Full text
Abstract:
Дисертація на здобуття наукового ступеня кандидата технічних наук за спеціальністю 05.14.02 – електричні станції, системи та мережі. – Таврійський державний агротехнологічний університет, Мелітополь, 2017. Дисертаційна робота присвячена розв’язанню актуальної науково-прикладної задачі розроблення сучасного науково-методичного апарату прогнозу споживання електричної енергії в умовах невизначеності, що враховує можливості інформаційного забезпечення автоматизованих систем комерційного обліку електроенергії та особливості окремих споживачів і має на меті підвищення ефективності використання та збереженняелектричної енергії. Вдосконалено метод нечіткого регресійного аналізу для побудови довгострокових прогнозних моделей електричних навантажень в розподільчих мережах. Враховано паритетну участь двох критеріїв ефективності нечітких моделей: степені суміщення та степені нечіткості при побудові прогнозних моделей. Розроблено метод корекції довгострокових прогнозних моделей для короткострокового прогнозу. Запропоновано підхід до побудови виду прогнозних моделей при будь-яких типах навантажень. Аналітично встановлено зв'язок нечітких показників точності прогнозу з відносною середньомодульною похибкою. Розроблену методику апробовано при розробці прогнозних моделей електричного навантаження ряду споживачів з різними типами графіків навантаження. Основні результати дисертації знайшли практичне застосування у вигляді програмно-апаратного комплексу для автоматизації процесу довгострокового та оперативного прогнозування електричних навантажень електроспоживачів, що може інтегруватись інформаційно в автоматизовану систему обліку електроенергії, а також при оперативному керуванні режимами розподільчих мереж.
Dissertation for the scientific degree of Candidate of Technical Sciences, specialty 05.14.02 – electric power stations, networks and systems. – Tavria State Agrotechnological University, Melitopol, 2017. The dissertation is devoted to solving the topical scientific and applied problem of developing modern methodological tools for forecasting electricity consumption under uncertainty, taking into account the information available from automated commercial electricity metering systems and the features of individual consumers, with the aim of improving the efficiency of electricity use and conservation. The method of fuzzy regression analysis is improved for building long-term forecasting models of electric loads in distribution networks. Two performance criteria of fuzzy models are given equal weight when building predictive models: the degree of fit and the degree of fuzziness. A method is developed for correcting long-term forecasting models for short-term forecasting. An approach is proposed for choosing the form of forecasting models for any type of load. An analytical relation is established between fuzzy forecast-accuracy indicators and the mean absolute percentage error. The technique was tested by developing predictive load models for a number of consumers with different types of load curves. The main results of the dissertation found practical application in the form of a hardware and software complex that automates long-term and operational forecasting of consumers' electric loads, which can be integrated into an automated electricity metering system and used in the operational control of distribution network regimes.
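The abstract relates fuzzy forecast-accuracy indicators to the mean absolute percentage error (MAPE). As a point of reference, here is a minimal sketch of the crisp MAPE measure itself; the load values are hypothetical, not data from the thesis:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 / len(actual) * sum(
        abs(a - f) / abs(a) for a, f in zip(actual, forecast)
    )

# Hypothetical hourly load values (MW) and a forecast of them.
actual = [10.0, 12.0, 11.0, 9.0]
forecast = [11.0, 12.0, 10.0, 9.0]
err = mape(actual, forecast)  # ~4.77 %
```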
APA, Harvard, Vancouver, ISO, and other styles
24

Mert, Mecnun. "Effect Of High Hydrostatic Pressure On Microbial Load And Quality Parameters Of Grape Juice." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611536/index.pdf.

Full text
Abstract:
Effect of high hydrostatic pressure (150, 200, 250 MPa) on the microbial load and quality parameters (pH, color, 5-hydroxymethylfurfural (HMF)) of white (Sultaniye) and red (Alicante Bouschet) grape juices, in combination with temperature (20, 30, 40°C) and holding time (5, 10, 15 min), was studied. Increased pressure and temperature had a significant effect on microbial reduction in white and red grape juices (p<0.05). The effect of pressure and time on the pH drop was insignificant (p>0.05). HHP resulted in ΔE<1 for white grape and ΔE<7 for red grape juice samples. Shelf-life analysis of HHP-treated white grape juice (200 MPa, 40°C, 10 min) and red grape juice (250 MPa, 40°C, 10 min) revealed no microbial growth for up to 90 days when stored at 25°C. Although HMF formation was observed in industrially manufactured, pasteurized samples (65°C for 30 min), no HMF was detected in HHP-treated white and red grape juices. HHP at the suggested conditions can be recommended as a better production alternative to heat treatment for white and red grape juice with respect to microbial load and the studied quality parameters, even at temperatures lower than required for pasteurization.
APA, Harvard, Vancouver, ISO, and other styles
25

Casadei, Astrid. "Optimisations des solveurs linéaires creux hybrides basés sur une approche par complément de Schur et décomposition de domaine." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0186/document.

Full text
Abstract:
Dans cette thèse, nous nous intéressons à la résolution parallèle de grands systèmes linéaires creux. Nous nous focalisons plus particulièrement sur les solveurs linéaires creux hybrides directs itératifs tels que HIPS, MaPHyS, PDSLIN ou ShyLU, qui sont basés sur une décomposition de domaine et une approche « complément de Schur ». Bien que ces solveurs soient moins coûteux en temps et en mémoire que leurs homologues directs, ils ne sont néanmoins pas exempts de surcoûts. Dans une première partie, nous présentons les différentes méthodes de réduction de la consommation mémoire déjà existantes et en proposons une nouvelle qui n’impacte pas la robustesse numérique du préconditionneur construit. Cette technique se base sur une atténuation du pic mémoire par un ordonnancement spécifique des tâches de calcul, d’allocation et de désallocation des blocs, notamment ceux se trouvant dans les parties « couplage » des domaines. Dans une seconde partie, nous nous intéressons à la question de l’équilibrage de la charge que pose la décomposition de domaine pour le calcul parallèle. Ce problème revient à partitionner le graphe d’adjacence de la matrice en autant de parties que de domaines désirés. Nous mettons en évidence le fait que pour avoir un équilibrage correct des temps de calcul lors des phases les plus coûteuses d’un solveur hybride tel que MaPHyS, il faut à la fois équilibrer les domaines en termes de nombre de noeuds et de taille d’interface locale. Jusqu’à aujourd’hui, les partitionneurs de graphes tels que Scotch et MeTiS ne s’intéressaient toutefois qu’au premier critère (la taille des domaines) dans le contexte de la renumérotation des matrices creuses. Nous proposons plusieurs variantes des algorithmes existants afin de prendre également en compte l’équilibrage des interfaces locales. Toutes nos modifications sont implémentées dans le partitionneur Scotch, et nous présentons des résultats sur de grands cas de tests industriels.
In this thesis, we focus on the parallel solving of large sparse linear systems. Our main interest is in direct-iterative hybrid solvers such as HIPS, MaPHyS, PDSLIN or ShyLU, which rely on domain decomposition and Schur complement approaches. Although these solvers are not as time and space consuming as direct methods, they still suffer from serious overheads. In a first part, we thus present the existing techniques for reducing the memory consumption, and we present a new method which does not impact the numerical robustness of the preconditioner. This technique reduces the memory peak by a special scheduling of computation, allocation, and freeing tasks, in particular in the Schur coupling blocks of the matrix. In a second part, we focus on the load balancing of the domain decomposition in a parallel context. This problem consists in partitioning the adjacency graph of the matrix into as many domains as desired. We point out that a good load balancing for the most expensive steps of a hybrid solver such as MaPHyS relies on balancing both the interior nodes and the interface nodes of the domains. Until now, however, graph partitioners such as MeTiS or Scotch optimized only the first criterion (i.e., the balancing of interior nodes) in the context of sparse matrix ordering. We propose different variations of the existing algorithms to improve the balancing of interface nodes and interior nodes simultaneously. All our changes are implemented in the Scotch partitioner. We present our results on a large collection of matrices coming from real industrial cases.
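The two balance criteria described above (interior nodes and local interface size per domain) can be evaluated for a given partition with a short sketch; the graph representation and function name below are illustrative assumptions, not MaPHyS or Scotch code:

```python
def partition_balance(adj, part, k):
    """For each domain, count interior nodes and interface nodes.

    adj  : dict node -> set of neighbour nodes
    part : dict node -> domain index in range(k)
    Returns (interior_counts, interface_counts) per domain.
    """
    interior = [0] * k
    interface = [0] * k
    for v, neighbours in adj.items():
        d = part[v]
        if any(part[u] != d for u in neighbours):
            interface[d] += 1   # v touches another domain
        else:
            interior[d] += 1
    return interior, interface

# Tiny path graph 0-1-2-3 split into two domains {0,1} and {2,3}.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
part = {0: 0, 1: 0, 2: 1, 3: 1}
print(partition_balance(adj, part, 2))  # ([1, 1], [1, 1])
```

A bi-criteria partitioner would seek to keep both returned lists balanced at once, rather than only the domain sizes.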
APA, Harvard, Vancouver, ISO, and other styles
26

Kamfer, De Witt. "The effect of maturity and crop load on the browning and concentration of phenolic compounds of Thompson Seedless and Regal Seedless." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/95886.

Full text
Abstract:
Thesis (MScAgric)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: Thompson Seedless and Regal Seedless are two white seedless table grape cultivars widely produced in South Africa. Both cultivars are susceptible to berry browning, especially Regal Seedless. Browning leads to annual financial losses for table grape growers. Although a correlation between harvest maturity and the occurrence of browning seems to exist, it is still unclear whether maturity levels are the actual contributing factor. The aim of the study was to establish if harvest maturity and crop load could influence the occurrence of browning of both cultivars. The impact of harvest maturity and crop load on phenolic compound concentration in the berry skin of both cultivars was also investigated. Total external browning of Regal Seedless and Thompson Seedless occurred in much higher percentages than internal browning. Regal Seedless showed a tendency to decreased total external browning with harvest maturity. The main reason for this is that net-like browning, which is the greatest contributor to total external browning, decreased with harvest maturity, in all three seasons. External browning of Thompson Seedless increased with harvest maturity in both seasons. Contact browning was the greatest contributor to total external browning of Thompson Seedless. Crop load did not significantly influence berry browning of Regal Seedless or Thompson Seedless grapes. The flavan-3-ol concentration (catechin, epicatechin, procyanidin B1 and procyanidin B2) in Regal Seedless generally increased with harvest maturity, whereas in Thompson Seedless the general tendency was a decrease in the flavan-3-ol concentration with harvest maturity. The development of phenolic compound concentration with maturity could not be correlated with the occurrence of berry browning. Crop load did not affect flavan-3-ol concentration. 
When the flavan-3-ol concentrations of Regal Seedless and Thompson Seedless were compared at different harvest maturities, the concentrations were clearly much higher in the skin of Regal Seedless than in the skin of Thompson Seedless (in both the 2008 and 2009 seasons). Comparison of the browning incidence with harvest maturity for these two cultivars (see above) clearly shows that external browning occurred in much higher percentages on Regal Seedless than on Thompson Seedless. The concentration of flavan-3-ols in the skin of white seedless cultivars may thus be an indication of a cultivar's susceptibility to external browning.
AFRIKAANSE OPSOMMING: Thompson Seedless en Regal Seedless is twee wit pitlose tafeldruif kultivars wat ekstensief in Suid-Afrika verbou word. Verbruining kan ‘n probleem wees by beide kultivars, spesifiek Regal Seedless. Die faktore wat aanleiding gee tot verbruining is nog nie duidelik bepaal nie. Alhoewel dit lyk of daar ‘n korrelasie tussen rypheidsgraad van die oes en verbruining kan wees is dit steeds onduidelik of oesrypheidsvlakke die werklike oorsaak van verbruining is. Die doel van die studie was om vas te stel of die rypheidsgraad van die oes en oeslading verbruining van beide kultivars kan beïnvloed. Die effek van oes rypheidsgraad en oeslading op konsentrasie van fenoliese verbindings in die korrelskil van beide kultivars is ook ondersoek. Totale eksterne verbruining van Regal Seedless en Thompson Seedless het in baie hoër persentasies voorgekom as interne verbruining. Daar was ‘n tendens by Regal Seedless dat totale eksterne verbruining verminder het soos die oes ryper geraak het as gevolg van netagtige verbruining, wat die grootste bydrae tot totale eksterne verbruining veroorsaak het. Netagtige verbruining se voorkoms het verminder oor al drie seisoene. Eksterne verbruining van Thompson Seedless het toegeneem met oes rypheid in beide seisoene. Kontak verbruining het die grootste bydrae gelewer tot totale eksterne verbruining van Thompson Seedless. Oeslading het nie ‘n betekenisvolle invloed op verbruining van Regal Seedless en Thompson Seedless gehad nie. Die flavan-3-ol (katesjien, epikatesjien, prosianidien B1 en prosianidien B2) konsentrasie van Regal Seedless het met oes rypheid toegeneem. By Thompson Seedless was daar ‘n afname in die flavan-3-ol konsentrasie met oes rypheid. Daar was geen korrelasie tussen die konsentrasie van fenoliese verbindings en die voorkoms van verbruining vir beide kultivars nie. Oeslading het nie ‘n betekenisvolle effek op die konsentrasie van fenoliese verbindings gehad nie.
Vergelyking van die flavan-3-ol konsentrasie van Regal Seedless en Thompson Seedless by verskillende rypheidsgrade wys dat die konsentrasie baie hoër in die korrel skil van Regal Seedless as in die van Thompson Seedless (vir beide 2008 & 2009 seisoene). Die vergelyking van die voorkoms van verbruining met oesrypheid van beide kultivars wys duidelik dat eksterne verbruining van Regal Seedless in baie hoër persentasies voorkom as in Thompson Seedless. Flavan-3-ol konsentrasie in die skil van wit pitlose kultivars kan ‘n aanduiding wees van die kultivar se moontlike risiko vir die voorkoms van eksterne verbruining.
APA, Harvard, Vancouver, ISO, and other styles
27

Hickey, Cain Charles. "Vines of different capacity and water status alter the sensory perception of Cabernet Sauvignon wines." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/42667.

Full text
Abstract:
Reducing disease and increasing fruit quality in vigorous vineyards with dense canopies is demanding of time and resources; unfortunately, vineyards of this nature are common in humid environments. This study investigated the effectiveness with which vine capacity and water status could be regulated as well as if they related to fruit quality and wine sensory perception. The treatments regulating vine size and water status were under-trellis groundcover, root manipulation, rootstocks, and irrigation. Treatments were arranged in a strip-split-split plot design before the introduction of the irrigation treatment resulted in incomplete replication in each block. Treatment levels were under-trellis cover crop (CC) compared to under-trellis herbicide (Herb); root restriction bags (RBG) compared to no root manipulation (NRM); three compared rootstocks (101-14, 420-A, riparia Gloire); low water stress (LOW) compared to high water stress (HIGH). Vines grown with RBG and CC regulated vegetative growth more so than conventional treatments, resulting in 56% and 23% greater cluster exposure flux availability (CEFA). High water stress (HIGH) and RBG reduced stem water potential and discriminated less against 13C. Vines grown with RBG and CC consistently reduced harvest berry weight by 17 and 6% compared to conventional treatments. Estimated phenolics were consistently increased by RBG and were correlated with berry weight, vine capacity and CEFA. Sensory attributes were significantly distinguishable between wines produced from vines that differed in both vine capacity and water status, amongst other responses. Treatments have been identified that can alter the sensory perception of wines, with the potential to improve wine quality.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
28

Hill, Brycen Thomas. "Root restriction, under-trellis cover cropping, and rootstock modify vine size and berry composition of Cabernet Sauvignon." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/75223.

Full text
Abstract:
Vineyards in the Mid-Atlantic often have large, vigorous vines that can be costly to manage and produce inadequate fruit for wine production. Dense canopies increase the incidence of fungal disease, require greater allocation of resources to manage, and inhibit fruit development. The primary objective of these studies was to determine effective vine-size modification treatments that would optimize fruit quality, while reducing labor and chemical control. Research factors included root manipulation, under-trellis ground cover, and rootstock. Treatment levels were root bag (RBG) or no root manipulation (NRM); under-trellis cover crop (CC) or herbicide strip (HERB); and one of three rootstocks: 101-14, Riparia Gloire, or 420-A. Effects of these treatments were measured in two experiments: Experiment I compared combinations of all three treatments, while Experiment II explored the individual effects of root restriction using root bags of varying volumes. Root restriction consistently demonstrated the ability to reduce vegetative growth and vine water status. In the first experiment fruit-zone photosynthetic photon flux density (PPFD) was increased by 234% in RBG vines. Timed canopy management tasks indicated that RBG canopies required about half the labor time of NRM canopies. Anthocyanin concentration and total phenolic content were increased by 20% and 19% respectively in RBG fruit. CC increased fruit-zone PPFD by 62%, and increased soluble solids and color compounds. The 420-A rootstock reduced potassium uptake, resulting in lower must potassium concentration. Results demonstrated that these treatments significantly reduce vegetative growth in a humid climate, decrease management labor, and produce higher quality fruit.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
29

Delanaux, Rémy. "Intégration de données liées respectueuse de la confidentialité." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1303.

Full text
Abstract:
La confidentialité des données personnelles est un souci majeur et un problème peu étudié pour la publication de données dans le Web des données ouvertes (ou LOD cloud, pour Linked Open Data cloud). Ce nuage formé par le LOD est un réseau d'ensembles de données interconnectés et accessibles publiquement sous la forme de graphes de données modélisés dans le format RDF, et interrogés via des requêtes écrites dans le langage SPARQL. Ce cadre très standardisé est très utilisé de nos jours par des organismes publics et des entreprises. Mais certains acteurs notamment du secteur privé sont toujours réticents à la publication de leurs données, découragés par des soucis potentiels de confidentialité. Pour pallier cela, nous présentons et développons un cadre formel déclaratif pour la publication de données liées respectant la confidentialité, dans lequel les contraintes de confidentialité et d'utilité des données sont spécifiées sous forme de politiques (des ensembles de requêtes SPARQL). Cette approche est indépendante des données et du graphe considéré, et consiste en l'analyse statique d'une politique de confidentialité et d'une politique d'utilité pour déterminer des séquences d'opérations d'anonymisation à appliquer à n'importe quel graphe RDF pour satisfaire les politiques fournies. Nous démontrons la sûreté de nos algorithmes et leur efficacité en termes de performance via une étude expérimentale. Un autre aspect à prendre en compte est qu'un nouveau graphe publié dans le nuage LOD est évidemment exposé à des failles de confidentialité car il peut être relié à des données déjà publiées dans d'autres données liées. Dans le second volet de cette thèse, nous nous concentrons donc sur le problème de construction d'anonymisations *sûres* d'un graphe RDF garantissant que relier le graphe anonymisé à un graphe externe quelconque ne causera pas de brèche de confidentialité.
En prenant un ensemble de requêtes de confidentialité en entrée, nous étudions le problème de sûreté indépendamment des données du graphe, et la construction d'une séquence d'opérations d'anonymisation permettant d'assurer cette sûreté. Nous détaillons des conditions suffisantes sous lesquelles une instance d'anonymisation est sûre pour une certaine politique de confidentialité fournie. Par ailleurs, nous montrons que nos algorithmes sont robustes même en présence de liens de type sameAs (liens d'égalité entre entités en RDF), qu'ils soient explicites ou inférés par de la connaissance externe. Enfin, nous évaluons l'impact de cette contribution assurant la sûreté de données en la testant sur divers graphes. Nous étudions notamment la performance de cette solution et la perte d'utilité causée par nos algorithmes sur des données RDF réelles comme synthétiques. Nous étudions d'abord les diverses mesures d'utilité existantes et nous en choisissons afin de comparer le graphe original et son pendant anonymisé. Nous définissons également une méthode pour générer de nouvelles politiques de confidentialité à partir d'une politique de référence, via des modifications incrémentales. Nous étudions le comportement de notre contribution sur 4 graphes judicieusement choisis et nous montrons que notre approche est efficace avec un temps très faible même sur de gros graphes (plusieurs millions de triplets). Cette approche est graduelle : plus la politique de confidentialité est spécifique, plus son impact sur les données est faible. Pour conclure, nous montrons via différentes métriques structurelles (adaptées aux graphes) que nos algorithmes ne sont que peu destructeurs, et cela même quand les politiques de confidentialité couvrent une grosse partie du graphe.
Individual privacy is a major and largely unexplored concern when publishing new datasets in the context of Linked Open Data (LOD). The LOD cloud forms a network of interconnected and publicly accessible datasets in the form of graph databases modeled using the RDF format and queried using the SPARQL language. This heavily standardized context is nowadays extensively used by academics, public institutions and some private organizations to make their data available. Yet, some industrial and private actors may be discouraged by potential privacy issues. To this end, we introduce and develop a declarative framework for privacy-preserving Linked Data publishing in which privacy and utility constraints are specified as policies, that is sets of SPARQL queries. Our approach is data-independent and only inspects the privacy and utility policies in order to determine the sequence of anonymization operations applicable to any graph instance for satisfying the policies. We prove the soundness of our algorithms and gauge their performance through experimental analysis. Another aspect to take into account is that a new dataset published to the LOD cloud is indeed exposed to privacy breaches due to the possible linkage to objects already existing in the other LOD datasets. In the second part of this thesis, we thus focus on the problem of building safe anonymizations of an RDF graph to guarantee that linking the anonymized graph with any external RDF graph will not cause privacy breaches. Given a set of privacy queries as input, we study the data-independent safety problem and the sequence of anonymization operations necessary to enforce it. We provide sufficient conditions under which an anonymization instance is safe given a set of privacy queries. Additionally, we show that our algorithms are robust in the presence of sameAs links that can be explicit or inferred by additional knowledge. 
To conclude, we evaluate the impact of this safety-preserving solution on given input graphs through experiments. We focus on the performance and the utility loss of this anonymization framework on both real-world and artificial data. We first discuss and select utility measures to compare the original graph to its anonymized counterpart, then define a method to generate new privacy policies from a reference one by inserting incremental modifications. We study the behavior of the framework on four carefully selected RDF graphs. We show that our anonymization technique is effective with reasonable runtime on quite large graphs (several million triples) and is gradual: the more specific the privacy policy is, the lesser its impact is. Finally, using structural graph-based metrics, we show that our algorithms are not very destructive even when privacy policies cover a large part of the graph. By designing a simple and efficient way to ensure privacy and utility in plausible usages of RDF graphs, this new approach suggests many extensions and in the long run more work on privacy-preserving data publishing in the context of Linked Open Data
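As a toy illustration of the policy-driven idea (not the thesis's actual algorithm, which operates on SPARQL queries and supports several anonymization operations), one can view a privacy policy as a set of triple patterns and triple deletion as one possible anonymization operation:

```python
# Triples are (subject, predicate, object) tuples; pattern components
# starting with "?" are variables that match anything.
def matches(pattern, triple):
    return all(p.startswith("?") or p == t for p, t in zip(pattern, triple))

def anonymize(graph, privacy_policy):
    """Apply one anonymization operation: delete every triple that
    any pattern of the privacy policy matches."""
    return [t for t in graph
            if not any(matches(p, t) for p in privacy_policy)]

graph = [
    ("alice", "livesIn", "Lyon"),
    ("alice", "worksFor", "acme"),
    ("acme", "locatedIn", "Lyon"),
]
policy = [("?x", "livesIn", "?city")]  # hide where individuals live
print(anonymize(graph, policy))
# [('alice', 'worksFor', 'acme'), ('acme', 'locatedIn', 'Lyon')]
```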
APA, Harvard, Vancouver, ISO, and other styles
30

Kuo, Ming-Chia, and 郭明嘉. "An Efficient Dynamic Load-Balancing Large Scale Graph-Processing System." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/kpmh4n.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
106 (ROC academic year, i.e. 2017-2018)
Since the introduction of Pregel by Google, several large-scale graph-processing systems have been introduced. These systems are based on the bulk synchronous parallel model or similar models and use various strategies to optimize performance. For example, Mizan monitors the workload of each worker to determine whether the workload is balanced across workers with respect to execution time. If it is not, Mizan migrates nodes from overloaded workers to under-loaded workers to balance the load and minimize the total execution time. On the basis of Mizan's migration plan, we implement a graph-processing system called GPSer with an efficient graph re-partitioning scheme. Our system uses statistical tools, e.g., the coefficient of variation and the correlation coefficient, to modify the migration plan and determine whether the workloads are balanced among all workers. It can accurately monitor current workloads and decide whether to migrate nodes among workers to balance the load. When imbalance arises, the workload of all workers quickly converges to a balanced state, thereby enhancing system performance. In our experiments, GPSer outperforms state-of-the-art dynamic load-balancing graph-processing systems such as Mizan.
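One of the statistical triggers mentioned, the coefficient of variation of per-worker execution times, can be sketched as follows; the threshold value is an illustrative assumption, not the one used by GPSer:

```python
from statistics import mean, stdev

def needs_rebalancing(worker_times, cv_threshold=0.2):
    """Trigger node migration when the coefficient of variation
    (stdev / mean) of per-worker superstep times exceeds a threshold."""
    return stdev(worker_times) / mean(worker_times) > cv_threshold

print(needs_rebalancing([10.0, 10.5, 9.8, 10.2]))  # False: balanced
print(needs_rebalancing([10.0, 25.0, 9.0, 8.0]))   # True: one straggler
```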
APA, Harvard, Vancouver, ISO, and other styles
31

Gao, Pu. "Generation and properties of random graphs and analysis of randomized algorithms." Thesis, 2010. http://hdl.handle.net/10012/4987.

Full text
Abstract:
We study a new method of generating random $d$-regular graphs by repeatedly applying an operation called pegging. The pegging algorithm, which applies the pegging operation in each step, is a method of generating large random regular graphs beginning with small ones. We prove that the limiting joint distribution of the numbers of short cycles in the resulting graph is independent Poisson. We use the coupling method to bound the total variation distance between the joint distribution of short cycle counts and its limit, and thereby show that $O(\epsilon^{-1})$ is an upper bound on the $\epsilon$-mixing time. The coupling involves two different, though quite similar, Markov chains that are not time-homogeneous. We also show that the $\epsilon$-mixing time is not $o(\epsilon^{-1})$, which demonstrates that the upper bound is essentially tight. We also study the connectivity of random $d$-regular graphs generated by the pegging algorithm, and show that these graphs are asymptotically almost surely $d$-connected for any even constant $d\ge 4$. The problem of orientation of random hypergraphs is motivated by the classical load balancing problem. Let $h>w>0$ be two fixed integers. Let $H$ be a hypergraph whose hyperedges are uniformly of size $h$. To {\em $w$-orient} a hyperedge, we assign exactly $w$ of its vertices positive signs with respect to this hyperedge, and the rest negative. A $(w,k)$-orientation of $H$ consists of a $w$-orientation of all hyperedges of $H$, such that each vertex receives at most $k$ positive signs from its incident hyperedges. When $k$ is large enough, we determine the threshold of the existence of a $(w,k)$-orientation of a random hypergraph. The $(w,k)$-orientation of hypergraphs is strongly related to a general version of the off-line load balancing problem. The other topic we discuss is computing the probability of induced subgraphs in a random regular graph. Let $0
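For $d = 4$, one common form of the pegging operation removes two disjoint edges and joins their four endpoints to a single new vertex, which preserves 4-regularity; a sketch under that assumption (the data structures and function name are illustrative):

```python
import random

def peg(adj):
    """One pegging step on a 4-regular graph given as an adjacency
    dict (node -> set of neighbours): pick two disjoint edges, remove
    them, and connect their four endpoints to one new vertex.  The old
    endpoints keep degree 4 and the new vertex also has degree 4."""
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    while True:
        (a, b), (c, d) = random.sample(edges, 2)
        if {a, b}.isdisjoint({c, d}):
            break
    adj[a].remove(b); adj[b].remove(a)
    adj[c].remove(d); adj[d].remove(c)
    w = max(adj) + 1
    adj[w] = {a, b, c, d}
    for x in (a, b, c, d):
        adj[x].add(w)
    return adj
```

Starting, say, from the complete graph K5 (which is 4-regular), repeated pegging grows an ever larger 4-regular graph, which is the incremental generation process the abstract analyzes.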
APA, Harvard, Vancouver, ISO, and other styles
32

Zhang, Li-Wen, and 張力文. "Application of Grid Rational Algorithm for Predicting Hydrograph (GRAPH) Model in Suspended Load Estimation for a Watershed." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/24012197829253335659.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Soil and Water Conservation
103 (ROC academic year, i.e. 2014-2015)
Geologic conditions in the Chingshui River watershed became more fragile after the Chi-Chi earthquake. Large amounts of sediment transported by torrential floods during the following typhoon seasons cause severe hazards that threaten the protected targets located in the downstream areas. However, the suspended load is uncertain and difficult to measure during a typhoon event, so an indirect estimation of suspended load, applicable to early response and disaster prevention, should be proposed. In this study, a hydrologic model (Grid Rational Algorithm for Predicting Hydrograph, GRAPH) was used to simulate the runoff hydrograph in the Chingshui watershed. The suspended load during typhoon and/or heavy-rainfall events was then estimated from hourly discharge and compared with measured data. The results show that events with high peak flow (return period greater than 5 years) have better consistency in suspended-load estimation (about 10% error), while events with medium peak flow (return period 3-4 years) show errors in the range of 30% to 50%, due to the sensitivity of the rising and recession limbs in the hydrograph simulation. This study reveals that the model is suitable for estimating suspended load under high peak flow: the higher the peak flow, the more accurate the estimate. The results also show good accuracy, with small error and variation, for suspended load estimated at a smaller time scale. The relationship between suspended load and discharge is partly lost when a larger time scale is adopted, because of the temporal dependence of the variation in sediment concentration, which varies with discharge, and the power relation existing between discharge and suspended load. Therefore, this study uses hourly discharge and suspended-sediment concentration to simulate the amount of suspended load for each torrential event, in order to obtain a more accurate estimation than with daily discharge.
Simulation results show that the concentration of suspended load was significantly affected by changes in watershed landslides. There is a negative relationship between the model's correction coefficient α and the collapse rate, while the correction coefficient β shows a positive correlation with the collapse rate. These imply that the higher the collapse rate, the worse the water-conservation ability of the watershed, and the higher the sediment concentration in the discharge.
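The power relation between discharge and suspended load mentioned above is commonly written as a rating curve Qs = a·Q^b and fitted by least squares in log-log space; a sketch with made-up numbers (the coefficients and data are illustrative, not the thesis's):

```python
import math

def fit_rating_curve(q, qs):
    """Fit Qs = a * Q**b by linear least squares on (log Q, log Qs)."""
    xs = [math.log(x) for x in q]
    ys = [math.log(y) for y in qs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical hourly discharge (m^3/s) and suspended load (t/h).
q  = [10.0, 20.0, 40.0, 80.0]
qs = [2.0, 8.0, 32.0, 128.0]        # exactly Qs = 0.02 * Q**2
a, b = fit_rating_curve(q, qs)
print(round(a, 4), round(b, 2))     # 0.02 2.0
total = sum(a * x ** b for x in q)  # event total, summed over hours
```

Summing the fitted curve over hourly discharge, as in the last line, is the kind of fine-time-scale accumulation the abstract argues is more accurate than using daily discharge.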
APA, Harvard, Vancouver, ISO, and other styles
33

Liao, Chia-Ying, and 廖家瑩. "The Effect of Trigger-based Animated Instruction on Learning Achievement and Cognitive Load in Simple Quadratic Functions Graph." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/01176774279264291417.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Degree Program of E-Learning, College of Science
98 (ROC academic year, i.e. 2009-2010)
Trigger-based animated instruction is grounded in cognitive load theory and multimedia learning theory. With trigger-based animated instruction, instructors design teaching materials that correspond to classroom instruction, which makes it easier to guide attention, helps students actively search for, select and organize information, reduces the burden on working memory as well as cognitive load, and enhances learning. This study adopted a quasi-experimental design, using the simple quadratic function graph unit of the mathematics curriculum, to examine learning achievement and cognitive load under two different instructional methods, i.e. trigger-based animated instruction and traditional instruction. The results show that trigger-based animated instruction significantly helps students learn more effectively and reduces cognitive load. Additionally, the effect was more pronounced for high-achieving students.
APA, Harvard, Vancouver, ISO, and other styles
34

González, García José Luis. "Graph Partitioning for the Finite Element Method: Reducing Communication Volume with the Directed Sorted Heavy Edge Matching." Doctoral thesis, 2019. http://hdl.handle.net/11858/00-1735-0000-002E-E625-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Bagchi, Arijit. "Modeling the Power Distribution Network of a Virtual City and Studying the Impact of Fire on the Electrical Infrastructure." Thesis, 2009. http://hdl.handle.net/1969.1/147910.

Full text
Abstract:
The smooth and reliable operation of key infrastructure components like water distribution systems, electric power systems, and telecommunications is essential for a nation's economic growth and overall security. Tragic events such as the Northridge earthquake and Hurricane Katrina have shown us how the occurrence of a disaster can cripple one or more such critical infrastructure components and cause widespread damage and destruction. Technological advancements made over the last few decades have resulted in these infrastructure components becoming highly complicated and interdependent. The development of tools which can aid in understanding this complex interaction amongst the infrastructure components is thus of paramount importance for being able to manage critical resources and carry out post-emergency recovery missions. The research work conducted as a part of this thesis studies the effects of fire (a calamitous event) on the electrical distribution network of a city. The study has been carried out on a test bed comprising a virtual city named Micropolis, which was modeled using a Geographic Information System (GIS) based software package. This report describes the design of a separate electrical test bed using Simulink, based on the GIS layout of the power distribution network of Micropolis. It also proposes a method of quantifying the damage caused by fire to the electrical network by means of a parameter called the Load Loss Damage Index (LLDI). Finally, it presents an innovative graph-theoretic approach for determining how to route power across faulted sections of the electrical network using a given set of normally open switches; the power is routed along a path of minimum impedance. The proposed methodologies are then tested by running numerous simulations on the Micropolis test bed, corresponding to different fire-spread scenarios.
The LLDI values generated from these simulation runs are then analyzed in order to determine the most damaging scenarios and to identify infrastructure components of the city which are most crucial in containing the damage caused by fire to the electrical network. The conclusions thereby drawn can give useful insights to emergency response personnel when they deal with real-life disasters.
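The minimum-impedance rerouting described in this abstract is, at its core, a shortest-path computation on the distribution network with branch impedances as edge weights. A minimal sketch using Dijkstra's algorithm (node names, impedance values, and the tie-switch topology below are hypothetical, not taken from the thesis):

```python
import heapq

def min_impedance_path(graph, source, target):
    """Dijkstra's shortest path where edge weights are branch impedances (ohms).

    graph: dict mapping node -> list of (neighbor, impedance) pairs;
    normally open tie switches appear as ordinary candidate edges.
    Returns (path, total_impedance), or (None, inf) if target is unreachable.
    """
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == target:
            break
        for nbr, z in graph.get(node, []):
            nd = d + z
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    if target not in dist:
        return None, float("inf")
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return path[::-1], dist[target]

# Hypothetical feeder section: two alternative routes to a de-energized
# load through the normally open switch "tie1".
grid = {
    "feeder": [("a", 0.5), ("b", 1.2)],
    "a": [("tie1", 0.3)],
    "b": [("tie1", 0.1)],
    "tie1": [("load7", 0.2)],
}
path, z = min_impedance_path(grid, "feeder", "load7")
# Chooses feeder -> a -> tie1 -> load7 (total 1.0 ohm) over the 1.5-ohm route.
```

In practice the thesis pairs this routing step with the LLDI computation, so the path search would run only over sections left healthy by the fire-spread scenario.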
APA, Harvard, Vancouver, ISO, and other styles
36

廖慶榮. "Efficient partitioning and load-balancing methods for finite element graphs on distributed memory multicomputer." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/53801561416999492772.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Ravindran, Rajeswaran Chockalingapuram. "Scheduling Heuristics for Maximizing the Output Quality of Iris Task Graphs in Multiprocessor Environment with Time and Energy Bounds." 2012. https://scholarworks.umass.edu/theses/826.

Full text
Abstract:
Embedded real-time applications are often subject to time and energy constraints. Real-time applications are usually characterized by logically separable sets of tasks with precedence constraints, where the computational effort behind each task in the system is responsible for a physical functionality of the embedded system. In this work we define theoretical models relating the quality of the physical functionality to the computational load of the tasks, and develop optimization problems to maximize the quality of the system subject to constraints such as time and energy. Specifically, the novelties in this work are threefold. First, it deals with maximizing the final output quality of a set of precedence-constrained tasks whose quality can be expressed with appropriate cost functions; we have developed heuristic scheduling algorithms for maximizing the quality of the final output of embedded applications. Second, it accounts for the fact that the output quality of a task in the system has a noticeable effect on the output quality of the other, dependent tasks. Finally, run-time characteristics of the tasks are modeled by simulating a distribution of run times, which yields an averaged output quality for the system rather than an unsampled quality based on arbitrary run times. Many real-time tasks fall into the IRIS (Increased Reward with Increased Service) category: such tasks can be prematurely terminated at the cost of poorer-quality output. In this work, we study the scheduling of IRIS tasks on multiprocessors. IRIS tasks may be dependent, with one task feeding other tasks in a Task Precedence Graph (TPG). Task output quality depends on the quality of the input data as well as on the execution time that is allowed. We study the allocation/scheduling of IRIS TPGs on multiprocessors to maximize output quality.
The heuristics developed can effectively reclaim resources when tasks finish earlier than their estimated worst-case execution time. Dynamic voltage scaling is used to manage energy consumption and keep it within specified bounds.
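The IRIS idea of "increased reward with increased service" can be illustrated with a concave reward curve and a greedy allocator that hands spare processor time to whichever task currently gains the most from it. A minimal sketch, not the thesis's heuristic: the exponential reward shape, the task parameters, and the fixed step size are all assumptions made here for illustration.

```python
import math

def reward(task, t):
    """Hypothetical concave IRIS reward: grows with service time t,
    with diminishing returns governed by the task's time constant tau."""
    return task["max_reward"] * (1.0 - math.exp(-t / task["tau"]))

def allocate_service(tasks, budget, step=1.0):
    """Give every task its mandatory service, then greedily grant extra
    time slices to the task with the largest marginal reward until the
    shared time budget is exhausted."""
    alloc = {t["name"]: t["mandatory"] for t in tasks}
    budget -= sum(alloc.values())
    while budget >= step:
        best = max(
            tasks,
            key=lambda t: reward(t, alloc[t["name"]] + step)
                          - reward(t, alloc[t["name"]]),
        )
        alloc[best["name"]] += step
        budget -= step
    return alloc

tasks = [
    {"name": "a", "max_reward": 10.0, "tau": 2.0, "mandatory": 1.0},
    {"name": "b", "max_reward": 10.0, "tau": 2.0, "mandatory": 1.0},
]
# With identical tasks and a budget of 6 time units, the greedy pass
# splits the optional service evenly.
alloc = allocate_service(tasks, budget=6.0)
```

A scheduler reclaiming time from early-finishing tasks, as the abstract describes, could re-run such an allocation over the remaining budget; coupling it to DVS would additionally weigh each slice's energy cost against its reward.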
APA, Harvard, Vancouver, ISO, and other styles