Dissertations / Theses on the topic 'Large graph'




Consult the top 50 dissertations / theses for your research on the topic 'Large graph.'




1

Henry, Tyson Rombauer. "Interactive graph layout: The exploration of large graphs." Diss., The University of Arizona, 1992. http://hdl.handle.net/10150/185833.

Full text
Abstract:
Directed and undirected graphs provide a natural notation for describing many fundamental structures of computer science. Unfortunately, graphs are hard to draw in an easy-to-read fashion. Traditional graph layout algorithms have focused on creating good layouts for the entire graph. This approach works well with smaller graphs, but often cannot produce readable layouts for large graphs. This dissertation presents a novel methodology for viewing large graphs. The basic concept is to allow the user to interactively navigate through large graphs, learning about them in appropriately small and concise pieces. The motivation for this approach is that large graphs contain too much information to be conveyed by a single canonical layout. For a user to be able to understand the data encoded in the graph, she must be able to carve up the graph into manageable pieces and then create custom layouts that match her current interests. An architecture is presented that supports graph exploration. It contains three new concepts for supporting interactive graph layout: interactive decomposition of large graphs, end-user-specified layout algorithms, and parameterized layout algorithms. The mechanism for creating custom layout algorithms provides the non-programming end-user with the power to create custom layouts that are well suited to the graph at hand. New layout algorithms are created by combining existing algorithms in a hierarchical structure. This method allows the user to create layouts that accurately reflect the current data set and her current interests. In order to explore a large graph, the user must be able to break the graph into smaller, more manageable pieces. A methodology is presented that allows the user to apply graph traversal algorithms to large graphs to carve out reasonably sized pieces. Graph traversal algorithms can be combined using a visual programming language. This provides the user with the control to select subgraphs that are of particular interest to her. The ability to parameterize layout algorithms provides the user with control over the layout process. The user can customize the generated layout by changing parameters to the layout algorithm. Layout algorithm parameterization is placed into an interactive framework that allows the user to iteratively fine-tune the generated layout. As a proof of concept, examples are drawn from a working prototype that incorporates this methodology.
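The idea of building new layouts by combining existing algorithms hierarchically can be sketched in a few lines. This Python sketch is illustrative only (Henry's actual system and its algorithm names are not reproduced here): one primitive layout places the groups, another places each group's members relative to its group.

```python
import math

def circle_layout(nodes, radius=1.0):
    """Place nodes evenly on a circle (one primitive layout algorithm)."""
    n = len(nodes)
    return {v: (radius * math.cos(2 * math.pi * i / n),
                radius * math.sin(2 * math.pi * i / n))
            for i, v in enumerate(nodes)}

def hierarchical_layout(groups, outer_radius=10.0, inner_radius=1.0):
    """Combine layouts hierarchically: lay out the groups with one
    algorithm, then each group's members with another, offset by the
    group's position."""
    pos = {}
    group_pos = circle_layout(list(groups), outer_radius)
    for g, members in groups.items():
        gx, gy = group_pos[g]
        for v, (x, y) in circle_layout(members, inner_radius).items():
            pos[v] = (gx + x, gy + y)
    return pos

# Hypothetical graph: two clusters of call-graph nodes.
groups = {"compilers": ["lex", "parse", "gen"], "runtime": ["gc", "jit"]}
print(len(hierarchical_layout(groups)))  # 5 node positions
```

Swapping either level for a different primitive (tree layout, force-directed, grid) is the kind of end-user composition the abstract describes.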
APA, Harvard, Vancouver, ISO, and other styles
2

Larsson, Patrik. "Analyzing and adapting graph algorithms for large persistent graphs." Thesis, Linköping University, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15422.

Full text
Abstract:

In this work, the graph database Neo4j developed by Neo Technology is presented together with some of its functionality for accessing data as a graph. This type of data access makes it possible to implement common graph algorithms on top of Neo4j. Examples of such algorithms are presented together with their theoretical backgrounds. These are mainly algorithms for finding shortest paths and algorithms for different graph measures such as centrality measures. The implementations that have been made are presented, together with complexity analyses and the performance measurements performed on them. The conclusions include that Neo4j is well suited for these types of implementations.
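The shortest-path algorithms the thesis implements on top of Neo4j can be illustrated in plain Python; the adjacency-list representation and names below are illustrative and have nothing to do with Neo4j's API.

```python
from collections import deque

def shortest_path(graph, source, target):
    """Unweighted shortest path via breadth-first search.
    `graph` maps each node to an iterable of neighbours."""
    parents = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            # Walk the parent pointers back to the source.
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for neighbour in graph[node]:
            if neighbour not in parents:
                parents[neighbour] = node
                queue.append(neighbour)
    return None  # target unreachable

g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(shortest_path(g, "a", "d"))  # ['a', 'b', 'd']
```

Weighted variants (Dijkstra, A*) follow the same shape with a priority queue in place of the FIFO queue.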

3

Zhang, Shijie. "Index-based Graph Querying and Matching in Large Graphs." Cleveland, Ohio : Case Western Reserve University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=case1263256028.

Full text
Abstract:
Thesis (Ph.D.)--Case Western Reserve University, 2010.
Title from PDF (viewed on 2010-04-12). Department of Electrical Engineering and Computer Science (EECS). Includes abstract. Includes bibliographical references and appendices. Available online via the OhioLINK ETD Center.
4

McConville, Ryan. "Clustering algorithms for large scale graph data." Thesis, Queen's University Belfast, 2017. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.727648.

Full text
5

Yekollu, Srikar. "Graph Based Regularization of Large Covariance Matrices." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1237243768.

Full text
6

Betkaoui, Brahim. "Reconfigurable computing for large-scale graph traversal algorithms." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/25049.

Full text
Abstract:
This thesis proposes a reconfigurable computing approach for supporting parallel processing in large-scale graph traversal algorithms. Our approach is based on a reconfigurable hardware architecture which exploits the capabilities of both FPGAs (Field-Programmable Gate Arrays) and a multi-bank parallel memory subsystem. The proposed methodology to accelerate graph traversal algorithms has been applied to three case studies, revealing that application-specific hardware customisations can benefit performance. A summary of our four contributions is as follows. First, a reconfigurable computing approach to accelerate large-scale graph traversal algorithms. We propose a reconfigurable hardware architecture which decouples computation and communication while keeping multiple memory requests in flight at any given time, taking advantage of the high bandwidth of multi-bank memory subsystems. Second, a demonstration of the effectiveness of our approach through two case studies: the breadth-first search algorithm, and a graphlet counting algorithm from bioinformatics. Both case studies involve graph traversal, but each of them adopts a different graph data representation. Third, a method for using on-chip memory resources in FPGAs to reduce off-chip memory accesses for accelerating graph traversal algorithms, through a case-study of the All-Pairs Shortest-Paths algorithm. This case study has been applied to process human brain network data. Fourth, an evaluation of an approach based on instruction-set extension for FPGA design against many-core GPUs (Graphics Processing Units), based on a set of benchmarks with different memory access characteristics. It is shown that while GPUs excel at streaming applications, the proposed approach can outperform GPUs in applications with poor locality characteristics, such as graph traversal problems.
7

Larsson, Carl-Johan. "Movie Recommendation System Using Large Scale Graph-Processing." Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-200601.

Full text
8

Swartz, Eric Allen. "2-arc transitive polygonal graphs of large girth and valency." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1243923530.

Full text
9

Tsalouchidou, Ioanna. "Temporal analysis of large dynamic graphs." Doctoral thesis, Universitat Pompeu Fabra, 2018. http://hdl.handle.net/10803/663755.

Full text
Abstract:
The objective of this thesis is to provide a temporal analysis of the structural and interaction dynamics of large evolving graphs. In this thesis we propose new definitions of important graph metrics that include the temporal dimension of dynamic graphs. We further extend three important data mining problems to the temporal setting: temporal graph summarization, temporal community search, and temporal betweenness centrality. Additionally, we propose distributed versions of all our algorithms, which help our techniques scale up to millions of vertices. Finally, we evaluate the validity of our methods in terms of efficiency and effectiveness with extensive experimentation on large-scale real-world graphs.
10

Yuan, Wenjun, and 袁文俊. "Flexgraph: flexible subgraph search in large graphs." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B46087539.

Full text
11

Bury, Marc [Verfasser], Beate [Akademischer Betreuer] Bollig, and Martin [Gutachter] Sauerhoff. "On graph algorithms for large-scale graphs / Marc Bury. Betreuer: Beate Bollig. Gutachter: Martin Sauerhoff." Dortmund : Universitätsbibliothek Dortmund, 2015. http://d-nb.info/1112468595/34.

Full text
12

Kruzick, Stephen M. "Optimal Graph Filter Design for Large-Scale Random Networks." Research Showcase @ CMU, 2018. http://repository.cmu.edu/dissertations/1165.

Full text
Abstract:
Graph signal processing analyzes signals supported on the nodes of a network with respect to a shift operator matrix that conforms to the graph structure. For shift-invariant graph filters, which are polynomial functions of the shift matrix, the filter response is defined by the value of the filter polynomial at the shift matrix eigenvalues. Thus, information regarding the spectral decomposition of the shift matrix plays an important role in filter design. However, under stochastic conditions leading to uncertain network structure, the eigenvalues of the shift matrix become random, complicating the filter design task. In such cases, empirical distribution functions built from the random matrix eigenvalues may exhibit deterministic limiting behavior that can be exploited for problems on large-scale random networks. Acceleration filters for distributed average consensus dynamics on random networks provide the application covered in this thesis work. The thesis discusses methods from random matrix theory appropriate for analyzing adjacency matrix spectral asymptotics for both directed and undirected random networks, introducing relevant theorems. Network distribution properties that allow computational simplification of these methods are developed, and the methods are applied to important classes of random network distributions. Subsequently, the thesis presents the main contributions, which consist of optimization problems for consensus acceleration filters based on the obtained asymptotic spectral density information. The presented methods cover several cases for the random network distribution, including both undirected and directed networks as well as both constant and switching random networks. These methods also cover two related optimization objectives, asymptotic convergence rate and graph total variation.
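The abstract's central fact, that a polynomial filter's response equals the filter polynomial evaluated at the shift-matrix eigenvalues, is easy to verify numerically. A small numpy sketch on an illustrative 4-cycle shift matrix, with arbitrary coefficients:

```python
import numpy as np

# Shift matrix: adjacency of an undirected 4-cycle.
S = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

c = [0.5, 0.3, 0.2]  # filter h(S) = c0*I + c1*S + c2*S^2 (arbitrary coefficients)
H = sum(ck * np.linalg.matrix_power(S, k) for k, ck in enumerate(c))

# The filter response is the polynomial evaluated at the shift eigenvalues:
lam = np.linalg.eigvalsh(S)                 # eigenvalues of the 4-cycle: -2, 0, 0, 2
response = np.polyval(c[::-1], lam)         # h(lambda) for each eigenvalue
assert np.allclose(np.sort(np.linalg.eigvalsh(H)), np.sort(response))
print(np.sort(response))                    # [0.5 0.5 0.7 1.9]
```

When the network is random, `lam` itself is random, which is exactly why the thesis turns to asymptotic spectral densities for the design step.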
13

Sun, Jiawen. "The GraphGrind framework : fast graph analytics on large shared-memory systems." Thesis, Queen's University Belfast, 2018. https://pure.qub.ac.uk/portal/en/theses/the-graphgrind-framework-fast-graph-analytics-on-large-sharedmemory-systems(e1eb006f-3a68-4d05-91fe-961d04b42694).html.

Full text
Abstract:
As shared memory systems support terabyte-sized main memory, they provide an opportunity to perform efficient graph analytics on a single machine. Graph analytics is characterised by frequent synchronisation, which is addressed in part by shared memory systems. However, performance is limited by load imbalance and poor memory locality, which originate in the irregular structure of small-world graphs. This dissertation demonstrates how graph partitioning can be used to optimise (i) load balance, (ii) Non-Uniform Memory Access (NUMA) locality and (iii) temporal locality of graph partitioning in shared memory systems. The developed techniques are implemented in GraphGrind, a new shared memory graph analytics framework. First, this dissertation shows that heuristic edge-balanced partitioning results in an imbalance in the number of vertices per partition. Thus, load imbalance exists between partitions, either for loops iterating over vertices or for loops iterating over edges. To address this issue, this dissertation introduces a classification of algorithms to distinguish whether they algorithmically benefit from edge-balanced or vertex-balanced partitioning. This classification supports the adaptation of partitions to the characteristics of graph algorithms. Evaluation in GraphGrind shows that this approach outperforms state-of-the-art graph analytics frameworks for shared memory, including Ligra by 1.46x on average and Polymer by 1.16x on average, using a variety of graph algorithms and datasets. Second, this dissertation demonstrates that increasing the number of graph partitions is effective in improving temporal locality due to smaller working sets. However, the increasing number of partitions results in vertex replication in some graph data structures. This dissertation adopts a graph layout that is immune to vertex replication, and designs an automatic graph traversal algorithm that extends the previously established graph traversal heuristics to a 3-way graph layout choice. This new algorithm furthermore depends upon the classification of graph algorithms introduced in the first part of the work. These techniques achieve an average speedup of 1.79x over Ligra and 1.42x over Polymer. Finally, this dissertation presents a graph ordering algorithm to challenge the widely accepted heuristic of balancing the number of edges per partition and minimising edge or vertex cut. This algorithm balances the number of edges per partition as well as the number of unique destinations of those edges. It balances edges and vertices for graphs with a power-law degree distribution. Moreover, this dissertation shows that the performance of graph ordering depends upon the characteristics of graph analytics frameworks, such as NUMA-awareness. This graph ordering algorithm achieves an average speedup of 1.87x over Ligra and 1.51x over Polymer.
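The edge-balanced partitioning the abstract starts from can be sketched minimally: split vertices into contiguous ranges so each range carries roughly equal edge endpoints. This is an illustrative sketch, not GraphGrind's implementation, and it also exhibits the vertex imbalance the dissertation identifies.

```python
def edge_balanced_ranges(degrees, num_parts):
    """Split vertices 0..n-1 into contiguous ranges so that each range
    holds roughly the same total degree (edge endpoints)."""
    total = sum(degrees)
    target = total / num_parts
    ranges, start, acc = [], 0, 0
    for v, d in enumerate(degrees):
        acc += d
        if acc >= target and len(ranges) < num_parts - 1:
            ranges.append((start, v + 1))  # half-open vertex range
            start, acc = v + 1, 0
    ranges.append((start, len(degrees)))
    return ranges

# A skewed degree sequence: one hub plus many low-degree vertices.
degs = [10, 1, 1, 1, 1, 1, 1, 1, 1, 2]
print(edge_balanced_ranges(degs, 2))  # [(0, 1), (1, 10)]
```

Here the two partitions carry 10 edge endpoints each, yet hold 1 and 9 vertices respectively, so a loop over vertices is badly imbalanced even though a loop over edges is balanced, which is the tension the algorithm classification addresses.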
14

Hwang, Heasoo. "Dynamic link-based ranking over large-scale graph-structured data." Diss., [La Jolla] : University of California, San Diego, 2010. http://wwwlib.umi.com/cr/ucsd/fullcit?p3404629.

Full text
Abstract:
Thesis (Ph.D.)--University of California, San Diego, 2010.
Title from first page of PDF file (viewed June 11, 2010). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (leaves 93-97).
15

Lee, Daryl Hsu Ann. "Toward large-graph comparison measures to understand Internet topology dynamics." Thesis, Monterey, California: Naval Postgraduate School, 2013. http://hdl.handle.net/10945/37658.

Full text
Abstract:
Approved for public release; distribution is unlimited
By measuring network changes, we can get a better understanding of a network. Extending this to the Internet, we are able to understand the constantly occurring changes on an international scale. In this research, we propose a measure that conveys the relative magnitude of the change between two networks (i.e., Internet topologies). The measure is normalised and intuitively gives an indication of whether the change is small or large. We start off by applying this measure to standard common graphs, as well as random graphs. These graphs were first simulated and the measurements taken; the results were then proved theoretically and corresponded to the simulation results, thus demonstrating correctness. For case studies, we compared actual implemented networks with those inferred by probes. This comparison was done to study how accurate the probes were in discovering the actual network topology. Finally, we conducted real-world experiments by applying the measurements to certain segments of the Internet. We observed that the measurements indeed pick up events which significantly influenced structural changes to the Internet.
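The thesis's specific measure is not reproduced here, but a measure in the same spirit, normalised to [0, 1] and small for small changes, can be sketched as a Jaccard-style distance on edge sets (illustrative only, not the thesis's definition):

```python
def edge_change(edges_a, edges_b):
    """Normalised change between two graphs on the same vertex set:
    |symmetric difference| / |union| of the edge sets, in [0, 1].
    0 means identical edge sets; 1 means completely disjoint."""
    a, b = set(edges_a), set(edges_b)
    union = a | b
    if not union:
        return 0.0
    return len(a ^ b) / len(union)

# A 3-edge path whose last edge moved between two snapshots.
g1 = [(1, 2), (2, 3), (3, 4)]
g2 = [(1, 2), (2, 3), (4, 5)]
print(edge_change(g1, g2))  # 0.5: two of the four distinct edges changed
```

Applied to successive snapshots of an inferred topology, spikes in such a measure flag the structurally significant events the abstract describes.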
16

Gandy, Clayton A. "Borromean: Preserving Binary Node Attribute Distributions in Large Graph Generations." Scholar Commons, 2018. https://scholarcommons.usf.edu/etd/7293.

Full text
Abstract:
Real graph datasets are important for many science domains, from understanding epidemics to modeling traffic congestion. To facilitate access to realistic graph datasets, researchers proposed various graph generators typically aimed at representing particular graph properties. While many such graph generators exist, there are few techniques for generating graphs where the nodes have binary attributes. Moreover, generating such graphs in which the distribution of the node attributes preserves real-world characteristics is still an open challenge. This thesis introduces Borromean, a graph generating algorithm that creates synthetic graphs with binary node attributes in which the attributes obey an attribute-specific joint degree distribution. We show experimentally the accuracy of the generated graphs in terms of graph size, distribution of attributes, and distance from the original joint degree distribution. We also designed a parallel version of Borromean in order to generate larger graphs and show its performance. Our experiments show that Borromean can generate graphs of hundreds of thousands of nodes in under 30 minutes, and these graphs preserve the distribution of binary node attributes within 40% on average.
17

Das, Sarma Atish. "Algorithms for large graphs." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34709.

Full text
18

Sariaydin, Ayse. "Computation And Analysis Of Spectra Of Large Networks With Directed Graphs." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612249/index.pdf.

Full text
Abstract:
Analysis of large networks in biology, science, technology and social systems has become very popular recently. These networks are mathematically represented as graphs. The task is then to extract relevant qualitative information about the empirical networks from the analysis of these graphs. It was found that a graph can be conveniently represented by the spectrum of a suitable difference operator, the normalized graph Laplacian, which underlies diffusions and random walks on graphs. When applied to large networks, this requires computation of the spectrum of large matrices. The normalized Laplacian matrices representing large networks are usually sparse and unstructured. The thesis consists of a systematic evaluation of the available eigenvalue solvers for nonsymmetric large normalized Laplacian matrices describing directed graphs of empirical networks. The methods include several Krylov subspace algorithms such as the implicitly restarted Arnoldi method, the Krylov-Schur method and Jacobi-Davidson methods, which are freely available as standard packages written in MATLAB or in SLEPc, a library written in C++. The normalized graph Laplacian as employed here is normalized such that its spectrum is confined to the range [0, 2]. The eigenvalue distribution plays an important role in network analysis. The numerical task is then to determine the whole spectrum with appropriate eigenvalue solvers. A comparison of the existing eigenvalue solvers is done with Paley digraphs with known eigenvalues and with citation networks of sizes 400, 1100 and 4500 by computing the residuals.
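The nonsymmetric normalized Laplacian the thesis works with can be sketched on a toy digraph. The graph below is illustrative; for real networks the thesis uses sparse iterative solvers rather than the dense `eigvals` call shown here. Since the row-normalized adjacency D⁻¹A is row-stochastic, its eigenvalues lie in the unit disk, so the real parts of the Laplacian eigenvalues stay in [0, 2] even in the directed case.

```python
import numpy as np

# Directed 4-node graph given by its (nonsymmetric) adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)

d_out = A.sum(axis=1)                  # out-degrees (all nonzero here)
L = np.eye(4) - A / d_out[:, None]     # normalized Laplacian L = I - D^{-1} A

lam = np.linalg.eigvals(L)             # generally complex for digraphs
print(np.round(lam, 3))
# Eigenvalues of D^{-1}A lie in the unit disk, so Re(lambda) stays in [0, 2].
assert np.all((lam.real >= -1e-9) & (lam.real <= 2 + 1e-9))
```

This is the structure (large, sparse, nonsymmetric) that makes the choice of Krylov-subspace solver, ARPACK-style restarted Arnoldi versus Krylov-Schur versus Jacobi-Davidson, a genuine benchmarking question.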
19

Gong, Nan. "Using Map-Reduce for Large Scale Analysis of Graph-Based Data." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-102822.

Full text
Abstract:
As social networks have gained in popularity, maintaining and processing the social network graph information using graph algorithms has become an essential source for discovering potential features of the graph. The escalating size of social networks has made it impossible to process the huge graphs on a single machine at a "real-time" level of execution. This thesis looks into representing and distributing graph-based algorithms using the Map-Reduce model. Graph-based algorithms are discussed in the beginning. Then, several distributed graph computing infrastructures are reviewed, followed by an introduction to Map-Reduce and some graph computation toolkits based on the Map-Reduce model. By reviewing the background and related work, graph-based algorithms are categorized, and the adaptation of graph-based algorithms to the Map-Reduce model is discussed. Two particular algorithms, MCL and DBSCAN, are chosen to be designed using the Map-Reduce model and implemented using Hadoop. A new matrix multiplication method is proposed while designing MCL. DBSCAN is reformulated into a connectivity problem using a filter method, and a Kingdom Expansion Game is proposed to do fast expansion. The scalability and performance of these new designs are evaluated. Conclusions are drawn from the literature study, practical design experience and evaluation data. Some suggestions for designing graph-based algorithms using the Map-Reduce model are also given in the end.
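The MCL algorithm that the thesis redesigns for Map-Reduce alternates two matrix operations, expansion (a matrix power) and inflation (an elementwise power followed by column normalisation). A single-machine dense sketch, with illustrative parameters and nothing of the thesis's Hadoop design:

```python
import numpy as np

def mcl(adj, inflation=2.0, iters=20):
    """Markov Cluster algorithm sketch: alternate expansion (matrix
    squaring) and inflation (elementwise power + column normalisation)."""
    M = adj + np.eye(len(adj))   # self-loops stabilise the random walk
    M = M / M.sum(axis=0)        # make columns stochastic
    for _ in range(iters):
        M = M @ M                # expansion: spread flow along paths
        M = M ** inflation       # inflation: strengthen strong currents
        M = M / M.sum(axis=0)    # renormalise columns
    return M

# Two triangles joined by a single bridge edge (2-3).
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0

M = mcl(A)
# Columns within the same triangle should converge to a shared attractor
# row, and the two triangles to different attractors.
clusters = [frozenset(np.nonzero(M[:, j] > 1e-6)[0]) for j in range(6)]
print(clusters)
```

The costly step at scale is the matrix squaring, which is exactly where the thesis's new Map-Reduce matrix multiplication method comes in.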
20

Tanaka, Ryokichi. "Large deviation on a covering graph with group of polynomial growth." 京都大学 (Kyoto University), 2012. http://hdl.handle.net/2433/152534.

Full text
21

Erdem, Ozge. "Computation And Analysis Of Spectra Of Large Undirected Networks." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612233/index.pdf.

Full text
Abstract:
Many interacting complex systems in biology, physics, technology and social systems can be represented in the form of large networks. These large networks are mathematically represented by graphs. A graph is usually represented by its adjacency or Laplacian matrix. Important features of the underlying structure and dynamics of the networks can be extracted from the analysis of the spectrum of these graphs. Spectral analysis of the so-called normalized Laplacian of large networks became popular in recent years. The Laplacian matrices of empirical networks are unstructured large sparse matrices. The aim of this thesis is the comparison of different eigenvalue solvers for large sparse symmetric matrices which arise from the graph-theoretical representation of undirected networks. The spectrum of the normalized Laplacian is in the interval [0, 2] and the multiplicity of the eigenvalue 1 plays a particularly important role for network analysis. Moreover, the spectral analysis of protein-protein interaction networks has revealed that these networks have a different distribution type than other model networks such as scale-free networks. In this respect, eigenvalue solvers implementing the well-known implicitly restarted Arnoldi method, the Lanczos method, and the Krylov-Schur and Jacobi-Davidson methods are investigated. They exist as MATLAB routines and are included in some freely available packages. The performances of the eigenvalue solvers PEIG, AHBEIGS, IRBLEIGS, EIGIFP, LANEIG, JDQR and JDCG in MATLAB and of the library SLEPc in C++ were tested for matrices of size between 100 and 13000 and are compared in terms of accuracy and computing time. The accuracy of the eigenvalue solvers is validated for Paley graphs with known eigenvalues, and the solvers are compared for large empirical networks using residual plots and spectral density plots.
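Both the [0, 2] confinement and the role of the eigenvalue 1 are easy to observe on a toy graph: for a star, the symmetric normalized Laplacian has eigenvalue 1 with multiplicity n − 2. A plain numpy sketch (dense and illustrative; the thesis benchmarks sparse iterative solvers for this task):

```python
import numpy as np

n = 5                               # star graph: hub 0 linked to 1..4
A = np.zeros((n, n))
A[0, 1:] = A[1:, 0] = 1.0

d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt   # symmetric normalized Laplacian

lam = np.linalg.eigvalsh(L)
print(np.round(lam, 6))             # approximately [0, 1, 1, 1, 2]
assert -1e-9 <= lam.min() and lam.max() <= 2 + 1e-9
```

High multiplicity at 1 is exactly the feature whose accurate resolution stresses the Lanczos-type solvers compared in the thesis.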
22

Yang, Xintian. "Towards large-scale network analytics." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1343680930.

Full text
23

Li, Yifan. "Edge partitioning of large graphs." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066346/document.

Full text
Abstract:
In this thesis, we mainly focus on a fundamental problem, graph partitioning, in the context of the unexpectedly fast growth of data sources, ranging from social networks to the internet of things. In particular, to conquer intractable properties existing in many graphs, e.g. power-law degree distributions, we apply the novel vertex-cut fashion, instead of the traditional edge-cut method, to achieve balanced workload in distributed graph processing. Besides, to reduce the inter-partition communication cost, we present a block-based edge partition method which can efficiently exploit the locality underlying graphical structures to enhance the execution of graph algorithms. With this method, the overhead of both communication and runtime can be decreased greatly compared to existing approaches. The challenges arising in big graphs also include their high variety. As we know, most real-life graph applications produce heterogeneous datasets, in which the vertices and/or edges are allowed to have different types or labels. A large number of graph mining algorithms have also been proposed with much concern for label attributes. For this reason, our work is extended to multi-layer graphs, taking into account edge closeness and label distribution during the partitioning process. Its outstanding performance on real-world datasets is finally demonstrated.
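The vertex-cut idea, partitioning edges and replicating the vertices they touch, can be sketched with a greedy assigner in the spirit of PowerGraph-style systems. The slack rule and all parameters here are illustrative, not the thesis's block-based method.

```python
from collections import defaultdict

def vertex_cut(edges, num_parts, balance_slack=1):
    """Assign each *edge* to a partition (vertex-cut). Prefer partitions
    where an endpoint already has a replica, falling back to the globally
    least-loaded partition when that preference would hurt load balance."""
    loads = [0] * num_parts
    replicas = defaultdict(set)      # vertex -> partitions holding a copy
    assignment = {}
    for u, v in edges:
        candidates = (replicas[u] | replicas[v]) or set(range(num_parts))
        best = min(candidates, key=lambda p: loads[p])
        if loads[best] > min(loads) + balance_slack:
            best = loads.index(min(loads))   # rebalance: ignore replicas
        assignment[(u, v)] = best
        loads[best] += 1
        replicas[u].add(best)
        replicas[v].add(best)
    return assignment, replicas

# A small power-law-ish graph: vertex 0 is a hub.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)]
assignment, replicas = vertex_cut(edges, 2)
factor = sum(len(ps) for ps in replicas.values()) / len(replicas)
print(factor)  # replication factor: average copies per vertex, here 1.5
```

The replication factor is the usual quality metric for vertex-cuts: only replicated vertices (the hub, here) incur inter-partition communication, which is what the block-based method in the thesis drives down further.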
24

Maryokhin, Tymur. "Data dissemination in large-cardinality social graphs." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-48268.

Full text
Abstract:
Near real-time event streams are a key feature in many popular social media applications. These types of applications allow users to selectively follow event streams to receive a curated list of real-time events from various sources. Due to the emphasis on recency, relevance, personalization of content, and the highly variable cardinality of social subgraphs, it is extremely difficult to implement feed following at the scale of major social media applications. This leads to multiple architectural approaches, but no consensus has been reached as to what is considered to be an idiomatic solution. As of today, there are various theoretical approaches exploiting the dynamic nature of social graphs, but not all of them have been applied in practice. In this paper, large-cardinality graphs are placed in the context of existing research to highlight the exceptional data management challenges that are posed for large-scale real-time social media applications. This work outlines the key characteristics of data dissemination in large-cardinality social graphs, and overviews existing research and state-of-the-art approaches in industry, with the goal of stimulating further research in this direction.
25

Deri, Joya A. "Graph Signal Processing: Structure and Scalability to Massive Data Sets." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/725.

Full text
Abstract:
Large-scale networks are becoming more prevalent, with applications in healthcare systems, financial networks, social networks, and traffic systems. The detection of normal and abnormal behaviors (signals) in these systems presents a challenging problem. State-of-the-art approaches such as principal component analysis and graph signal processing address this problem using signal projections onto a space determined by an eigendecomposition or singular value decomposition. When a graph is directed, however, applying methods based on the graph Laplacian or singular value decomposition causes information from unidirectional edges to be lost. Here we present a novel formulation and graph signal processing framework that addresses this issue and that is well suited for application to extremely large, directed, sparse networks. In this thesis, we develop and demonstrate a graph Fourier transform for which the spectral components are the Jordan subspaces of the adjacency matrix. In addition to admitting a generalized Parseval’s identity, this transform yields graph equivalence classes that can simplify the computation of the graph Fourier transform over certain networks. Exploration of these equivalence classes provides the intuition for an inexact graph Fourier transform method that dramatically reduces computation time over real-world networks with nontrivial Jordan subspaces. We apply our inexact method to four years of New York City taxi trajectories (61 GB after preprocessing) over the NYC road network (6,400 nodes, 14,000 directed edges). We discuss optimization strategies that reduce the computation time of taxi trajectories from raw data by orders of magnitude: from 3,000 days to less than one day. Our method yields a fine-grained analysis that pinpoints the same locations as the original method while reducing computation time and decreasing energy dispersal among spectral components. 
This capability to rapidly reduce raw traffic data to meaningful features has important ramifications for city planning and emergency vehicle routing.
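In the special case where the adjacency matrix is diagonalizable, each Jordan subspace is spanned by a single eigenvector, and the adjacency-based graph Fourier transform described above reduces to an eigenvector expansion. Below is a minimal sketch of that special case only (the directed 4-cycle and the signal are invented for illustration; this is not the thesis's inexact method):

```python
import numpy as np

# Directed 4-node cycle: its adjacency matrix is diagonalizable, so each
# Jordan subspace is spanned by a single eigenvector.
A = np.zeros((4, 4))
for i in range(4):
    A[i, (i + 1) % 4] = 1.0

# Spectral components: eigenvectors of the adjacency matrix (not a Laplacian),
# which keeps the information carried by directed edges.
eigvals, V = np.linalg.eig(A)

signal = np.array([1.0, 2.0, 3.0, 4.0])

# Graph Fourier transform: coefficients of the signal in the eigenvector
# basis; multiplying by V reassembles the signal (generalized Parseval setting).
spectrum = np.linalg.solve(V, signal.astype(complex))
reconstructed = V @ spectrum

print(np.allclose(reconstructed, signal))  # the transform is invertible
```

The thesis's contribution lies in the general case, where nontrivial Jordan subspaces make this projection step far more expensive, motivating the inexact transform.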
APA, Harvard, Vancouver, ISO, and other styles
26

Colmenares, Hugo Armando Gualdron. "Block-based and structure-based techniques for large-scale graph processing and visualization." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-23032016-145752/.

Full text
Abstract:
Data analysis techniques can be useful in decision-making processes, when patterns of interest indicate trends in specific domains. Such trends might support evaluation, definition of alternatives, or prediction of events. Currently, datasets have increased in size and complexity, posing challenges to modern hardware resources. In the case of large datasets that can be represented as graphs, scalable processing and visualization are of current concern. Distributed frameworks are commonly used to deal with such data, but the deployment and management of computational clusters can be complex, demanding technical and financial resources that can be prohibitive in several scenarios. It is therefore desirable to design efficient techniques for processing and visualizing large-scale graphs that optimize the hardware resources of a single computational node. To this end, we developed a visualization technique named StructMatrix to find interesting insights in real-life graphs. In addition, we proposed a graph processing framework, M-Flash, that uses a novel bimodal block processing (BBP) strategy to boost computation speed by minimizing I/O cost. Our results show that our visualization technique allows efficient and interactive exploration of big graphs, and that M-Flash significantly outperformed all state-of-the-art approaches based on secondary memory. Our contributions have been validated in peer-reviewed venues, demonstrating the potential of our findings in fostering the analytical possibilities related to large-graph data domains.
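The I/O-minimizing intuition behind block-based processing can be illustrated in a toy form: stream the edge list in fixed-size blocks and aggregate per block, so each pass keeps only a bounded working set in memory. This sketch is not the M-Flash implementation; the edge list, block size, and the out-degree kernel are invented stand-ins for disk-resident blocks and a real graph computation:

```python
from collections import defaultdict

def out_degrees_blocked(edges, block_size=2):
    """Aggregate out-degrees one edge block at a time.

    In a real out-of-core engine each block would be read from disk;
    here an in-memory edge list stands in for that stream.
    """
    degrees = defaultdict(int)
    for start in range(0, len(edges), block_size):
        block = edges[start:start + block_size]  # bounded working set
        for src, _dst in block:
            degrees[src] += 1
    return dict(degrees)

edges = [(0, 1), (0, 2), (1, 2), (2, 0), (2, 1)]
print(out_degrees_blocked(edges))  # {0: 2, 1: 1, 2: 2}
```

Because each block is processed independently and only the small aggregate survives between blocks, the memory footprint is set by the block size rather than the graph size.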
APA, Harvard, Vancouver, ISO, and other styles
27

Voroshilova, Alexandra. "Comparison study on graph sampling algorithms for interactive visualizations of large-scale networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254656.

Full text
Abstract:
Networks are present in computer science, sociology, biology, and neuroscience, as well as in applied fields such as transportation, communication, and the medical industry. Growing volumes of collected data are pushing the scalability and performance requirements of graph algorithms, and at the same time a need arises for a deeper understanding of these structures through visualization. Network diagrams or graph drawings can facilitate the understanding of data, making it intuitive to identify the largest clusters, the number of connected components, and the overall structure, and to detect anomalies, which is not achievable through textual or matrix representations. The aim of this study was to evaluate approaches that would enable visualization of large-scale peer-to-peer live video streaming networks. The visualization of such large-scale graphs has technical limitations, which can be overcome by filtering important structural data from the networks. In this study, four sampling algorithms for graph reduction were applied to large overlay peer-to-peer network graphs and compared. The four algorithms cover different approaches: selecting links with the highest weight, selecting nodes with the highest cumulative weight, using betweenness centrality metrics, and constructing a focus-based tree. Through the evaluation process, it was discovered that the algorithm based on betweenness centrality approximation offers the best results. Finally, for each of the algorithms in comparison, the resulting sampled graphs were visualized using a force-directed layout with a two-step loading approach to depict their effect on the representation of the graphs.
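The first of the four strategies, keeping only the highest-weight links, is simple enough to sketch directly (the edge weights and retention budget below are invented for illustration):

```python
def sample_by_edge_weight(weighted_edges, keep):
    """Keep the `keep` heaviest edges and the nodes they touch."""
    top = sorted(weighted_edges, key=lambda e: e[2], reverse=True)[:keep]
    nodes = {u for u, v, _w in top} | {v for u, v, _w in top}
    return nodes, top

# Toy overlay network: (source, target, weight) triples.
edges = [("a", "b", 5.0), ("b", "c", 1.0), ("c", "d", 4.0), ("a", "d", 0.5)]
nodes, kept = sample_by_edge_weight(edges, keep=2)
print(sorted(nodes))  # ['a', 'b', 'c', 'd']
```

The other three strategies replace the sort key: cumulative node weight, approximate betweenness centrality, or membership in a focus-based tree.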
APA, Harvard, Vancouver, ISO, and other styles
28

Maxwell, Sean T. "Efficient Enumeration of all Connected Induced Subgraphs of a Large Undirected Graph." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1386363081.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Zloch, Matthäus [Verfasser], Stefan [Gutachter] Conrad, and Stefan [Gutachter] Dietze. "Facilitating Knowledge Graph Analysis – Acquisition and Large-Scale Analysis of Topological Graph Measures / Matthäus Zloch ; Gutachter: Stefan Conrad, Stefan Dietze." Düsseldorf : Universitäts- und Landesbibliothek der Heinrich-Heine-Universität Düsseldorf, 2021. http://d-nb.info/1227038534/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Sariyuce, Ahmet Erdem. "Fast Algorithms for Large-Scale Network Analytics." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1429825578.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Keys, Richard Toney. "Large grain data flow graph construction and restructuring utilizing the ECOS Workstation System." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA289632.

Full text
Abstract:
Thesis (Master of Computer Science) Naval Postgraduate School, September 1994.
Thesis advisor(s): Amr Zaky, S. B. Shukla. "September 1994." Title cover: Large ... ECOS Workstation system. Bibliography: p. 77-78. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
32

Gkantsidis, Christos. "Algorithmic performance of large-scale distributed networks a spectral method approach /." Available online, Georgia Institute of Technology, 2006, 2006. http://etd.gatech.edu/theses/available/etd-12062005-141254/.

Full text
Abstract:
Thesis (Ph. D.)--Computing, Georgia Institute of Technology, 2006.
Mihail, Milena, Committee Chair ; Ammar, Mostafa, Committee Member ; Dovrolis, Constantinos, Committee Member ; Faloutsos, Michalis, Committee Member ; Zegura, Ellen, Committee Member.
APA, Harvard, Vancouver, ISO, and other styles
33

Giese, Holger, and Stephan Hildebrandt. "Efficient model synchronization of large-scale models." Universität Potsdam, 2009. http://opus.kobv.de/ubp/volltexte/2009/2928/.

Full text
Abstract:
Model-driven software development requires techniques to consistently propagate modifications between different related models in order to realize its full potential. For large-scale models, efficiency is essential in this respect. In this paper, we present an improved model synchronization algorithm based on triple graph grammars that is highly efficient and can therefore synchronize even large-scale models sufficiently fast. We show that the overall algorithm has optimal complexity if the rule matching dominates the overall execution, and we further present extensive measurements that demonstrate the efficiency of the presented model transformation and synchronization technique.
APA, Harvard, Vancouver, ISO, and other styles
34

Vadamalai, Subramanian Viswanathan. "Lost In The Crowd: Are Large Social Graphs Inherently Indistinguishable?" Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6972.

Full text
Abstract:
Real social graph datasets are fundamental to understanding a variety of phenomena, such as epidemics, crowd management, and political uprisings, yet releasing digital recordings of such datasets exposes the participants to privacy violations. A safer approach to making real social network topologies available is to anonymize them by modifying the graph structure enough to decouple node identity from social ties, while preserving the graph characteristics in aggregate. At scale, this approach comes with a significant challenge in computational complexity. This thesis questions the need to structurally anonymize very large graphs. Intuitively, the larger the graph, the easier it is for an individual to be "lost in the crowd". On the other hand, at scale new topological structures may emerge, and those can expose individual nodes in ways that smaller structures do not. To answer this question, this work introduces a set of metrics for measuring the indistinguishability of nodes in large-scale social networks independent of attack models, and shows that different graphs have different levels of inherent node indistinguishability. Moreover, we show that when varying the size of a graph, the inherent node indistinguishability decreases with the size of the graph: the larger the graph, the less indistinguishable its nodes.
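As a hedged illustration of the idea only, not the thesis's metrics: one crude, attack-model-independent proxy for node indistinguishability is the size of the structural equivalence class a node hides in, here approximated by its degree class. A node that shares its degree with many others is harder to single out:

```python
from collections import Counter

def degree_class_sizes(adjacency):
    """For each node, the number of nodes sharing the same degree."""
    degrees = {v: len(nbrs) for v, nbrs in adjacency.items()}
    class_size = Counter(degrees.values())
    return {v: class_size[d] for v, d in degrees.items()}

# A star: the hub is unique (class size 1); the leaves hide among each other.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(degree_class_sizes(star))  # {0: 1, 1: 3, 2: 3, 3: 3}
```

Real metrics refine the equivalence relation well beyond degree (neighborhoods, subgraph signatures), which is exactly where large graphs can grow revealing structures.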
APA, Harvard, Vancouver, ISO, and other styles
35

張永泰 and Wing-tai Cheung. "Geometric programming and signal flow graph assisted design of interconnect and analog circuits." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B39558526.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Melo, Gerard de [Verfasser], and Gerhard [Akademischer Betreuer] Weikum. "Graph-based methods for large-scale multilingual knowledge integration / Gerard de Melo. Betreuer: Gerhard Weikum." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2011. http://d-nb.info/1051324246/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Diomedi, Riccardo. "Evaluation and Optimization of Execution Plans for Fixpoint Iterative Algorithms in Large-Scale Graph Processing." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-190102.

Full text
Abstract:
In large-scale graph processing, a fixpoint iterative algorithm repeatedly refines a set of parameter values until a fixed point is reached. To describe fixpoint iterative algorithms, template execution plans have been developed: an execution plan is a set of dataflow operators describing how the parameters are processed in order to implement such an algorithm. In the Bulk iterative execution plan, all parameters are recomputed in each iteration. The Dependency plan calculates dependencies among the vertices of a graph in order to update fewer parameters in each step; to do so, it performs an extra pre-processing phase. This phase, however, is a demanding task, especially in the first iterations, where the amount of data is considerable. We describe two methods to address the pre-processing step of the Dependency plan. The first exploits an optimizer that allows switching plans at runtime based on a cost model; we develop three cost models that take into account various features characterizing the plan cost. The second method introduces optimizations that bypass the pre-processing phase. These implementations are based on caching parameter values and are therefore memory-intensive. The experiments show that, while the alternative implementations of the Dependency plan do not give the expected results in terms of per-iteration time, the cost models refine the existing basic cost model and increase its accuracy. Furthermore, we demonstrate that switching plans at runtime is a successful strategy for decreasing the overall execution time and improving performance.
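The contrast between the two plan families can be sketched on a toy fixpoint computation, here a PageRank-style update invented for illustration: a bulk plan would recompute every vertex in each iteration, while the dependency-style worklist below revisits only vertices whose in-neighbours changed:

```python
def fixpoint(in_neighbors, damping=0.85, tol=1e-10):
    """Dependency-style fixpoint iteration on a PageRank-like update.

    in_neighbors maps each vertex to the set of vertices pointing at it.
    """
    ranks = {v: 1.0 for v in in_neighbors}
    out_deg = {v: 0 for v in in_neighbors}
    for v, srcs in in_neighbors.items():
        for s in srcs:
            out_deg[s] += 1

    active = set(in_neighbors)            # a bulk plan would always use all vertices
    while active:
        changed = set()
        for v in list(active):
            new = (1 - damping) + damping * sum(
                ranks[s] / out_deg[s] for s in in_neighbors[v])
            if abs(new - ranks[v]) > tol:
                ranks[v] = new
                changed.add(v)
        # only vertices that depend on a changed vertex are re-examined
        active = {v for v, srcs in in_neighbors.items() if srcs & changed}
    return ranks

# Vertex 0 points to 1 and 2; both point back to 0.
ranks = fixpoint({0: {1, 2}, 1: {0}, 2: {0}})
print(round(sum(ranks.values()), 6))  # 3.0
```

The pre-processing cost the abstract discusses corresponds to building and maintaining the dependency information (here, the trivial `in_neighbors` map) for a graph that does not fit comfortably in memory.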
APA, Harvard, Vancouver, ISO, and other styles
38

Gosnell, Shannon Leah. "A Characterization of Large (t,r)-Regular Graphs." Digital Commons @ East Tennessee State University, 2000. https://dc.etsu.edu/etd/7.

Full text
Abstract:
A graph G is a (t,r)-regular graph if every collection of t independent vertices is collectively adjacent to exactly r vertices. In this thesis, we will present a complete characterization of (t,r)-regular graphs of order n if n is sufficiently large. Furthermore, we will show that all graphs of this type are isomorphic to Ks + mKp where t(p - 1) + s = r.
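The defining condition can be checked by brute force on small graphs by enumerating all independent t-sets (exponential in t, so only for tiny examples; the 5-cycle test case below is our own, not from the thesis):

```python
from itertools import combinations

def is_t_r_regular(adjacency, t, r):
    """True if every independent set of t vertices is collectively
    adjacent to exactly r vertices."""
    for subset in combinations(list(adjacency), t):
        if any(v in adjacency[u] for u, v in combinations(subset, 2)):
            continue  # skip sets that are not independent
        neighborhood = set().union(*(adjacency[u] for u in subset))
        if len(neighborhood) != r:
            return False
    return True

# The 5-cycle: every independent pair is jointly adjacent to exactly 3 vertices.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(is_t_r_regular(c5, 2, 3))  # True
```

The thesis's result replaces this exhaustive check with a structural characterization (Ks + mKp) valid for all sufficiently large orders.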
APA, Harvard, Vancouver, ISO, and other styles
39

Sosnowski, Scott T. "Approximate Action Selection For Large, Coordinating, Multiagent Systems." Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1459468867.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Ippolito, Corey A. "Self-Assembling Decentralized Control Constructs for Large-Scale Variably-Interconnected Systems." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/740.

Full text
Abstract:
There is an emerging need to develop new techniques for control system design that better address the challenges posed by modern large-scale cyber-physical systems. These systems are often massive networks of interconnected and interoperating subsystems that fuse physical processes, embedded computation, automation technologies, and communication. The resulting problems are dimensionally large, exhibit significant time-varying structural variations during operation, and feature complex dynamics, constraints and objectives across local and global-system scales. These properties are difficult to address using traditional control theoretic methods without substantial loss of performance and robustness. To overcome these limitations, this dissertation presents new concepts and methods for control of modern large-scale variably-structured systems through self-assembling and self-configuring control constructs that allow for fundamental restructuring of the control system’s topology in response to the current system structure. We present the System Component Graph (SCG) formulation as a mathematical framework that generalizes and extends directed graph methods from decentralized control. We present algorithms, methods, and metrics for real-time decentralization and control-structure optimization, utilizing the inclusion principle for addressing interconnected overlapping dynamics and optimal linear-quadratic (LQ) methods for local decentralized subsystem control. Global system control and performance is achieved through a centralized planner that provides continuous real-time optimized trajectories as guidance command inputs to each subsystem. We present the method of Random Subcomplement Trees (RST) for pseudo-optimal real-time trajectory planning of large-scale systems which formalizes and extends the method of rapidly-exploring random trees in a control optimization framework. 
The RST method defines transformations from the higher-dimensional state space into an intermediate lower-dimensional search space, where optimal transitions between subspace states are defined. In the context of this approach, the resulting decentralized topology found within the SCG framework provides the RST subspace definition and requisite transformations, and optimal transitions in the search space are found through forward evaluation of the closed-loop decentralized subsystem dynamics. The methods developed in this thesis are applied to a set of real-world problems spanning various domains and demonstrate the application of these methods from first-principle modeling through control system analysis, design, implementation, and evaluation in experimental tests and simulation.
APA, Harvard, Vancouver, ISO, and other styles
41

Saad, Kristen M. "Bulk Synchronous Parallel Implementation of Percolation Centrality for Large Scale Graphs." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case149619082195966.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Ahmed, Bako. "Limit Shapes for qVolume Tilings of a Large Hexagon." Thesis, KTH, Matematik (Avd.), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280758.

Full text
Abstract:
Lozenges are polygons constructed by gluing two equilateral triangles along an edge. We can fit lozenge pieces together to form larger polygons, and given an appropriate polygon we can tile it with lozenges. Lozenge tilings of the semi-regular hexagon with sides A, B, C can be viewed as the 2D picture of a stack of cubes in an A x B x C box. In this project we investigate the typical shape of a tiling as the sides A, B, C of the box grow uniformly to infinity, and we consider two cases: the uniform case, where all tilings occur with equal probability, and the q^Volume case, where the probability of a tiling is proportional to q raised to the volume taken up by the corresponding stack of cubes. To investigate lozenge tilings we transform the problem into a question about families of non-intersecting paths on a corresponding graph representing the hexagon. Using the Lindström–Gessel–Viennot theorem we can define the probability of a non-intersecting path crossing a particular point in the hexagon, both for the uniform and the q^Volume case. In each case this probability function is connected to either the Hahn or the q-Hahn orthogonal polynomials. The orthogonal polynomials depend on the sides of the hexagon, and so we consider the asymptotic behaviour of the polynomials as the sides grow to infinity, using a result due to Kuijlaars and Van Assche. This determines the density of non-intersecting paths through every point in the hexagon, which we calculate, and yields an "Arctic curve" result showing that the six corners of the hexagon are (with probability one) tiled with just one type of lozenge.
"Lozenger" är polygoner konstruerade genom att limma två liksidiga trianglar längs en kant. Vi kan montera lozengstycken ihop för att bilda större polygoner och med en lämplig polygon kan vi lozengplatta den. Lozengplattor av den semi-liksidiga hexagonen med sidorna A, B, C kan ses som 2D-bilden av en stapel kuber i en A x B x C-box. I det här projektet undersöker vi den typiska formen på en platta när sidorna A, B, C på rutan växer till oändlighet och vi tar an två fall: Det likformiga fallet där alla plattor sker med samma sannolikhet och q ^ Volymfallet då sannolikheten för en platta är proportionell mot volymen som tas upp av motsvarande kubstapel. För att undersöka plattor förvandlar vi det till en fråga om samlingar av icke-korsande vägar på en motsvarande graf som representerar hexagonen. Med hjälp av satsen Lindström – Gessel – Viennot kan vi definiera sannolikheten för att en icke-korsande väg går genom en viss punkt i hexagonen både för det enhetliga och $ q $ -volymfallet. I båda fallen är dessa sannolikhetsfunktioner relaterade till Hahn eller $ q $ -Hahn ortogonala polynomer. Dessa ortogonala polynom beror på hexagonens sidor så vi betraktar polynomens asymptotiska beteende när sidorna växer till oändlighet genom ett resultat från Kuijlaars och Van Assche. Detta bestämmer densiteten för de icke-korsande vägarna genom varje punkt i det hexagon vi beräknar. Detta bestämmer också också en '' arktisk kurva '' som visar att hexagonens sex hörn är (med sannolikhet ett) plattade med bara en typ av lozeng.
APA, Harvard, Vancouver, ISO, and other styles
43

Gkantsidis, Christos. "Algorithmic performance of large-scale distributed networks: A spectral method approach." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/10420.

Full text
Abstract:
Complex networks like the Internet, peer-to-peer systems, and emerging sensor and ad-hoc networks are large distributed decentralized communication systems arising repeatedly in today's technology. In such networks it is critical to characterize network performance as the size of the network scales. The focus of my work is to relate basic network performance metrics to structural characteristics of underlying network topologies, and to develop protocols that reinforce and exploit desired structural characteristics. For the case of the Internet at the Autonomous System level, we relate the graph theoretic notions of conductance and spectrum to network clustering and network congestion. In particular, we show how spectral analysis can identify clusters, and how the presence of clusters affects congestion. This is important for network prediction and network simulation. For the case of peer-to-peer networks we relate conductance and spectral gap to the fundamental questions of searching and topology maintenance. We propose new protocols for maintaining peer-to-peer networks with good conductance and low network overhead. We compare the performance of the traditional method of search by flooding to searching by random walks. We isolate cases of practical interest, such as clustered and dynamic network topologies, where the latter have superior performance. The improvement in the performance can be directly quantified in terms of the conductance of the underlying topology. We introduce further hybrid search schemes, of which flooding and random walks are special instances, which aim to improve the performance of searching by using locally maintained information about the network topology.
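The dependence of mixing and search performance on the spectrum can be seen numerically: the gap between the two largest eigenvalue magnitudes of the random-walk matrix is large for well-connected graphs and collapses for poorly mixing ones. This is a toy computation on invented graphs, not the dissertation's analysis:

```python
import numpy as np

def spectral_gap(adj):
    """1 - |second largest eigenvalue| of the random-walk matrix P = D^-1 A."""
    A = np.array(adj, dtype=float)
    P = A / A.sum(axis=1, keepdims=True)
    mags = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    return 1.0 - mags[1]

# Complete graph on 6 nodes vs. the 6-cycle.
complete = [[0 if i == j else 1 for j in range(6)] for i in range(6)]
ring = [[1 if abs(i - j) % 6 in (1, 5) else 0 for j in range(6)] for i in range(6)]
print(spectral_gap(complete) > spectral_gap(ring))  # well-connected => larger gap
```

A larger gap means a random walk forgets its starting point faster, which is the mechanism behind the superiority of random-walk search on well-expanding topologies.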
APA, Harvard, Vancouver, ISO, and other styles
44

Swank, David P. "Large grain data-flow graph restructuring for EMSP signal processing benchmarks on the ECOS workstation system." Thesis, Monterey, California. Naval Postgraduate School, 1993. http://hdl.handle.net/10945/26609.

Full text
Abstract:
This thesis documents a procedure for implementing the Revolving Cylinder scheduling algorithm for parallel programs on the ECOS Workstation System (EWS), designed specifically by AT&T for simulation of the Enhanced Modular Signal Processor (EMSP) currently in use by the United States Navy. The Revolving Cylinder (RC) algorithm provides a methodology for forcing First Come First Served (FCFS) schedulers to follow a more systematic utilization of available resources. The methods of implementation used take advantage of the Graphical Editor (gred) to insert additional data dependencies into the program structure. The thesis utilizes applications written in Signal Processing Graph Notation (SPGN), viz., a simple correlator function and the active subroutine of the U.S. Navy Sonobuoy benchmark. Results for standard FCFS scheduling and RC-modified scheduling are presented for both. Special attention is paid throughout the thesis to enhancement of manufacturer-supplied documentation with regard to implementation of the non-standard RC structures. Impact of the algorithm on throughput and latency is discussed, as well as performance determination using the tools provided with the ECOS Workstation System.
APA, Harvard, Vancouver, ISO, and other styles
45

Chang, Ran. "Effective Graph-Based Content--Based Image Retrieval Systems for Large-Scale and Small-Scale Image Databases." DigitalCommons@USU, 2013. https://digitalcommons.usu.edu/etd/2123.

Full text
Abstract:
This dissertation proposes two novel manifold graph-based ranking systems for Content-Based Image Retrieval (CBIR). The two proposed systems exploit the synergism between relevance-feedback-based transductive short-term learning and semantic-feature-based long-term learning to improve retrieval performance. The proposed systems first apply an active learning mechanism to construct users' relevance feedback logs and extract high-level semantic features for each image. The systems then create manifold graphs by incorporating both low-level visual similarity and high-level semantic similarity to achieve more meaningful structures for the image space. Finally, asymmetric relevance vectors are created to propagate relevance scores of labeled images to unlabeled images via the manifold graphs. Extensive experimental results demonstrate that the two proposed systems outperform other state-of-the-art CBIR systems in the context of both correct and erroneous user feedback.
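The relevance-propagation step has the flavor of classic manifold ranking (the iteration f ← αSf + (1−α)y of Zhou et al.); the sketch below is that generic iteration, not the dissertation's exact system, and the affinity matrix and query vector are invented:

```python
import numpy as np

def manifold_rank(W, y, alpha=0.5, iters=200):
    """Propagate relevance scores y over a symmetric affinity graph W."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))       # normalization D^-1/2 W D^-1/2
    f = y.astype(float).copy()
    for _ in range(iters):
        f = alpha * (S @ f) + (1 - alpha) * y
    return f

# Three "images": 0 and 1 are visually similar; 2 is only weakly related.
W = np.array([[0.0, 1.0, 0.1],
              [1.0, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
y = np.array([1.0, 0.0, 0.0])             # the query marks image 0 as relevant
scores = manifold_rank(W, y)
print(scores[1] > scores[2])  # relevance flows to the similar image
```

The dissertation's systems differ by building the graph from combined visual and semantic similarity and by using asymmetric relevance vectors, but the propagation principle is the same.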
APA, Harvard, Vancouver, ISO, and other styles
46

Sabbir, Tarikul Alam Khan. "Topology sensitive algorithms for large scale uncapacitated covering problem." Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Mathematics and Computer Science, c2011, 2011. http://hdl.handle.net/10133/3235.

Full text
Abstract:
Solving NP-hard facility location problems in wireless network planning is a common scenario. In our research, we study the Covering problem, a well-known facility location problem with applications in wireless network deployment, focusing on networks with a sparse structure. First, we analyzed two heuristics for building a tree decomposition, based on vertex separators and on a perfect elimination order, and extended the vertex separator heuristic to improve its time performance. Second, we propose a dynamic programming algorithm based on the tree decomposition to solve the Covering problem optimally on the network, and we developed several heuristic techniques to speed up the algorithm. Experimental results show that one variant of the dynamic programming algorithm surpasses the performance of state-of-the-art commercial mathematical optimization software in several cases.
ix, 89 leaves : ill. ; 29 cm
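Dynamic programming over a tree decomposition generalizes the textbook DP on trees. As a hedged illustration of the underlying idea only (the thesis handles general sparse graphs, of which a plain tree is the simplest case), here is minimum vertex cover computed bottom-up on a rooted tree:

```python
def min_vertex_cover_tree(children, root=0):
    """DP on a rooted tree: for each node, best cover size when the node
    is taken vs. not taken; a not-taken node forces all its children in."""
    def solve(v):
        take, skip = 1, 0
        for c in children.get(v, []):
            t, s = solve(c)
            take += min(t, s)   # child is free to be in or out
            skip += t           # child must be in the cover
        return take, skip
    return min(solve(root))

# A path 0-1-2-3: an optimal cover is {1, 3}, of size 2.
print(min_vertex_cover_tree({0: [1], 1: [2], 2: [3]}))  # 2
```

On a tree decomposition of width w, the same pattern keeps one table per bag of at most w+1 vertices, which is what makes the covering DP tractable on sparse networks.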
APA, Harvard, Vancouver, ISO, and other styles
47

Zhang, Ning. "Shortest Path Queries in Very Large Spatial Databases." Thesis, University of Waterloo, 2001. http://hdl.handle.net/10012/1156.

Full text
Abstract:
Finding shortest paths in a graph has been studied for a long time, and there are many main-memory-based algorithms dealing with this problem. Among these, Dijkstra's shortest path algorithm is one of the most commonly used efficient algorithms for graphs with non-negative edge weights. Even more efficient algorithms have been developed recently for graphs with particular properties, such as edge weights that fall into an integer range. All of the mentioned algorithms require the graph to reside entirely in main memory. However, for very large graphs, such as the digital maps managed by Geographic Information Systems (GIS), this requirement cannot be satisfied in most cases, so the algorithms mentioned above are not appropriate. My objective in this thesis is to design and evaluate the performance of external-memory (disk-based) shortest path algorithms and data structures to solve the shortest path problem in very large digital maps. In particular, the following questions are studied: What have other researchers done on shortest path queries in very large digital maps? What could be improved on in previous works? How efficient are our new shortest path algorithms on digital maps, and what factors affect their efficiency? What can be done based on the algorithm? In this thesis, we give a disk-based Dijkstra-like algorithm to answer shortest path queries based on pre-processing information. Experiments based on our Java implementation are given to show what factors affect the running time of our algorithms.
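The in-memory baseline that any disk-based variant starts from, Dijkstra's algorithm with a binary heap, can be sketched as follows (the road graph is invented; the thesis's contribution is the pre-processing and external-memory layout around this core):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source; graph maps node -> [(neighbor, weight)]."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

roads = {"a": [("b", 4), ("c", 1)], "c": [("b", 2)], "b": [("d", 5)]}
print(dijkstra(roads, "a"))  # b is reached via c at distance 3; d at distance 8
```

Every step here assumes random access to the adjacency lists, which is exactly the assumption a disk-based algorithm must relax.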
APA, Harvard, Vancouver, ISO, and other styles
48

Cash, Heather. "A Library of Functions in C++ for Building and Manipulating Large Graphs." Honors in the Major Thesis, University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/1213.

Full text
Abstract:
This item is only available in print in the UCF Libraries. If this is your Honors Thesis, you can help us make it available online for use by researchers around the world by following the instructions on the distribution consent form at http://library.ucf.edu/Systems/DigitalInitiatives/DigitalCollections/InternetDistributionConsentAgreementForm.pdf You may also contact the project coordinator, Kerri Bottorff, at kerri.bottorff@ucf.edu for more information.
Bachelors
Engineering and Computer Science
Computer Science
APA, Harvard, Vancouver, ISO, and other styles
49

Mantrach, Amin. "Novel measures on directed graphs and applications to large-scale within-network classification." Doctoral thesis, Universite Libre de Bruxelles, 2010. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210033.

Full text
Abstract:
In recent years, networks have become a major data source in various fields ranging from the social sciences to the mathematical and physical sciences. Moreover, the size of available networks has grown substantially as well. This has brought a number of new challenges, such as the need for precise and intuitive measures to characterize and analyze large-scale networks in a reasonable time.

The first part of this thesis introduces a novel similarity measure between two nodes of a weighted directed graph: the sum-over-paths covariance. It has a clear and intuitive interpretation: two nodes are considered highly correlated if they often co-occur on the same, preferably short, paths. This measure depends on a probability distribution over the (usually infinite) countable set of paths through the graph, obtained by minimizing the total expected cost between all pairs of nodes while fixing the total relative entropy spread in the graph. The entropy parameter allows the probability distribution to be biased over a wide spectrum: from natural random walks (where all paths are equiprobable) to walks biased towards shortest paths. This measure is then applied to semi-supervised classification problems on medium-size networks and compared to state-of-the-art techniques.
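The biased path distribution can be illustrated numerically. The toy sketch below is our own assumption-laden reconstruction, not the thesis's formulation: it weights the edges of a natural random walk by exp(-theta * cost) and sums the resulting weights over all paths via the matrix inverse (I - W)^-1, so theta = 0 recovers the natural walk while large theta concentrates weight on shortest paths:

```python
import numpy as np

# Toy 3-node weighted DAG: C[i, j] = edge cost, np.inf = no edge
C = np.array([[np.inf, 1.0, 4.0],
              [np.inf, np.inf, 1.0],
              [np.inf, np.inf, np.inf]])

A = np.isfinite(C).astype(float)
deg = A.sum(axis=1, keepdims=True)
# Natural random-walk reference transition probabilities
P_ref = np.divide(A, deg, out=np.zeros_like(A), where=deg > 0)

theta = 2.0  # entropy / inverse-temperature parameter
# Boltzmann-weighted edge matrix; non-edges stay zero through P_ref
W = P_ref * np.exp(-theta * np.where(np.isfinite(C), C, 0.0))
# (I - W)^-1 = I + W + W^2 + ... sums these weights over all paths
Z = np.linalg.inv(np.eye(3) - W)

# The cheap two-hop path 0 -> 1 -> 2 (cost 2) outweighs the direct
# edge 0 -> 2 (cost 4) in the path-weight sum Z[0, 2]
```

Inverting a dense matrix is only feasible on small graphs, which is consistent with the abstract's restriction of this measure to medium-size networks.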

The second part introduces three novel algorithms for within-network classification in large-scale networks, i.e., classification of nodes in partially labeled graphs. The algorithms have a computing time linear in the number of edges, classes and steps and hence can be applied to large-scale networks. They obtained competitive results in comparison to state-of-the-art techniques on the large-scale U.S. patents citation network and on eight other data sets. Furthermore, during the thesis, we collected a novel benchmark data set, the U.S. patents citation network, which is now available to the community for benchmarking purposes.
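To give a feel for within-network classification on a partially labeled graph, the sketch below implements plain iterative label propagation with clamping. This is a generic stand-in, not one of the three algorithms of the thesis; note that each iteration costs O(edges * classes), the linear-per-step regime the abstract describes:

```python
import numpy as np

# Toy partially labeled graph: two loosely connected triangles
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
labels = np.array([0, -1, 0, -1, 1, 1])  # -1 = unlabeled node
n_classes = 2

# One-hot class scores; unlabeled nodes start at zero
F = np.zeros((len(labels), n_classes))
for i, y in enumerate(labels):
    if y >= 0:
        F[i, y] = 1.0

P = A / A.sum(axis=1, keepdims=True)  # row-normalized transition matrix
for _ in range(30):
    F = P @ F                         # diffuse scores along edges
    for i, y in enumerate(labels):    # clamp the known labels each step
        if y >= 0:
            F[i] = 0.0
            F[i, y] = 1.0

pred = F.argmax(axis=1)               # predicted class per node
```

Here the dense matrix product is only for readability; on a large sparse network the same update would use a sparse adjacency structure to keep each step linear in the number of edges.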

The final part of the thesis concerns the combination of a citation graph with information on its nodes. We show empirically that citation-based data provide better classification results than content-based data. We also show empirically that combining both sources of information (content-based and citation-based) should be considered when facing a text categorization problem. For instance, when classifying journal papers, extracting a citation graph beforehand may considerably boost performance. However, in another context, when directly classifying the nodes of the citation network, node features will not necessarily improve the results.

The theory, algorithms and applications presented in this thesis provide interesting perspectives in various fields.


Doctorat en Sciences
info:eu-repo/semantics/nonPublished

50

Pelayo, Melero Ignacio Manuel. "Some contributions from graph theory to the design and study of large and fault-tolerant interconnection networks." Doctoral thesis, Universitat Politècnica de Catalunya, 2000. http://hdl.handle.net/10803/7020.

Full text
Abstract:
This thesis addresses several questions related to the design and study of dense, reliable interconnection networks. Specifically, it contains four groups of contributions. First, a new list of dense graphs of diameter six is presented. Each of these graphs is designed through a particular kind of composition of the bipartite Moore graphs of diameter six with a family of complete graphs. New graphs are given up to maximum degree 14, although the method used can, in theory, produce dense graphs of diameter six whose maximum degree is a prime power minus one. Next, a reformulation of generalized compound graphs is carried out, from which the problem of their 1-vertex diameter vulnerability is addressed, showing that, in general, it is quasi-optimal.

Third, an analysis of connectivity and superconnectivity under conditions on the diameter and on the order of the family of generalized p-cycles is carried out, using Hamidoune's terminology. Finally, a series of results on connectivity, superconnectivity and extraconnectivity under diameter conditions is presented, based on the introduction of a single constructive proof method, called the progressive withdrawal algorithm, and on a new family of parameters defined from the parameter 1 of Fiol and Fábrega.