Dissertations / Theses on the topic 'Partitioning and placement algorithms'

1

Stan, Oana. "Placement of tasks under uncertainty on massively multicore architectures." Thesis, Compiègne, 2013. http://www.theses.fr/2013COMP2116/document.

Abstract:
This PhD thesis is devoted to the study of combinatorial optimization problems related to massively parallel embedded architectures when taking into account uncertain data (e.g. execution times). Our focus is on chance-constrained programs, where the objective is to find the best solution that is feasible with a preset probability guarantee. A qualitative analysis of the uncertain data we have to treat (dependent, multimodal, multidimensional random variables, difficult to characterize through classical distributions) has led us to design a non-parametric method, the so-called "robust binomial approach", valid whatever the joint distribution and based on robust optimization and statistical hypothesis testing. We also propose a methodology for adapting approximate algorithms for solving stochastic problems by integrating the robust binomial approach when verifying solution feasibility. The practical relevance of our approach is validated through two problems arising in the compilation of dataflow applications for manycore platforms. The first problem treats the stochastic partitioning of networks of processes on a fixed set of nodes, taking into account the load of each node and the uncertainty affecting the weights of the processes. For finding robust solutions, a semi-greedy iterative algorithm has been proposed, which allowed measuring the cost of the solutions and the gain in robustness with respect to the deterministic version of the problem. The second problem consists in studying the global placement and routing of dataflow applications on a clusterized architecture. The purpose is to place the processes on clusters while ensuring that a feasible routing of the communications between tasks exists; a GRASP heuristic has been conceived first for the deterministic case and afterwards extended to the chance-constrained variant of the problem.
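As an illustration of the statistical core described above, here is a minimal Python sketch (assuming SciPy is available; the function and parameter names are ours, not the thesis's) of a binomial feasibility check: a solution is accepted only if the number of sampled scenarios it satisfies is high enough to reject, at level alpha, the hypothesis that its true feasibility probability is below the target p0.

    from scipy.stats import binom

    def chance_feasible(solution_ok, scenarios, p0=0.9, alpha=0.05):
        # Count the sampled scenarios (draws of the uncertain data, e.g.
        # execution times) for which the candidate solution is feasible.
        successes = sum(1 for s in scenarios if solution_ok(s))
        n = len(scenarios)
        # Smallest k with P[Bin(n, p0) >= k] <= alpha: seeing k or more
        # successes would be surprising if the true probability were < p0.
        k = int(binom.ppf(1 - alpha, n, p0)) + 1
        return successes >= k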
2

URGESE, GIANVITO. "Computational Methods for Bioinformatics Analysis and Neuromorphic Computing." Doctoral thesis, Politecnico di Torino, 2016. http://hdl.handle.net/11583/2646486.

Abstract:
The latest biological discoveries and the exponential growth of more and more sophisticated biotechnologies led in the current century to a revolution that totally reshaped the concept of genetic study. This revolution, which began in the last decades, is still continuing thanks to the introduction of new technologies capable of producing a huge amount of biological data in a relatively short time and at a very low price compared to some decades ago. These new technologies are known as Next Generation Sequencing (NGS). These platforms perform massively parallel sequencing of both RNA and DNA molecules, making it possible to retrieve the nucleic acid sequence of millions of fragments of DNA or RNA in a single machine run. The introduction of such technologies rapidly changed the landscape of genetic research, providing the ability to answer questions with heretofore unimaginable accuracy and speed. Moreover, the advent of NGS, with the consequent need for ad-hoc strategies for data storage, sharing, and analysis, is transforming genetics into a big data research field. Indeed, the large amount of data coming from sequencing technologies and the complexity of biological processes call for novel computational tools (Bioinformatics tools) and informatics resources to exploit this kind of information and gain novel insights into human beings, living organisms, and the mechanisms of pathologies. At the same time, a new scientific discipline called Neuromorphic Computing has been established to develop SW/HW systems having brain-specific features, such as a high degree of parallelism and low power consumption. These platforms are usually employed to support the simulation of the nervous system, thus allowing the study of the mechanisms underlying brain function. In this scenario, my research program focused on the development of optimized HW/SW algorithms and tools to process the biological information from Bioinformatics and Neuromorphic studies. The main objective of the methodologies proposed in this thesis was to achieve a high level of sensitivity and specificity in data analysis while minimizing the computational time. To reach these milestones, some bottlenecks identified in state-of-the-art tools have been solved through the careful design of three new optimised algorithms. The work that led to this thesis is part of three collaborative projects: two concerning the design of Bioinformatics sequence alignment algorithms and one aimed at optimizing the resource usage of a Neuromorphic platform. In the next paragraphs, the projects are briefly introduced.

Dynamic Gap Selector Project. This project concerned the design and implementation of a new gap model implemented in dynamic programming sequence alignment algorithms. Smith-Waterman (S-W) and Needleman-Wunsch (N-W) are widespread methods to perform Local and Global alignments of biological sequences such as proteins, DNA and RNA molecules, which are represented as sequences of letters. Both algorithms make use of scoring procedures to evaluate matches and errors that can be encountered during the sequence alignment process. These scoring strategies are designed to consider insertions and deletions through the identification of gaps in the aligned sequences. The Affine gap model is considered the most accurate model for the alignment of biomolecules.
However, its application to the S-W and N-W algorithms is quite expensive both in terms of computational time and in terms of memory requirements when compared to less demanding models such as the Linear gap model. In order to overcome these drawbacks, an optimised version of the Affine gap model called Dynamic Gap Selector (DGS) has been developed. The alignment scores computed using DGS are very similar to those computed using the gold-standard Affine gap model. However, the use of this novel gap model during the S-W and N-W alignment procedures reduces the memory requirements by a factor of 3. Moreover, the DGS model accounts for a reduction by a factor of 2 in the number of operations required with respect to the standard Affine gap model.

isomiR-SEA Project. One of the most attractive research fields currently investigated by several interdisciplinary research teams is the study of small and medium RNA sequences with regulatory functions in the production of proteins. These RNA molecules are respectively called microRNAs (miRNAs) and long non-coding RNAs (lncRNAs). In the second project, an alignment algorithm specific to miRNA detection and characterization has been designed and implemented. miRNAs are a class of short RNAs (18-25 bases) that play essential roles in a variety of cellular processes such as development, metabolism, regulation of immunological response and tumorigenesis. Several tools have been developed in recent years to align and analyse the huge amount of data coming from the sequencing of short RNA molecules. However, these tools still lack accuracy and completeness because they use general alignment procedures that do not take into account the structural characteristics of miRNA molecules. Moreover, they are not able to detect specific miRNA variants, called isomiRs, that have recently been found to be relevant for miRNA target regulation. To overcome these limitations, a miRNA-based alignment algorithm has been designed and developed. The isomiR-SEA algorithm is specifically tailored to detect different miRNA variants (isomiRs) in RNA-Seq data and to provide users with a detailed picture of the isomiR spectrum characterizing the sample under investigation. The accuracy of the implemented alignment policy is reflected in the precise quantification of miRNAs and isomiRs, and in the detailed profiling of miRNA-target mRNA interaction sites. This information, hidden in raw miRNA sequencing data, can be very useful to properly characterize miRNAs and to adopt them as reliable biomarkers able to describe multifactorial pathologies such as cancer.

SNN Partitioning and Placement Project. In the Neuromorphic Computing field, SpiNNaker is one of the state-of-the-art massively parallel neuromorphic platforms. It is designed to simulate Spiking Neural Networks (SNNs), but it is characterized by several bottlenecks in the neuron partitioning and placement phases executed during the simulation configuration. In this activity, related to the European Flagship Human Brain Project, a top-down methodology has been developed to improve the scalability and reliability of SNN simulations on massively many-core and densely interconnected platforms. In this context, SNNs mimic brain activity by emulating spikes sent among neuron populations. Many-core platforms are emerging computing resources for achieving real-time SNN simulations.
Neurons are mapped to parallel cores and spikes are sent in the form of packets over the on-chip and off-chip network. However, due to the heterogeneity and complexity of neuron population activity, achieving an efficient exploitation of platform resources is a challenge, often impacting simulation reliability and limiting the biological network size. To address this challenge, the proposed methodology makes use of customized SNN configurations capable of extracting detailed profiling information about network usage of on-chip and off-chip resources, thus allowing the bottlenecks in the spike propagation system to be identified. These bottlenecks have then been considered during the SNN partitioning and placement of a graph describing the SNN interconnections onto the chips and cores available on the SpiNNaker board. The advantages of the proposed SNN partitioning and placement applied to SpiNNaker have been evaluated in terms of traffic reduction and consequent simulation reliability. The results demonstrate that it is possible to consistently reduce packet traffic and improve simulation reliability by means of an effective neuron placement.
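As a plain illustration of the traffic-aware placement idea in the last paragraph (the names, the traffic model and the greedy rule are our assumptions, not the methodology developed in the thesis), the sketch below assigns neuron populations to cores one by one, preferring the core where the estimated spike traffic crossing chip boundaries grows the least:

    def place_populations(populations, spikes, cores, chip_of, capacity):
        # spikes[(a, b)]: estimated spike rate between populations a and b
        # chip_of[core]: chip hosting the core; capacity: populations per core
        placement = {}
        load = {c: 0 for c in cores}

        def offchip_traffic(pop, core):
            total = 0.0
            for (a, b), rate in spikes.items():
                if pop not in (a, b):
                    continue
                other = b if a == pop else a
                if other in placement and chip_of[placement[other]] != chip_of[core]:
                    total += rate  # spike packets that would cross chips
            return total

        for pop in populations:
            free = [c for c in cores if load[c] < capacity]
            best = min(free, key=lambda c: offchip_traffic(pop, c))
            placement[pop] = best
            load[best] += 1
        return placement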
3

Trifunovic, Aleksandar. "Parallel algorithms for hypergraph partitioning." Thesis, Imperial College London, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.430537.

4

Aslan, Burak Galip Püskülcü Halis. "Heuristic container placement algorithms." [s.l.]: [s.n.], 2003. http://library.iyte.edu.tr/tezler/master/bilgisayaryazilimi/T000268.rar.

5

Bahoshy, Nimatallah M. "Parallelization of algorithms by explicit partitioning." Thesis, Loughborough University, 1992. https://dspace.lboro.ac.uk/2134/27004.

Abstract:
In order to utilize parallel computers, four approaches, broadly speaking, to the provision of parallel software have been followed: (1) automatic production of parallel code by parallelizing compilers, which act on sequential programs written in existing languages; (2) "add-on" features to existing languages that enable the programmer to make use of the parallel computer—these are specific to each machine; (3) full-blown parallel languages—these could be completely new languages, but usually they are derived from existing languages; (4) the provision of tools to aid the programmer in the detection of inherent parallelism in a given algorithm and in the design and implementation of parallel programs.
6

Zanetti, Luca. "Algorithms for partitioning well-clustered graphs." Thesis, University of Bristol, 2018. http://hdl.handle.net/1983/e6ba8929-6488-4277-b91b-4f4f7eda2b26.

Abstract:
Graphs occurring in the real world usually exhibit a high level of order and organisation: a higher concentration of edges within the same group of vertices, and a lower concentration among different groups. A common way to analyse these graphs is to partition the vertex set of a graph into clusters according to some connectivity measure. Graph clustering has been widely applied to many fields of computer science, from machine learning to bioinformatics and social network analysis. The focus of this thesis is to design and analyse algorithms for partitioning graphs presenting a strong cluster structure, which we call well-clustered. We first study the spectral properties of the Laplacian matrix of such graphs, and prove a structure theorem that relates the eigenvectors corresponding to the smallest eigenvalues of the Laplacian matrix of a graph to the structure of its clusters. We then harness this theorem to analyse Spectral Clustering, arguably the most popular graph clustering algorithm. We give for the first time approximation guarantees on the number of vertices misclassified by Spectral Clustering when applied to well-clustered graphs. Since Spectral Clustering needs to compute as many eigenvectors of the Laplacian matrix as the number of clusters in the graph, its performance deteriorates as this number grows. We present an algorithm that overcomes this issue without compromising accuracy. This algorithm runs in time nearly linear in the number of edges and independent of the number of clusters in the input graph. Finally, we tackle the problem of partitioning a graph whose description is distributed among many sites. We present a distributed algorithm that works in a few synchronous rounds, requires limited communication complexity, and achieves the same guarantees as Spectral Clustering as long as the clusters are balanced in size.
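For context, a compact sketch of textbook Spectral Clustering, the algorithm the thesis analyses (assuming SciPy and scikit-learn are available; this is the standard method, not the thesis's nearly-linear-time variant): embed the vertices using the k bottom eigenvectors of the normalized Laplacian, then cluster the embedded points with k-means.

    import numpy as np
    from scipy.sparse.csgraph import laplacian
    from scipy.linalg import eigh
    from sklearn.cluster import KMeans

    def spectral_clustering(adj, k):
        # adj: dense symmetric adjacency matrix (numpy array)
        L = laplacian(adj.astype(float), normed=True)
        # eigenvectors for the k smallest eigenvalues of the Laplacian
        _, vecs = eigh(L, subset_by_index=[0, k - 1])
        return KMeans(n_clusters=k, n_init=10).fit_predict(vecs)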
7

MUPPIDI, SRINIVAS REDDY. "GENETIC ALGORITHMS FOR MULTI-OBJECTIVE PARTITIONING." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1080827924.

8

Vijaya, Satya Ravi. "ALGORITHMS FOR HAPLOTYPE INFERENCE AND BLOCK PARTITIONING." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2490.

Abstract:
The completion of the human genome project in 2003 paved the way for studies to better understand and catalog variation in the human genome. The International HapMap Project was started in 2002 with the aim of identifying genetic variation in the human genome and studying the distribution of genetic variation across populations of individuals. The information collected by the HapMap project will enable researchers to associate genetic variations with phenotypic variations. Single Nucleotide Polymorphisms (SNPs) are loci in the genome where two individuals differ in a single base. It is estimated that there are approximately ten million SNPs in the human genome. These ten million SNPs are not completely independent of each other - blocks (contiguous regions) of neighboring SNPs on the same chromosome are inherited together. The pattern of SNPs on a block of the chromosome is called a haplotype. Each block might contain a large number of SNPs, but a small subset of these SNPs is sufficient to uniquely identify each haplotype in the block. The haplotype map or HapMap is a map of these haplotype blocks. Haplotypes, rather than individual SNP alleles, are expected to affect a disease phenotype. The human genome is diploid, meaning that in each cell there are two copies of each chromosome - i.e., each individual has two haplotypes in any region of the chromosome. With the current technology, the cost associated with empirically collecting haplotype data is prohibitively high. Therefore, the unordered bi-allelic genotype data is collected experimentally. The genotype data gives the two alleles in each SNP locus in an individual, but does not give information about which allele is on which copy of the chromosome. This necessitates computational techniques for inferring haplotypes from genotype data. This computational problem is called the haplotype inference problem. Many statistical approaches have been developed for the haplotype inference problem. Some of these statistical methods have been shown to be reasonably accurate on real genotype data. However, these techniques are very computation-intensive. With the International HapMap Project collecting information from nearly 10 million SNPs, and with association studies involving thousands of individuals being undertaken, there is a need for more efficient methods for haplotype inference. This dissertation is an effort to develop efficient perfect-phylogeny-based combinatorial algorithms for haplotype inference. The perfect phylogeny haplotyping (PPH) problem is to derive a set of haplotypes for a given set of genotypes with the condition that the haplotypes describe a perfect phylogeny. The perfect phylogeny approach to haplotype inference is applicable to the human genome due to the block structure of the human genome. An important contribution of this dissertation is an optimal O(nm) time algorithm for the PPH problem, where n is the number of genotypes and m is the number of SNPs involved. The complexity of the earlier algorithms for this problem was O(nm^2). The O(nm) complexity was achieved by applying some transformations on the input data and by making use of the FlexTree data structure that has been developed as part of this dissertation work, which represents all the possible PPH solutions for a given set of genotypes. Real genotype data does not always admit a perfect phylogeny, even within a block of the human genome.
Therefore, it is necessary to extend the perfect phylogeny approach to accommodate deviations from perfect phylogeny. Deviations from perfect phylogeny might occur because of recombination events and repeated or back mutations (also referred to as homoplasy events). Another contribution of this dissertation is a set of fixed-parameter tractable algorithms for constructing near-perfect phylogenies with homoplasy events. For the problem of constructing a near-perfect phylogeny with q homoplasy events, the algorithm presented here takes O(nm^2+m^(n+m)) time. Empirical analysis on simulated data shows that this algorithm produces more accurate results than PHASE (a popular haplotype inference program), while being approximately 1000 times faster than PHASE. Another important problem when dealing with real genotype or haplotype data is the presence of missing entries. The Incomplete Perfect Phylogeny (IPP) problem is to construct a perfect phylogeny on a set of haplotypes with missing entries. The Incomplete Perfect Phylogeny Haplotyping (IPPH) problem is to construct a perfect phylogeny on a set of genotypes with missing entries. Both the IPP and IPPH problems have been shown to be NP-hard. The earlier approaches for both of these problems dealt with restricted versions of the problem, where the root is either available or can be trivially reconstructed from the data, or certain assumptions were made about the data. We make some novel observations about these problems, and present efficient algorithms for their unrestricted versions. The algorithms have worst-case exponential time complexity, but have been shown to be very fast on practical instances of the problem.
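To fix ideas about the combinatorial task, a tiny illustrative check (our own encoding, not from the dissertation) that a pair of haplotypes resolves a genotype, with each site coded 0 or 1 when homozygous and 2 when heterozygous:

    def explains(h1, h2, genotype):
        for a, b, g in zip(h1, h2, genotype):
            if g == 2:                 # heterozygous site: alleles must differ
                if a == b:
                    return False
            elif not (a == b == g):    # homozygous site: both alleles equal g
                return False
        return True

    # genotype (0, 2, 2) is explained by the haplotype pair 010 / 001
    assert explains((0, 1, 0), (0, 0, 1), (0, 2, 2))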
Ph.D.
Engineering and Computer Science
Computer Science
9

Martin, Nicolas. "Network partitioning algorithms with scale-free objective." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALT001.

Abstract:
In light of the complexity induced by large-scale networks, the design of network partitioning algorithms and related problems is at the heart of this thesis. First, we raise a preliminary question on the structure of the partition itself: since the nodes inside a part are not necessarily connected, we want to quantify the drawbacks of imposing a connectivity constraint on each part. We then study the design of a partitioning algorithm that induces a reduced scale-free network, which allows us to take advantage of the inherent features of this type of network. We also focus on the properties to preserve in order to respect the physical and dynamical profile of the initial network. We then investigate how to partition a network between measured and unmeasured nodes while ensuring that the average value of the unmeasured nodes can be efficiently reconstructed. In particular, we show that, under certain hypotheses, this problem can be reduced to the detection of a subgraph with particular properties, and we propose methods to achieve this detection. Finally, three applications are presented. The first concerns road traffic: we apply the partitioning algorithm inducing scale-freeness to a large-scale urban traffic network and show that, thanks to the properties preserved through the partition, the reduced network can be used as an abstraction that efficiently emulates the initial network. The second and third applications deal with network epidemics: we show that the scale-freeness of the abstracting network can be used to build an efficient cure-assignment strategy within a population, and, in the last application, we take advantage of the result on average reconstruction to estimate the evolution of a disease on a large-scale network.
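A small illustrative sketch (ours, assuming networkx; not the thesis's algorithm) of the reduction step such a partitioning produces: collapse each part into one super-node, with edge weights counting the cut edges between parts.

    import networkx as nx

    def reduced_network(G, part_of):
        # part_of[node]: identifier of the part containing the node
        R = nx.Graph()
        R.add_nodes_from(set(part_of.values()))
        for u, v in G.edges():
            a, b = part_of[u], part_of[v]
            if a != b:  # only edges crossing parts survive the reduction
                w = R[a][b]["weight"] + 1 if R.has_edge(a, b) else 1
                R.add_edge(a, b, weight=w)
        return R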
10

Liu, Huiqun. "Circuit partitioning algorithms for CAD VLSI design /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

11

Oladeji, Olamide. "Network partitioning algorithms for electricity consumer clustering." Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/122917.

Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, Technology and Policy Program, 2018
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 97-103).
In many developing countries, access to electricity remains a significant challenge. Electrification planners in these countries often have to make important decisions on the mode of electrification and the planning of electrical networks for those without access, while under resource constraints. To facilitate the achievement of universal energy access, the Reference Electrification Model (REM), a computational model capable of providing techno-economic analysis and data-driven decision support for these planning efforts, has been developed. Primary among REM's capabilities is the recommendation of the least-cost mode of electrification (i.e., by electric grid extension or off-grid systems) for non-electrified consumers in a region under analysis, while considering technical, economic and environmental constraints.
This is achieved by the identification of consumer clusters (either as clusters of off-grid microgrids, stand-alone systems or grid-extension projects) using underlying clustering methods in the model. This thesis focuses on the development and implementation of partitioning algorithms to achieve this purpose. Building on previously implemented efforts on the clustering and recommendation capabilities of REM, this work presents the development, analysis and performance evaluation of alternative approaches to the consumer clustering process, in comparison with REM's previously incorporated clustering methodology. Results show that the alternative methodology proposed can compare favorably with the hitherto implemented method in REM. Consequently, the integration of the proposed network partitioning procedures within REM, as well as some potential future research directions, is discussed.
Finally, this thesis concludes with a discourse on the social and regulatory aspects of energy access and electricity planning in developing countries, providing some perspectives on the development policies and business models that complement the technological contributions of this work.
12

Schwartz, Victor Scott. "Dynamic platform-independent meta-algorithms for graph-partitioning." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA356541.

Abstract:
Thesis (M.S. in Operations Research) Naval Postgraduate School, September 1998.
Thesis advisor(s): Gordon H. Bradley. "September 1998." Includes bibliographical references (p. 99-100). Also available online.
13

Mehrotra, Anuj. "Constrained graph partitioning : decomposition, polyhedral structure and algorithms." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/24234.

14

Gwalani, Harsha. "Spatial Partitioning Algorithms for Solving Location-Allocation Problems." Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1609062/.

Abstract:
This dissertation presents spatial partitioning algorithms to solve location-allocation problems. Location-allocation problems pertain to both the selection of facilities to serve demand at demand points and the assignment of demand points to the selected or known facilities. In the first part of this dissertation, we focus on the well-known and well-researched location-allocation problem, the "p-median problem", which is a distance-based location-allocation problem that involves the selection and allocation of p facilities for n demand points. We evaluate the performance of existing p-median heuristic algorithms and investigate the impact of the scale of the problem and of the spatial distribution of demand points on the performance of these algorithms. Based on the results from this comparative study, we present guidelines for location analysts to aid them in selecting the best heuristic and corresponding parameters depending on the problem at hand. Additionally, we found that existing heuristic algorithms are not suitable for solving large-scale p-median problems in a reasonable amount of time. We present a density-based decomposition methodology to solve large-scale p-median problems efficiently. This algorithm identifies dense clusters in the region and uses a MapReduce procedure to select facilities in the clustered regions independently and combine the solutions from the subproblems. Lastly, we present a novel greedy heuristic algorithm to solve the contiguity-constrained fixed-facility demand distribution problem. The objective of this problem is to create contiguous service areas for the facilities such that the demand at all facilities is uniform or proportional to the available resources, while the distance between demand points and facilities is minimized. The results in this research are shown in the context of creating emergency response plans for bio-emergencies. The algorithms are used to select Point of Dispensing (POD) locations (if not known) and map them to population regions to ensure that all affected individuals are assigned to a POD facility.
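As one concrete baseline of the kind such a comparative study covers, a hedged numpy sketch of a standard greedy heuristic for the p-median problem (illustrative; not necessarily one of the exact heuristics evaluated in the dissertation):

    import numpy as np

    def greedy_p_median(dist, p):
        # dist[i, j]: distance from demand point i to candidate facility j
        nearest = np.full(dist.shape[0], np.inf)
        chosen = []
        for _ in range(p):
            # total assignment cost if each candidate were opened next
            totals = np.minimum(nearest[:, None], dist).sum(axis=0)
            totals[chosen] = np.inf          # never reopen a chosen facility
            j = int(np.argmin(totals))
            chosen.append(j)
            nearest = np.minimum(nearest, dist[:, j])
        return chosen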
15

Chandrasekhar, Suresh. "Partitioning Methods and Algorithms for Configurable Computing Machines." Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/36909.

Abstract:
This thesis addresses the partitioning problem for configurable computing machines. Specifically, this thesis presents algorithms to partition chain-structured task graphs across configurable computing machines. The algorithms give optimal solutions for throughput and total execution time for these problems under constraints on area, pin count, and power consumption. The algorithms provide flexibility for applying these constraints while remaining polynomial in complexity. Proofs of correctness as well as an analysis of runtime complexity are given. Experiments are performed to illustrate the runtime of these algorithms.
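To make the chain-structured setting concrete, here is a textbook dynamic-programming sketch for one of the objectives named above (maximizing throughput, i.e. minimizing the slowest stage) under an area budget; it illustrates the problem setting and is not claimed to be the thesis's algorithm:

    def partition_chain(time, area, max_area):
        # time[i], area[i]: execution time and area of task i in the chain
        n = len(time)
        INF = float("inf")
        best = [0.0] + [INF] * n   # best[i]: minimal bottleneck of tasks < i
        for i in range(1, n + 1):
            seg_area = seg_time = 0.0
            for j in range(i, 0, -1):     # candidate stage: tasks j-1 .. i-1
                seg_area += area[j - 1]
                if seg_area > max_area:   # stage no longer fits the budget
                    break
                seg_time += time[j - 1]
                # bottleneck = slowest stage; throughput is its reciprocal
                best[i] = min(best[i], max(best[j - 1], seg_time))
        return best[n]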
Master of Science
16

Khan, Shoab Ahmad. "Logic and algorithm partitioning." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/13738.

17

Andersen, Reid. "Local algorithms for graph partitioning and finding dense subgraphs." Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2007. http://wwwlib.umi.com/cr/ucsd/fullcit?p3259059.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2007.
Title from first page of PDF file (viewed June 11, 2007). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 92-95).
18

Leija, Antonio M. "AN INVESTIGATION INTO PARTITIONING ALGORITHMS FOR AUTOMATIC HETEROGENEOUS COMPILERS." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1546.

Abstract:
Automatic Heterogeneous Compilers allow blended hardware-software solutions to be explored without the cost of a full-fledged design team, but limited research exists on the partitioning algorithms responsible for separating hardware and software. The purpose of this thesis is to implement various partitioning algorithms on the same automatic heterogeneous compiler platform to create an apples-to-apples comparison of AHC partitioning algorithms. Both estimated outcomes and actual outcomes for the solutions generated are studied and scored. The platform used to implement the algorithms is Cal Poly's own Twill compiler, created by Doug Gallatin last year. Twill's original partitioning algorithm is chosen along with two other partitioning algorithms: Tabu Search + Simulated Annealing (TSSA) and Genetic Search (GS). These algorithms are implemented inside Twill, and test bench input code from the CHStone HLS Benchmark tests is used as stimulus. Along with the algorithms' cost models, one key attribute of interest is the number of queues generated, as more cuts between hardware and software require more queues to pass data across partition crossings. These high communication costs can end up damaging the heterogeneous solution's performance. The Genetic, TSSA, and original Twill partitioning algorithms are all scored against each other's cost models as well, combining the fitness and performance cost models with queue counts to evaluate each partitioning algorithm. The solutions generated by TSSA are rated as better by both the cost model for the TSSA algorithm and the cost model for the Genetic algorithm, while producing low queue counts.
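For readers unfamiliar with the search methods named above, a generic simulated-annealing acceptance loop over hardware/software partitions is sketched below (a textbook illustration under assumed interfaces, not Twill's implementation); in this setting the cost function would combine the fitness/performance model with a queue-count penalty.

    import math, random

    def anneal(partition, cost, neighbor, T=1.0, cooling=0.995, steps=10000):
        best = cur = partition
        for _ in range(steps):
            cand = neighbor(cur)  # e.g. flip one node between HW and SW
            delta = cost(cand) - cost(cur)
            # always accept improvements; accept worsenings with
            # Boltzmann probability exp(-delta / T)
            if delta <= 0 or random.random() < math.exp(-delta / T):
                cur = cand
                if cost(cur) < cost(best):
                    best = cur
            T *= cooling
        return best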
19

Samaranayake, Thellamurege Meththa. "Force directed algorithms for integrated circuit module placement." Thesis, Manchester Metropolitan University, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.529267.

20

Zhuo, Yue. "Timing and Congestion Driven Algorithms for FPGA Placement." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5423/.

Abstract:
Placement is one of the most important steps in physical design for VLSI circuits. For field programmable gate arrays (FPGAs), the placement step determines the location of each logic block. I present novel timing- and congestion-driven placement algorithms for FPGAs with minimal runtime overhead. By predicting the post-routing timing-critical edges and estimating congestion accurately, this algorithm is able to simultaneously reduce the critical path delay and the minimum number of routing tracks. The core of the algorithm consists of a criticality-history record of connection edges and a congestion map. This approach is applied to the 20 largest Microelectronics Center of North Carolina (MCNC) benchmark circuits. Experimental results show that compared with the state-of-the-art FPGA place and route package, the Versatile Place and Route (VPR) suite, this algorithm yields an average of 8.1% reduction (maximum 30.5%) in the critical path delay and 5% reduction in channel width. Meanwhile, the average runtime of the algorithm is only 2.3X that of VPR.
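A hedged sketch of the kind of cost such an approach suggests (the abstract does not give the exact formulation; names and weights here are illustrative): each connection's wire-length estimate is scaled by its recorded criticality, and a congestion map penalizes placements in crowded regions.

    def placement_cost(edges, pos, criticality, congestion, alpha=0.5):
        # edges: (u, v) connection pairs; pos[u] = (x, y) block location
        # criticality[(u, v)]: criticality-history weight of the connection
        # congestion[(x, y)]: congestion-map estimate at a grid location
        cost = 0.0
        for u, v in edges:
            (x1, y1), (x2, y2) = pos[u], pos[v]
            wirelen = abs(x1 - x2) + abs(y1 - y2)   # bounding-box estimate
            timing = (1.0 + criticality[(u, v)]) * wirelen
            crowding = congestion[(x1, y1)] + congestion[(x2, y2)]
            cost += timing + alpha * crowding
        return cost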
21

Alemany, Juan A. "Data placement algorithms for news-on-demand servers /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/6982.

22

Serdar, Tatjana. "Automatic datapath tile placement and routing /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/6110.

23

Sazonova, Nadezhda A. "Parsimony-based genetic algorithm for haplotype resolution and block partitioning." Morgantown, W. Va. : [West Virginia University Libraries], 2007. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=5499.

Abstract:
Thesis (Ph. D.)--West Virginia University, 2007.
Title from document title page. Document formatted into pages; contains xi, 127 p. : ill. Includes abstract. Includes bibliographical references (p. 109-114).
24

Bobda, Christophe. "Synthesis of dataflow graphs for reconfigurable systems using temporal partitioning and temporal placement." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=968530567.

25

Prasad, Abhijit. "Power supply partitioning for placement of built-in current sensors for IDDQ testing." Texas A&M University, 2003. http://hdl.handle.net/1969.1/103.

Abstract:
IDDQ testing has been a very useful test screen for CMOS circuits. However, with each technology node the background leakage of chips is rapidly increasing. As a result, it is becoming more difficult to distinguish between faulty and fault-free chips using IDDQ testing. Power supply partitioning has been proposed to increase test resolution by partitioning the power supply network, such that each partition has a relatively small defect-free IDDQ level. However, at present no practical partitioning strategy is available. The contribution of this thesis is to present a practical power supply partitioning strategy. We formulate various versions of the power supply partitioning problem that are likely to be of interest depending on the constraints of the chip design. Solutions to all the variants of the problem are presented. The basic idea behind all solutions is to abstract the power topology of the chip as a flow network. We then use flow techniques to find the min-cut of the transformed network to get solutions to our various problem formulations. Experimental results for benchmark circuits verify the feasibility of our solution methodology. The problem formulations will give complete flexibility to a test engineer to decide which factors cannot be compromised (e.g., area of BICS, test quality, etc.) for a particular design and accordingly choose the appropriate problem formulation. The application of this work will be the first step in the placement of Built-In Current Sensors for IDDQ testing.
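A minimal sketch of the core flow idea (assuming networkx; node names and the capacity attribute are illustrative): abstract the power topology as a flow network and split the supply net along a minimum s-t cut.

    import networkx as nx

    def split_supply_net(G, source, sink):
        # G: directed graph of the power topology whose edges carry a
        # "capacity" attribute derived from the defect-free IDDQ levels
        cut_value, (side_s, side_t) = nx.minimum_cut(G, source, sink)
        return cut_value, side_s, side_t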
26

Huang, Dachuan. "Improving Performance in Data Processing Distributed Systems by Exploiting Data Placement and Partitioning." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1483312415041341.

27

Addanki, Ravichandra. "Learning generalizable device placement algorithms for distributed machine learning." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122746.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 47-50).
We present Placeto, a reinforcement learning (RL) approach to efficiently find device placements for distributed neural network training. Unlike prior approaches that only find a device placement for a specific computation graph, Placeto can learn generalizable device placement policies that can be applied to any graph. We propose two key ideas in our approach: (1) we represent the policy as performing iterative placement improvements, rather than outputting a placement in one shot; (2) we use graph embeddings to capture relevant information about the structure of the computation graph, without relying on node labels for indexing. These ideas allow Placeto to train efficiently and generalize to unseen graphs. Our experiments show that Placeto requires up to 6.1x fewer training steps to find placements that are on par with or better than the best placements found by prior approaches. Moreover, Placeto is able to learn a generalizable placement policy for any given family of graphs, which can then be used without any retraining to predict optimized placements for unseen graphs from the same family. This eliminates the large overhead incurred by prior RL approaches whose lack of generalizability necessitates re-training from scratch every time a new graph is to be placed.
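As a plain illustration of idea (1), iterative placement improvement (a hand-written greedy sweep under assumed interfaces; Placeto itself learns this policy with RL over graph embeddings rather than using a fixed rule):

    def improve_placement(ops, devices, runtime_estimate, placement):
        # one sweep over the computation graph: move each op to the device
        # that currently minimizes the estimated training-step runtime
        for op in ops:
            placement[op] = min(
                devices, key=lambda d: runtime_estimate({**placement, op: d}))
        return placement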
28

Enciso, Rosa. "Alliances in Graphs: Parameterized Algorithms and on Partitioning Series-Parallel Graphs." Doctoral diss., University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2479.

Abstract:
Alliances are used to denote agreements between members of a group with similar interests. Alliances can occur between nations, biological sequences, business cartels, and other entities. The notion of alliances in graphs was first introduced by Kristiansen, Hedetniemi, and Hedetniemi. A defensive alliance in a graph G=(V,E) is a non-empty set S ⊆ V where, for all x ∈ S, |N[x] ∩ S| ≥ |N(x) − S|. Consequently, every vertex that is a member of a defensive alliance has at least as many vertices defending it as there are vertices attacking it. Alliances can be used to model a variety of applications such as classification problems, communities in the web, distributed protocols, etc. Gerber and Kobler introduced the problem of partitioning a graph into strong defensive alliances for the first time as the "Satisfactory Graph Partitioning (SGP)" problem. In his dissertation, Shafique used the problem of partitioning a graph into alliances to model problems in data clustering. Decision problems for several types of alliances and alliance partitions have been shown to be NP-complete. However, because of their applicability, it is of interest to study methods to overcome the complexity of these problems. In this thesis, we will present a variety of algorithms for finding alliances in different families of graphs with a running time that is polynomial in terms of the size of the input, while allowing exponential running time as a function of a chosen parameter. This study is guided by the theory of parameterized complexity introduced by Rod Downey and Michael Fellows. In addition to parameterized algorithms for alliance-related problems, we study the partition of series-parallel graphs into alliances. The class of series-parallel graphs is a special class in graph theory, since many problems known to be NP-complete on general graphs have been shown to have polynomial-time algorithms on series-parallel graphs. For example, the problem of finding a minimum defensive alliance has been shown to have a linear-time algorithm when restricted to series-parallel graphs. Series-parallel graphs have also been the focus of study in a wide range of applications including CMOS layout and scheduling problems [ML86, Oud97]. Our motivation is driven by clustering properties that can be modeled with alliances. We observe that partitioning series-parallel graphs into alliances of roughly the same size can be used to partition task graphs to minimize the communication between processors and balance the workload of each processor. We present a characterization of series-parallel graphs that allow a partition into defensive alliances and a subclass of series-parallel graphs with satisfactory partitions.
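The defining inequality is easy to check directly; a small sketch (ours, assuming networkx) that tests whether a vertex set is a defensive alliance:

    import networkx as nx

    def is_defensive_alliance(G, S):
        S = set(S)
        for x in S:
            defenders = sum(1 for y in G[x] if y in S) + 1  # +1: x itself
            attackers = sum(1 for y in G[x] if y not in S)
            if defenders < attackers:   # |N[x] ∩ S| >= |N(x) − S| violated
                return False
        return bool(S)                  # a defensive alliance is non-empty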
Ph.D.
School of Electrical Engineering and Computer Science
Computer Science
29

Enciso, Rosa I. "Alliances in graphs parameterized algorithms and on partitioning series-parallel graphs." Orlando, Fla. : University of Central Florida, 2009. http://purl.fcla.edu/fcla/etd/CFE0002956.

30

Obenaus, Stefan Thomas Henning. "Fast placement algorithms for grids in two and three dimensions." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=37744.

Abstract:
The topic of this dissertation is the fast placement of hypergraph nodes into multi-dimensional grids. Hypergraphs can be used to model electronic circuits and network topologies, and grids can model the underlying architectures of electronic integrated circuits and optical systems. Thus, placement of hypergraphs into grids is an important problem as it may serve, for example, to reduce interconnection wire length in integrated circuits, or to minimize communication delays incurred in optical networks by routing around unnecessary bends. Fast placement techniques that complete in near-linear run time are desirable as circuit and network topologies are expected to continue to grow in size exponentially. The term "fast placement" thus refers to (a) the run time of the placement algorithm, and (b) the lower transmission delays in the resulting placement.
Original research contributions are provided in chapters 1, 4, and 5.
In chapter 1, the extension of relevant combinatorial grid placement problems to hypergraphs and multi-dimensional grids constitutes a modest original contribution.
Entirely original work is presented in chapter 4, where we introduce a novel force-directed iterative placement algorithm for two- and three-dimensional placements. This algorithm is designed to produce good minimum-wire-length placements for large circuits, for which no comparison results exist. The three-dimensional implementation of our placement algorithm pioneers the 3-D placement field, in which previously no efficient algorithms had been published. In order to avoid operating in a vacuum, we were forced to create a comparison algorithm based on an accepted standard placement technique. With this reference placer, we generated the first published 3-D placement results, and 2-D placement results for large benchmark circuits for which no published comparison results exist. Our placement algorithm outperforms the reference placer substantially in both run time and wire-length. Further, we use our algorithm to present some experimental evidence of the estimated wire-length savings when utilizing the third dimension.
Our final original contribution is a method presented in chapter 5 for efficiently placing a modern network topology, the star graph, into multi-dimensional grids such that all star graph neighbours are joined by a common grid line. The basic placement technique, originally published in [Obe95], is made efficient by compacting and contracting the bendless embedding in an effective manner.
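A hedged sketch of one iteration of a generic force-directed grid placer of the family discussed above (illustrative, not the dissertation's algorithm): each node is pulled toward the centroids of the nets it belongs to, then snapped back onto grid coordinates.

    from collections import defaultdict

    def force_directed_step(nets, pos):
        # nets: list of node lists (hyperedges); pos[v] = (x, y) grid coords
        pull = defaultdict(list)
        for net in nets:
            cx = sum(pos[v][0] for v in net) / len(net)
            cy = sum(pos[v][1] for v in net) / len(net)
            for v in net:
                pull[v].append((cx, cy))
        new_pos = {}
        for v, targets in pull.items():
            x = round(sum(t[0] for t in targets) / len(targets))
            y = round(sum(t[1] for t in targets) / len(targets))
            new_pos[v] = (x, y)  # a real placer would also resolve collisions
        return new_pos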
31

Kashyap, Srinivas Raaghav. "Algorithms for data placement, reconfiguration and monitoring in storage networks." College Park, Md.: University of Maryland, 2007. http://hdl.handle.net/1903/7583.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Dept. of Computer Science. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
32

Korupolu, Madhukar. "Placement algorithms for hierarchical cooperative caching and other location problems /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

Abstract:
Thesis (Ph. D.)--University of Texas at Austin, 1999.
Vita. Includes bibliographical references (leaves 143-150), Copy 2 (p. 135-142). Available also in a digital version from Dissertation Abstracts.
33

Lindmark, Gustav. "Methods and algorithms for control input placement in complex networks." Licentiate thesis, Linköpings universitet, Reglerteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-150886.

Abstract:
The control-theoretic notion of controllability captures the ability to guide a system's behavior toward a desired state with a suitable choice of inputs. Controllability of complex networks such as traffic networks, gene regulatory networks, power grids, etc., brings many opportunities. It could for instance enable improved efficiency in the functioning of a network or lead to entirely new applicative possibilities. However, when control theory is applied to complex networks like these, several challenges arise. This thesis considers some of these challenges; in particular, we investigate how control inputs should be placed in order to render a given network controllable at a minimum cost, taking as cost function either the number of control inputs or the energy that they must exert. We assume that each control input targets only one node (called a driver node) and is either unconstrained or unilateral. A unilateral control input is one that can assume either positive or negative values but not both. Motivated by the many applications where unilateral controls are common, we reformulate classical controllability results for this particular case into a more computationally efficient form that enables a large-scale analysis. We show that the unilateral controllability problem is to a high degree structural and derive theoretical lower bounds on the minimal number of unilateral control inputs from topological properties of the network, similar to the bounds that exist for the minimal number of unconstrained control inputs. Moreover, an algorithm is developed that constructs a near-minimal number of control inputs for a given network. When evaluated on various categories of random networks as well as a number of real-world networks, the algorithm often achieves the theoretical lower bounds. A network can be controllable in theory but not in practice when completely unreasonable amounts of control energy are required to steer it in some direction. For unconstrained control inputs we show that the control energy depends on the time constants of the modes of the network, and that the closer the eigenvalues are to the imaginary axis of the complex plane, the less energy is required for control. We also investigate the problem of placing driver nodes such that the control energy requirements are minimized (assuming that theoretical controllability is not an issue). For the special case of networks having all purely imaginary eigenvalues, several constructive algorithms for driver node placement are developed. In order to understand what determines the control energy in the general case with arbitrary eigenvalues, we define two centrality measures for the nodes based on energy flow considerations: the first centrality reflects the network impact of a node, and the second the ability to control it indirectly. It turns out that whether a node is suitable as a driver node or not largely depends on these two qualities. By combining the centralities into node rankings we obtain driver node placements that significantly reduce the control energy requirements and thereby improve the "practical degree of controllability".
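For reference, the baseline notion the abstract builds on can be checked with the classical Kalman rank test (standard textbook material in a short numpy sketch, not the thesis's structural or unilateral machinery):

    import numpy as np

    def is_controllable(A, B):
        # (A, B) is controllable iff [B, AB, ..., A^(n-1)B] has rank n
        n = A.shape[0]
        blocks = [B]
        for _ in range(n - 1):
            blocks.append(A @ blocks[-1])
        return np.linalg.matrix_rank(np.hstack(blocks)) == n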

Minor corrections are made in the electronic version of the thesis (Abstract).

34

Pardella, Gregor L. [Verfasser]. "Efficient Polynomial-Time Algorithms for Special Graph Partitioning Problems / Gregor L. Pardella." München : Verlag Dr. Hut, 2011. http://d-nb.info/1015604919/34.

35

Farrag, Lamis M. "Applications of graph partitioning algorithms to terrain visibility and shortest path problems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ32386.pdf.

36

Gadde, Srimanth. "Graph Partitioning Algorithms for Minimizing Inter-node Communication on a Distributed System." University of Toledo / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1376561814.

37

Menegola, Bruno. "A study of the k-way graph partitioning problem." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2012. http://hdl.handle.net/10183/67181.

Abstract:
The balanced graph partitioning problem asks to find a k-partition of the vertex set of an undirected graph which minimizes the total cut size and such that the size of no part exceeds ⌈εn/k⌉, for some ε ∈ [1, k). This dissertation studies this problem, providing a recent review of constructive heuristics, refinement heuristics and multilevel techniques. We also propose a new hybrid algorithm for solving this partitioning problem. We show how several good existing strategies for constructing and improving partitions, as well as some newly proposed ones, can be integrated to form a GRASP with path-relinking. We report computational experiments that show that this approach obtains solutions competitive with state-of-the-art partitioners. In particular, the hybrid algorithm is able to find new best known values on some of the smaller instances, indicating that it can make a qualitative contribution compared to existing methods.
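To make the objective and the balance constraint concrete, a small evaluation sketch (illustrative names, not the hybrid algorithm itself): compute the cut size of a k-partition and check the ceiling bound on part sizes.

    import math

    def evaluate_partition(edges, part_of, k, eps):
        # cut size: number of edges whose endpoints lie in different parts
        cut = sum(1 for u, v in edges if part_of[u] != part_of[v])
        n = len(part_of)
        sizes = {}
        for p in part_of.values():
            sizes[p] = sizes.get(p, 0) + 1
        balanced = max(sizes.values()) <= math.ceil(eps * n / k)
        return cut, balanced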
38

Cooklis, John T. "CPGA : a two-dimensional order-based algorithm for cell placement." Online version of thesis, 1991. http://hdl.handle.net/1850/10709.

39

Tiberg, Jesper. "An evaluation of algorithms for real-time strategic placement of sensors." Thesis, University of Skövde, School of Humanities and Informatics, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-865.

Full text
Abstract:

In this work, an investigation is performed into whether the algorithms Simultaneous Perturbation Stochastic Approximation (SPSA) and Virtual Force Algorithm (VFA) are suitable for real-time strategic placement of sensors in a dynamic environment. These algorithms are evaluated and compared to Simulated Annealing (SA), which has been used before in similar applications.

For the tests, a computer-based model of the sensors and the environment in which they are used is implemented. The model handles sensors, moving objects, specifications for the area the sensors are supposed to monitor, and all interaction between components within the model.

It was the belief of the authors that SPSA and VFA are suited for this kind of problem and that they have advantages over SA in complex scenarios. The results show this to be true, although SA seems to perform better for smaller numbers of sensors.
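
As background on the first of these algorithms, SPSA estimates a descent direction from only two cost evaluations per iteration, regardless of the number of parameters. A minimal sketch (our own illustration under standard gain-sequence assumptions; the cost function and constants are placeholders, not taken from the thesis):

    import random

    def spsa_minimize(cost, theta, iters=200, a=0.1, c=0.1):
        """Simultaneous Perturbation Stochastic Approximation (minimization)."""
        for k in range(1, iters + 1):
            ak = a / k            # decaying step-size gain
            ck = c / k ** 0.25    # decaying perturbation gain
            delta = [random.choice((-1, 1)) for _ in theta]  # random +/-1 directions
            plus = [t + ck * d for t, d in zip(theta, delta)]
            minus = [t - ck * d for t, d in zip(theta, delta)]
            diff = cost(plus) - cost(minus)  # two evaluations estimate the whole gradient
            theta = [t - ak * diff / (2 * ck * d) for t, d in zip(theta, delta)]
        return theta

    # Toy usage: steer one sensor at (x, y) toward a target point (3, 4)
    print(spsa_minimize(lambda p: (p[0] - 3) ** 2 + (p[1] - 4) ** 2, [0.0, 0.0]))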

APA, Harvard, Vancouver, ISO, and other styles
40

Eatmon, Dedra. "Evaluating Placement Algorithms with the DREAM Framework for Reconfigurable Hardware Devices." NCSU, 2000. http://www.lib.ncsu.edu/theses/available/etd-20000807-062016.

Full text
Abstract:

The field programmable gate array (FPGA) has become one of the most utilized configurable devices in the area of reconfigurable computing. FPGAs have a large amount of flexibility and provide a high degree of parallel computing capability. Since their introduction in the 1980's, these configurable logic devices have experienced a dramatic increase in programming capabilities and performance. Both factors have been significant in the changing roles of configurable devices in custom-computing machines. However, the improvements in capability and performance have not eliminated the issues related to efficient placement of applications on these devices.

This thesis presents a tool that evaluates placement algorithms for configurable logic devices. Written in Java, the tool is a framework in which various placement algorithms can be executed and the performance and quality of each placement evaluated using a cost function. Based on devices that support relocatable hardware modules (RHMs), the tool places modules with user-specified placement algorithms and provides feedback that can be used for a comparative analysis. The framework manages module mappings to the logic device that are both independent of each other and do not require pin-to-pin routing connections. Such a tool is valuable for the identification of effective placement algorithms for real-time placement of RHMs in run-time reconfigurable systems.

The Dynamic Resource Allocation and Management (DREAM) framework has been designed and developed to evaluate FPGA placement algorithms/heuristics. A portion of the evaluation is based on a simplistic cost function that calculates the amount of contiguous unused space remaining on the device in two dimensions. In our experiments, we use an FPGA logic core generator to produce several rectangular RHMs. In addition to the rectangular RHMs produced by the logic core generation tool, our framework can handle arbitrary circuit profiles. Several scenarios consisting of approximately 500 insertions/deletions of both rectangular and non-rectangular RHMs are used as test data sets for placement. Three placement algorithms are presented to demonstrate the flexibility of the framework. The first algorithm tested in the DREAM framework is a random placement algorithm. The second algorithm is an adaptation of a traditional best-fit algorithm that we call exhaustive search. The third algorithm is a modified version of first-fit. Future work will involve the development of additional placement algorithms and the incorporation of placement issues that relate to requests for central reconfigurable computing resources originating from a remote site.

The DREAM framework answers the call for a tool that is sorely needed to identify placement algorithms that can be effectively used for real-time placement. In addition to providing results that can be used to benchmark the performance of placement algorithms in real-time on a configurable system, this tool also allows the end user to store and load placements for future optimization. By taking full advantage of the partial and full dynamic reconfiguration capabilities of logic devices currently used in run-time reconfigurable systems, the goal of DREAM is to provide a tool with which the quality of placement algorithms can be quantified and compared.
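
To make the third of these algorithms concrete, a first-fit placer scans the device area in a fixed order and takes the first free region that fits the module. A minimal sketch over a boolean occupancy grid (our own illustration; DREAM's actual data structures and cost function are not reproduced here):

    def first_fit_place(grid, w, h):
        """Place a w x h rectangular module at the first free grid position.

        grid: 2D list of booleans (True = occupied), scanned row-major.
        Returns the (row, col) of the module's top-left corner, or None.
        """
        rows, cols = len(grid), len(grid[0])
        for r in range(rows - h + 1):
            for c in range(cols - w + 1):
                # Accept the position only if the whole footprint is free
                if all(not grid[r + dr][c + dc]
                       for dr in range(h) for dc in range(w)):
                    for dr in range(h):  # mark the footprint as occupied
                        for dc in range(w):
                            grid[r + dr][c + dc] = True
                    return r, c
        return None

    # Toy usage: an 8x8 device, inserting the same 3x2 module twice
    grid = [[False] * 8 for _ in range(8)]
    print(first_fit_place(grid, 3, 2))  # (0, 0)
    print(first_fit_place(grid, 3, 2))  # (0, 3)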

APA, Harvard, Vancouver, ISO, and other styles
41

Dong, Shaoqiang Agrawal Prathima. "Node placement, routing and localization algorithms for heterogeneous wireless sensor networks." Auburn, Ala, 2008. http://repo.lib.auburn.edu/EtdRoot/2008/SPRING/Electrical_and_Computer_Engineering/Thesis/Dong_Shaoqiang_40.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

HANDA, MANISH. "ONLINE PLACEMENT AND SCHEDULING ALGORITHMS AND METHODOLOGIES FOR RECONFIGURABLE COMPUTING SYSTEMS." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1100030953.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Sensen, Norbert. "Lower bounds and exact algorithms for the graph partitioning problem using multicommodity flows." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=971568243.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Jacobson, Jay Alan. "State space partitioning methods for solving a class of stochastic network problems." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/24928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Agnihotri, Ameya Ramesh. "Combinatorial optimization techniques for VLSI placement." Diss., Online access via UMI:, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
46

El-Darzi, E. "Methods for solving the set covering and set partitioning problems using graph theoretic (relaxation) algorithms." Thesis, Brunel University, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.381678.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Schultz, Xavier da Silveira Luís Fernando. "Turán Triangles, Cell Covers, Road Placement and Train Scheduling." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40569.

Full text
Abstract:
In this doctoral thesis, four questions related to computational geometry are considered. The first is an extremal combinatorics question regarding triangles with vertices taken from a set of n points in convex position. More precisely, two such triangles can exhibit eight distinct configurations and, for each subset of these configurations, we are interested in the asymptotics of how many triangles one can have while avoiding configurations in the subset (as a function of n). For most of these subsets, we answer this question optimally up to a logarithmic factor in the form of several Turán-type theorems. The answers for the remaining few were in turn tied to that of a long-standing open problem which appeared in the literature in the contexts of monotone matrices, tripod packing and 2-comparable sets.

The second problem, called Line Segment Covering (LSC), is about covering the cells of an arrangement of line segments with these line segments, where a segment covers the cells it is incident to. Recently, a PTAS, an APX-hardness proof and an FPT algorithm for variants of this problem have been shown. This paper, together with a slight new generalization of one of its results, is included as a chapter.

The third problem was posed at the Sixth Annual Workshop on Geometry and Graphs and concerns the design of road networks to minimize the maximum travel time between two point sets in the plane. Traveling outside the roads costs more time per unit of distance than traveling on the roads, and the total length of the roads cannot exceed a budget. When the point sets are the opposing sides of a unit square and the budget is at most √2, we were able to come up with a few network designs that cover all possible cases and are provably optimal. Furthermore, when both point sets are the boundary of a unit circle, we managed to disprove the natural conjecture that a concentric circle is an optimal design.

Finally, we consider collision-avoiding schedules of unit-velocity axis-aligned trains departing and arriving from points in the integer lattice. We prove a few surprising results on the existence of constant upper bounds on the maximum delay that are independent of the train network. In particular, these upper bounds are shown to always exist in two dimensions and to exist in three dimensions for unit-length trains. We also showed computationally that, for several scenarios, these upper bounds are tight.
APA, Harvard, Vancouver, ISO, and other styles
48

Mofya, Enock Chisonge. "Exact and Heuristic Algorithms for Solving the Generalized Minimum Filter Placement Problem." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1311%5F1%5Fm.pdf&type=application/pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Hsueh, Tsu-Yun, and 薛祖雲. "Partitioning Related Research and Placement-Aware Partitioning Algorithm." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/88545870473406146763.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Information Engineering
99 (ROC academic year)
As process technology progresses, there are more and more components on a chip, so how to partition a circuit effectively is a very important issue. Partitioning-based placement is a common placement method, and a good partitioning method can yield a good placement result. In our study of previous research, we found that the partitioning methods of [13][18][19] cannot effectively consider the connections of external nets from a global view. Therefore, we propose a new partitioning method which considers the connections of external nets and minimizes the number of cuts in the partitioning area. In earlier work, we did partitioning research for 3D ICs, implementing both two-way and k-way partitioning to compare their performance. As shown in the experimental results, on average, two-way partitioning produces 69% fewer TSVs (through-silicon vias) than k-way partitioning and runs 4.35x faster. Since two-way partitioning performs better, we use it to develop a placement-aware partitioning algorithm. We compare our algorithm against two other methods: partitioning with traditional terminal propagation and partitioning without terminal propagation. As shown in the experimental results, on average, the result of our algorithm is 10% and 37% shorter in terms of HPWL (half-perimeter wirelength) and 9% and 31% shorter in terms of STWL (Steiner tree wirelength), respectively. When we route the results with the golden router provided by the 2011 ISPD contest, on average, our wirelength is 36% and 55% shorter than that of the two methods, respectively.
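
For reference, HPWL, the first metric quoted above, is the standard half-perimeter bounding-box estimate of a net's wirelength. A minimal sketch (our own illustration, not code from the thesis):

    def hpwl(pins):
        """Half-perimeter wirelength of one net.

        pins: list of (x, y) pin coordinates; HPWL is the half-perimeter
        of the smallest axis-aligned box enclosing all of them.
        """
        xs = [x for x, _ in pins]
        ys = [y for _, y in pins]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    # Total HPWL of a design is the sum over all nets
    netlist = [[(0, 0), (3, 1)], [(1, 2), (4, 2), (2, 5)]]
    print(sum(hpwl(net) for net in netlist))  # 4 + 6 = 10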
APA, Harvard, Vancouver, ISO, and other styles
50

Yuan, Ming-Huan, and 袁明煥. "Global Placement by Circuit Partitioning with Hyperedge Clique Modeling." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/39026823847514684108.

Full text
Abstract:
Master's thesis
Yuan Ze University
Department of Information Engineering
93 (ROC academic year)
As VLSI technology advances, more transistors will be on a chip. In standard cell designs, transistors are grouped into cells of the same height to perform some basic logic function, and there can be more than a million cells in a VLSI design. Placement of these cells on a chip is increasingly difficult, yet it bears the responsibility of deciding the length of interconnects, which affects chip area, routability, performance, etc. Several placement algorithms have been proposed in the past; we choose the multi-level hierarchical technique for placement. hMetis is a software package for partitioning a large hypergraph based on multi-level hypergraph partitioning. It provides high-quality partitions for VLSI designs and is faster than other widely used partitioning algorithms. A clique is a complete subgraph contained in a larger graph, and cliques are often used to convert a hyperedge into edges so that a hypergraph can be transformed into a graph. It has been found that clique modeling of hyperedges in circuit partitioning can lead to a cut size as small as that obtained by hMetis without clique modeling, and up to a 15% reduction in external degree when a circuit is partitioned into a large number of parts. A placer is designed to place the results obtained from partitioning graphs with clique modeling. In our experiments, global placement by circuit partitioning with hyperedge clique modeling obtained smaller wirelength than the non-clique model: the results show up to a 10% improvement in wirelength with the clique model over the non-clique model.
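
To illustrate the clique model discussed above: each hyperedge (net) spanning a set of cells is replaced by an edge between every pair of those cells, typically down-weighted by the net size. A minimal sketch (our own illustration; the 1/(s-1) weighting is one common convention and not necessarily the one used in this thesis):

    from itertools import combinations

    def clique_expand(hyperedges):
        """Convert a hypergraph into a weighted graph via clique modeling.

        hyperedges: list of vertex lists (nets). A net of size s becomes a
        clique of s*(s-1)/2 edges, each of weight 1/(s-1), so that large
        nets do not dominate the cut objective. Parallel edges add up.
        """
        weights = {}
        for net in hyperedges:
            s = len(net)
            if s < 2:
                continue  # single-pin nets induce no edges
            w = 1.0 / (s - 1)
            for u, v in combinations(sorted(net), 2):
                weights[(u, v)] = weights.get((u, v), 0.0) + w
        return weights

    # A 3-pin net becomes a triangle of weight-1/2 edges
    print(clique_expand([[0, 1, 2], [1, 2]]))
    # {(0, 1): 0.5, (0, 2): 0.5, (1, 2): 1.5}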
APA, Harvard, Vancouver, ISO, and other styles