Academic literature on the topic 'Partitioning and placement algorithms'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Partitioning and placement algorithms.'


You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Partitioning and placement algorithms"

1

Ababei, Cristinel. "Speeding Up FPGA Placement via Partitioning and Multithreading." International Journal of Reconfigurable Computing 2009 (2009): 1–9. http://dx.doi.org/10.1155/2009/514754.

Full text
Abstract:
One of the current main challenges of the FPGA design flow is the long processing time of the placement and routing algorithms. In this paper, we propose a hybrid parallelization technique of the simulated annealing-based placement algorithm of VPR developed in the work of Betz and Rose (1997). The proposed technique uses balanced region-based partitioning and multithreading. In the first step of this approach, placement subproblems are created by partitioning and then processed concurrently by multiple worker threads that are run on multiple cores of the same processor. Our main goal is to investigate the speedup that can be achieved with this simple approach compared to previous approaches that were based on distributed computing. The new hybrid parallel placement algorithm achieves an average speedup of 2.5× using four worker threads, while the total wire length and circuit delay after routing are minimally degraded.
APA, Harvard, Vancouver, ISO, and other styles
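The partition-then-anneal scheme the abstract describes can be illustrated with a toy sketch (my own simplification, not the authors' VPR-based code): cells are split into balanced regions, and each worker thread anneals swaps only inside its own region. A single coarse lock guards the shared placement, which serializes more than a real implementation would.

```python
import math
import random
import threading

def hpwl(nets, pos):
    # Half-perimeter wirelength: the standard placement cost over all nets.
    total = 0
    for net in nets:
        xs = [pos[c][0] for c in net]
        ys = [pos[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def anneal_region(cells, nets, pos, lock, steps=2000, t0=2.0):
    # One worker thread: swap-based annealing restricted to its region's cells.
    rng = random.Random(len(cells))
    t = t0
    for _ in range(steps):
        a, b = rng.sample(cells, 2)
        with lock:  # coarse lock keeps the sketch simple
            old = hpwl(nets, pos)
            pos[a], pos[b] = pos[b], pos[a]
            new = hpwl(nets, pos)
            if new > old and rng.random() >= math.exp((old - new) / t):
                pos[a], pos[b] = pos[b], pos[a]  # reject uphill move: undo swap
        t *= 0.999  # geometric cooling schedule

def parallel_place(n_cells, nets, n_regions=2):
    # Balanced partition of cells into regions, then concurrent annealing.
    pos = {c: (c % 8, c // 8) for c in range(n_cells)}  # initial grid placement
    regions = [list(range(r, n_cells, n_regions)) for r in range(n_regions)]
    lock = threading.Lock()
    workers = [threading.Thread(target=anneal_region, args=(reg, nets, pos, lock))
               for reg in regions]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return pos
```

Because swaps stay within a region, the placement remains a permutation of the initial slots; the question the paper studies is how much speedup such concurrent workers gain over a single sequential annealer.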
2

Areibi, Shawki, and Zhen Yang. "Effective Memetic Algorithms for VLSI Design = Genetic Algorithms + Local Search + Multi-Level Clustering." Evolutionary Computation 12, no. 3 (September 2004): 327–53. http://dx.doi.org/10.1162/1063656041774947.

Full text
Abstract:
Combining global and local search is a strategy used by many successful hybrid optimization approaches. Memetic Algorithms (MAs) are Evolutionary Algorithms (EAs) that apply some sort of local search to further improve the fitness of individuals in the population. Memetic Algorithms have been shown to be very effective in solving many hard combinatorial optimization problems. This paper provides a forum for identifying and exploring the key issues that affect the design and application of Memetic Algorithms. The approach combines a hierarchical design technique, Genetic Algorithms, constructive techniques and advanced local search to solve VLSI circuit layout in the form of circuit partitioning and placement. Results obtained indicate that Memetic Algorithms based on local search, clustering and good initial solutions improve solution quality on average by 35% for the VLSI circuit partitioning problem and 54% for the VLSI standard cell placement problem.
APA, Harvard, Vancouver, ISO, and other styles
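The "GA + local search" recipe can be made concrete on the circuit-partitioning half of the paper's target problems. The sketch below (illustrative only; the operator choices are mine, not the authors') evolves balanced graph bisections and applies pairwise-swap local search to every offspring, which is the defining memetic ingredient:

```python
import random

def cut_size(edges, side):
    # Number of edges crossing the two sides of the bisection.
    return sum(1 for u, v in edges if side[u] != side[v])

def local_search(edges, side):
    # Pairwise-swap hill climbing; swapping a 0/1 pair preserves balance.
    improved = True
    while improved:
        improved = False
        best = cut_size(edges, side)
        zeros = [v for v in side if side[v] == 0]
        ones = [v for v in side if side[v] == 1]
        for a in zeros:
            if improved:
                break
            for b in ones:
                side[a], side[b] = 1, 0
                if cut_size(edges, side) < best:
                    improved = True          # keep the improving swap
                    break
                side[a], side[b] = 0, 1      # undo non-improving swap
    return side

def memetic_bisection(n, edges, pop_size=8, gens=20, seed=1):
    rng = random.Random(seed)

    def random_side():
        half = set(rng.sample(range(n), n // 2))
        return {v: (1 if v in half else 0) for v in range(n)}

    # Every individual is refined by local search: the "memetic" step.
    pop = [local_search(edges, random_side()) for _ in range(pop_size)]
    for _ in range(gens):
        p, q = rng.sample(pop, 2)
        # Uniform crossover, then repair to restore the balance constraint.
        child = {v: (p[v] if rng.random() < 0.5 else q[v]) for v in range(n)}
        while sum(child.values()) > n // 2:
            child[rng.choice([v for v in child if child[v] == 1])] = 0
        while sum(child.values()) < n // 2:
            child[rng.choice([v for v in child if child[v] == 0])] = 1
        child = local_search(edges, child)
        pop.sort(key=lambda s: cut_size(edges, s))
        pop[-1] = child  # replace the current worst individual
    return min(pop, key=lambda s: cut_size(edges, s))
```

On two 4-cliques joined by a single edge, this recovers the one-edge cut that separates the cliques.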
3

Liu, Huiqun, Kai Zhu, and D. F. Wong. "FPGA Partitioning with Complex Resource Constraints." VLSI Design 11, no. 3 (January 1, 2000): 219–35. http://dx.doi.org/10.1155/2000/12198.

Full text
Abstract:
In this paper, we present an algorithm for circuit partitioning with complex resource constraints in large FPGAs. Traditional partitioning methods estimate the capacity of an FPGA device by counting the number of logic blocks; however, this is not accurate given the increasingly diverse resource types in new FPGA architectures. We first propose a network-flow-based method to optimally check whether a circuit or a subcircuit is feasible for a set of available heterogeneous resources. The feasibility-checking procedure is then integrated into an FM-based algorithm for circuit partitioning. An incremental flow technique is employed for efficient implementation. Experimental results on the MCNC benchmark circuits show that our partitioning algorithm not only yields good results but is also efficient. Our algorithm for partitioning with complex resource constraints is applicable both to multiple-FPGA designs (e.g., logic emulation systems) and to partitioning-based placement algorithms for a single large hierarchical FPGA (e.g., Actel's ES6500 FPGA family).
APA, Harvard, Vancouver, ISO, and other styles
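A minimal stand-in for the flow-based feasibility check (hedged: the paper builds a full flow network, while this sketch solves the unit-size-block special case by augmenting paths, the only max-flow machinery that case needs): each block lists the resource types it can map to, and each type has a capacity.

```python
def feasible(blocks, capacity):
    """True iff every block can be assigned a compatible resource type
    without exceeding per-type capacities (unit-demand flow solved by
    repeated augmenting paths)."""
    assign = {}                      # block -> resource type
    used = {t: 0 for t in capacity}  # slots of each type already taken

    def augment(b, visited):
        for t in blocks[b]:
            if t in visited:
                continue
            visited.add(t)
            if used[t] < capacity[t]:
                used[t] += 1
                assign[b] = t
                return True
            # Type t is full: try to re-route a block currently on t.
            for other, ot in list(assign.items()):
                if ot == t and augment(other, visited):
                    assign[b] = t  # other vacated t, so b can take it
                    return True
        return False

    return all(augment(b, set()) for b in blocks)
```

For example, three blocks needing one LUT slot and two RAM slots fit a {LUT: 1, RAM: 2} device but not a {LUT: 1, RAM: 1} one.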
4

Saab, Youssef. "A Fast Clustering-Based Min-Cut Placement Algorithm With Simulated-Annealing Performance." VLSI Design 5, no. 1 (January 1, 1996): 37–48. http://dx.doi.org/10.1155/1996/58084.

Full text
Abstract:
Placement is an important constrained optimization problem in the design of very large scale integration (VLSI) circuits [1–4]. Simulated annealing [5] and min-cut placement [6] are two of the most successful approaches to the placement problem. Min-cut methods yield less congested and more routable placements at the expense of more wire-length, while simulated annealing methods tend to optimize the total wire-length more, with little emphasis on the minimization of congestion. It is also well known that min-cut algorithms are substantially faster than simulated-annealing-based methods. In this paper, a fast min-cut algorithm (ROW-PLACE) for row-based placement is presented and is empirically shown to achieve simulated-annealing-quality wire-length on a number of benchmark circuits. In comparison with Timberwolf 6 [7], ROW-PLACE is at least 12 times faster in its normal mode and is at least 25 times faster in its faster mode. The good results of ROW-PLACE are achieved using a very effective clustering-based partitioning algorithm in combination with constructive methods that reduce the wire-length of nets involved in terminal propagation.
APA, Harvard, Vancouver, ISO, and other styles
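The gain-driven inner loop behind min-cut partitioners of the kind ROW-PLACE builds on can be sketched for plain graphs. This is a toy Fiduccia–Mattheyses-style pass (real FM uses bucket lists, incremental gain updates, and hypergraph nets, all of which are skipped here):

```python
def fm_pass(n, edges, side, tolerance=1):
    """One FM-style pass on 2-pin nets: repeatedly move the highest-gain
    unlocked vertex (subject to a balance tolerance), lock it, and keep the
    best partition seen anywhere during the pass."""
    def cut(s):
        return sum(1 for u, v in edges if s[u] != s[v])

    s = dict(side)
    locked = set()
    best_cut, best_state = cut(s), dict(s)
    while len(locked) < n:
        pick, pick_gain = None, None
        for v in range(n):
            if v in locked:
                continue
            ones_after = sum(s.values()) + 1 - 2 * s[v]
            if abs(2 * ones_after - n) > 2 * tolerance:
                continue  # move would violate the balance constraint
            gain = 0  # cut edges removed minus cut edges created by moving v
            for a, b in edges:
                if v == a or v == b:
                    u = b if v == a else a
                    gain += 1 if s[u] != s[v] else -1
            if pick_gain is None or gain > pick_gain:
                pick, pick_gain = v, gain
        if pick is None:
            break
        s[pick] = 1 - s[pick]
        locked.add(pick)
        c = cut(s)
        if c < best_cut:
            best_cut, best_state = c, dict(s)
    return best_state, best_cut
```

Locking each moved vertex lets the pass climb out of shallow local minima; keeping the best prefix of moves is what makes a temporarily worsening move safe.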
5

Shanavas, I. Hameem, and R. K. Gnanamurthy. "Optimal Solution for VLSI Physical Design Automation Using Hybrid Genetic Algorithm." Mathematical Problems in Engineering 2014 (2014): 1–15. http://dx.doi.org/10.1155/2014/809642.

Full text
Abstract:
In the optimization of VLSI physical design, area minimization and interconnect length minimization are important objectives in the physical design automation of very large scale integration chips. Minimizing the area and interconnect length scales down the size of integrated chips. To meet this objective, it is necessary to find an optimal solution for physical design components like partitioning, floorplanning, placement, and routing. This work performs the optimization of benchmark circuits across the above components of physical design using a hierarchical approach of evolutionary algorithms. The goals of minimizing the delay in partitioning, the silicon area in floorplanning, the layout area in placement, and the wirelength in routing have an indefinite influence on other criteria like power, clock, speed, cost, and so forth. A hybrid evolutionary algorithm is applied in each of these phases to achieve the objective: the evolutionary algorithm includes one or more local search steps within its evolutionary cycle to minimize area and interconnect length. This approach combines genetic algorithms and simulated annealing in a hierarchical design to attain the objective, and can quickly produce optimal solutions for the popular benchmarks.
APA, Harvard, Vancouver, ISO, and other styles
6

Yanpei, Liu, Li Chunlin, Yang Zhiyong, Chen Yuxuan, and Xu Lijun. "Performance Guarantee Mechanism for Multi-Tenancy SaaS Service Based on Kalman Filtering." Cybernetics and Information Technologies 15, no. 3 (September 1, 2015): 150–64. http://dx.doi.org/10.1515/cait-2015-0048.

Full text
Abstract:
This paper proposes a special System Architecture for Multi-tenancy SaaS Service (SAMSS), which studies the performance guarantee issues at the business logic layer and the data processing layer respectively. The Kalman filtering Admission Control algorithm (KAC) and the Greedy Copy Management algorithm (GCM) are proposed. At the business logic layer, the Kalman filtering admission control algorithm uses a Kalman filter to dynamically estimate the CPU resources of the multi-tenancy SaaS service, reducing the unnecessary performance expense caused by direct measurement of CPU resources. At the data processing layer, the Greedy Copy Management algorithm casts copy placement as a K-set partitioning problem and adopts a greedy strategy to reduce the number of times a data copy is created. Finally, the experimental analysis and results prove the feasibility and efficiency of the proposed algorithms.
APA, Harvard, Vancouver, ISO, and other styles
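The abstract's idea of tracking CPU demand with a Kalman filter instead of constant direct measurement reduces, in the scalar case, to a few lines. This is a generic textbook filter; the noise parameters below are illustrative, not the paper's:

```python
def kalman_step(x, p, z, q=0.01, r=0.5):
    """One scalar Kalman update: x = CPU-usage estimate, p = its variance,
    z = new noisy measurement; q, r = process / measurement noise."""
    p = p + q                # predict: random-walk model, usage drifts slowly
    k = p / (p + r)          # Kalman gain: how much to trust the measurement
    x = x + k * (z - x)      # pull the estimate toward the measurement
    p = (1 - k) * p          # updated estimate variance
    return x, p
```

An admission controller would feed each new utilization sample through kalman_step and make its accept/reject decision on the smoothed estimate rather than on raw, noisy readings.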
7

Sminesh, C. N., E. Grace Mary Kanaga, and A. G. Sreejish. "Augmented Affinity Propagation-Based Network Partitioning for Multiple Controllers Placement in Software Defined Networks." Journal of Computational and Theoretical Nanoscience 17, no. 1 (January 1, 2020): 228–33. http://dx.doi.org/10.1166/jctn.2020.8655.

Full text
Abstract:
Software Defined Networks (SDN) divide network intelligence and packet forwarding functionalities between control plane and data plane devices respectively. Multiple controllers need to be deployed in the control plane of large SDN networks to improve performance and scalability. In a multi-controller scenario, finding the adequate number of controllers and their load distribution are open research challenges. In a large-scale network, this control plane load balancing is termed the controller placement problem (CPP). Among the existing solutions for the CPP, clustering-based approaches are computationally less intensive. The proposed augmented affinity propagation (augmented-AP) clustering identifies the required number of network partitions and places the controllers such that the distribution of switches to controllers is much better than with existing algorithms. The simulation results show that the augmented-AP algorithm outperforms the existing k-means algorithm on the computed controller imbalance factor.
APA, Harvard, Vancouver, ISO, and other styles
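For contrast with augmented-AP, the k-means baseline the abstract compares against is easy to sketch on switch coordinates (deterministic stride seeding keeps the demo reproducible; this is the baseline, not the paper's augmented-AP algorithm):

```python
def kmeans_controllers(switches, k, iters=20):
    """Plain k-means on switch coordinates: returns controller sites
    (cluster means) and the switch-to-controller assignment."""
    stride = max(1, len(switches) // k)
    centers = switches[::stride][:k]  # deterministic seeding for the demo
    assign = [0] * len(switches)
    for _ in range(iters):
        # Assign each switch to its nearest controller site.
        for i, (x, y) in enumerate(switches):
            assign[i] = min(range(k),
                            key=lambda c: (x - centers[c][0]) ** 2
                                        + (y - centers[c][1]) ** 2)
        # Move each controller site to the mean of its switches.
        for c in range(k):
            members = [switches[i] for i in range(len(switches)) if assign[i] == c]
            if members:
                centers[c] = (sum(m[0] for m in members) / len(members),
                              sum(m[1] for m in members) / len(members))
    return centers, assign

def imbalance(assign, k):
    # Controller load imbalance: gap between heaviest and lightest load.
    loads = [assign.count(c) for c in range(k)]
    return max(loads) - min(loads)
```

Real k-means would use random restarts; the imbalance function mirrors the metric the paper uses to compare the two clusterings.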
8

Li, Jiaqi, Yiqiang Sheng, and Haojiang Deng. "Two Optimization Algorithms for Name-Resolution Server Placement in Information-Centric Networking." Applied Sciences 10, no. 10 (May 22, 2020): 3588. http://dx.doi.org/10.3390/app10103588.

Full text
Abstract:
Information-centric networking (ICN) is an emerging network architecture that has the potential to address demands related to transmission latency and reliability in fifth-generation (5G) communication technology and the Internet of Things (IoT). As an essential component of ICN, name resolution provides the capability to translate identifiers into locators. Applications have different demands on name-resolution latency. To meet the demands, deploying name-resolution servers at the edge of the network by dividing it into multilayer overlay networks is effective. Moreover, optimization of the deployment of distributed name-resolution servers in such networks to minimize deployment costs is significant. In this paper, we first study the placement problem of the name-resolution server in ICN. Then, two algorithms called IIT-DOWN and IIT-UP are developed based on the heuristic ideas of inter-layer information transfer (IIT) and server reuse. They transfer server placement information and latency information between adjacent layers from different directions. Finally, experiments are conducted on both simulation networks and a real-world dataset. The experimental results reveal that the proposed algorithms outperform state-of-the-art algorithms such as the latency-aware hierarchical elastic area partitioning (LHP) algorithm in finding more cost-efficient solutions with a shorter execution time.
APA, Harvard, Vancouver, ISO, and other styles
9

Yun, Seung-kook, and Daniela Rus. "Distributed coverage with mobile robots on a graph: locational optimization and equal-mass partitioning." Robotica 32, no. 2 (December 18, 2013): 257–77. http://dx.doi.org/10.1017/s0263574713001148.

Full text
Abstract:
This paper presents decentralized algorithms for coverage with mobile robots on a graph. Coverage is an important capability of multi-robot systems engaged in a number of different applications, including placement for environmental modeling, deployment for maximal quality surveillance, and even coordinated construction. We use distributed vertex substitution for locational optimization and equal mass partitioning, and the controllers minimize the corresponding cost functions. We prove that the proposed controller with two-hop communication guarantees convergence to the locally optimal configuration. We evaluate the algorithms in simulations and also using four mobile robots.
APA, Harvard, Vancouver, ISO, and other styles
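The equal-mass objective can be illustrated with a centralized toy version (the paper's controllers are distributed and provably convergent; this greedy region-growing stand-in only conveys the objective): grow one region per robot from its seed vertex, always extending whichever region currently has the least accumulated mass.

```python
from collections import deque

def equal_mass_partition(adj, mass, seeds):
    """Greedy stand-in for equal-mass graph partitioning: one region per
    robot, the lightest region grows first, so masses stay balanced."""
    owner = {v: None for v in adj}
    totals, frontiers = {}, {}
    for r, s in enumerate(seeds):
        owner[s] = r
        totals[r] = mass[s]
        frontiers[r] = deque([s])
    while any(owner[v] is None for v in adj):
        grown = False
        for r in sorted(frontiers, key=lambda q: totals[q]):
            while frontiers[r]:
                u = frontiers[r][0]
                free = [w for w in adj[u] if owner[w] is None]
                if free:
                    w = free[0]
                    owner[w] = r
                    totals[r] += mass[w]
                    frontiers[r].append(w)
                    grown = True
                    break
                frontiers[r].popleft()  # u has no free neighbours left
            if grown:
                break
        if not grown:
            break  # remaining vertices unreachable from any seed
    return owner, totals
```

On a 6-vertex path with unit masses and robots seeded at the two ends, each region ends up with exactly half the mass.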
10

Sreenivasa Rao, K., N. Swapna, and P. Praveen Kumar. "Educational data mining for student placement prediction using machine learning algorithms." International Journal of Engineering & Technology 7, no. 1.2 (December 28, 2017): 43. http://dx.doi.org/10.14419/ijet.v7i1.2.8988.

Full text
Abstract:
Data mining is the process of extracting useful information from large sets of data. Data mining enables users to gain insights into the data and make useful decisions from the knowledge mined from databases. The purpose of higher education organizations is to offer superior opportunities to their students. As with data mining, Educational Data Mining (EDM) is nowadays also considered a powerful tool in the field of education. It portrays an effective method for mining a student's performance based on various parameters to predict and analyze whether a student will be recruited in the campus placement. Predictions are made using the machine learning algorithms J48, Naïve Bayes, Random Forest, and Random Tree in the Weka tool, and the Multiple Linear Regression, binomial logistic regression, Recursive Partitioning and Regression Tree (rpart), conditional inference tree (ctree), and Neural Network (nnet) algorithms in RStudio. The results obtained from each approach are then compared with respect to their performance and accuracy levels by graphical analysis. Based on the results, higher education organizations can offer superior training to their students.
APA, Harvard, Vancouver, ISO, and other styles
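One of the listed classifiers, Naïve Bayes, is simple enough to sketch for categorical student features. The toy feature layout below is mine for illustration; the paper relies on the Weka and R implementations:

```python
from collections import defaultdict

def train_nb(rows, labels):
    """Categorical naive Bayes with Laplace smoothing. rows = feature
    tuples (e.g. CGPA band, backlog band), labels = 'placed' / 'not'."""
    classes = set(labels)
    prior = {c: labels.count(c) / len(labels) for c in classes}
    counts = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
    values = defaultdict(set)  # distinct values seen per feature position
    for row, y in zip(rows, labels):
        for j, v in enumerate(row):
            counts[y][j][v] += 1
            values[j].add(v)
    totals = {c: labels.count(c) for c in classes}
    return prior, counts, values, totals

def predict_nb(model, row):
    prior, counts, values, totals = model
    best, best_p = None, -1.0
    for c in prior:
        p = prior[c]
        for j, v in enumerate(row):
            # Laplace-smoothed conditional probability of value v in class c.
            p *= (counts[c][j][v] + 1) / (totals[c] + len(values[j]))
        if p > best_p:
            best, best_p = c, p
    return best
```

Trained on a handful of (CGPA band, backlog band) examples, the model predicts the placement label for unseen feature combinations.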

Dissertations / Theses on the topic "Partitioning and placement algorithms"

1

Stan, Oana. "Placement of tasks under uncertainty on massively multicore architectures." Thesis, Compiègne, 2013. http://www.theses.fr/2013COMP2116/document.

Full text
Abstract:
This PhD thesis is devoted to the study of combinatorial optimization problems related to massively parallel embedded architectures when taking into account uncertain data (e.g., execution times). Our focus is on chance constrained programs with the objective of finding the best solution which is feasible with a preset probability guarantee. A qualitative analysis of the uncertain data we have to treat (dependent random variables, multimodal, multidimensional, difficult to characterize through classical distributions) has led us to design a non-parametric method, the so-called "robust binomial approach", valid whatever the joint distribution and based on robust optimization and statistical hypothesis testing. We also propose a methodology for adapting approximate algorithms for solving stochastic problems by integrating the robust binomial approach when verifying solution feasibility. The practical relevance of our approach is validated through two problems arising in the compilation of dataflow applications for manycore platforms. The first problem treats the stochastic partitioning of networks of processes on a fixed set of nodes, taking into account the load of each node and the uncertainty affecting the weights of the processes. For finding robust solutions, a semi-greedy iterative algorithm has been proposed, which allowed measuring the robustness and cost of the solutions with regard to those for the deterministic version of the problem. The second problem consists in studying the global placement and routing of dataflow applications on a clusterized architecture. The purpose is to place the processes on clusters such that a feasible routing of the communications between tasks exists; a GRASP heuristic has been conceived first for the deterministic case and afterwards extended to the chance-constrained variant of the problem.
APA, Harvard, Vancouver, ISO, and other styles
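The "robust binomial approach" verdict on a candidate solution can be illustrated with a one-sided binomial test over scenario samples. This is my hedged reading of the abstract; the thresholds and names below are illustrative, not the thesis's notation:

```python
from math import comb

def binom_tail(n, k, p):
    # P[X >= k] for X ~ Binomial(n, p)
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

def robust_feasible(violations, n_samples, p0=0.9, alpha=0.05):
    """Declare a candidate solution chance-feasible at level p0 only when the
    scenario evidence is strong: with S = n_samples - violations feasible
    scenarios, reject 'true feasibility probability <= p0' whenever
    P[X >= S | X ~ Bin(n_samples, p0)] <= alpha (one-sided binomial test)."""
    s = n_samples - violations
    return binom_tail(n_samples, s, p0) <= alpha
```

With 50 scenarios and no violations the evidence clears the 90% chance constraint; with 10 violations out of 50 it does not, so such a solution would be rejected during the search.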
2

Urgese, Gianvito. "Computational Methods for Bioinformatics Analysis and Neuromorphic Computing." Doctoral thesis, Politecnico di Torino, 2016. http://hdl.handle.net/11583/2646486.

Full text
Abstract:
The latest biological discoveries and the exponential growth of more and more sophisticated biotechnologies led in the current century to a revolution that totally reshaped the concept of genetic study. This revolution, which began in the last decades, is still continuing thanks to the introduction of new technologies capable of producing a huge amount of biological data in a relatively short time and at a very low price with respect to some decades ago. These new technologies are known as Next Generation Sequencing (NGS). These platforms perform massively parallel sequencing of both RNA and DNA molecules, thus allowing to retrieve the nucleic acid sequence of millions of fragments of DNA or RNA in a single machine run. The introduction of such technologies rapidly changed the landscape of genetic research, providing the ability to answer questions with heretofore unimaginable accuracy and speed. Moreover, the advent of NGS with the consequent need for ad-hoc strategies for data storage, sharing, and analysis is transforming genetics in a big data research field. Indeed, the large amount of data coming from sequencing technologies and the complexity of biological processes call for novel computational tools (Bioinformatics tools) and informatics resources to exploit this kind of information and gain novel insights into human beings, living organisms, and pathologies mechanisms. At the same time, a new scientific discipline called Neuromorphic Computing has been established to develop SW/HW systems having brain-specific features, such as high degree of parallelism and low power consumption. These platforms are usually employed to support the simulation of the nervous system, thus allowing the study of the mechanisms at the basis of the brain functioning. In this scenario, my research program focused on the development of optimized HW/SW algorithms and tools to process the biological information from Bioinformatics and Neuromorphic studies. 
The main objective of the methodologies proposed in this thesis consisted in achieving a high level of sensitivity and specificity in data analysis while minimizing the computational time. To reach these milestones, then, some bottlenecks identified in the state-of-the-art tools have been solved through a careful design of three new optimised algorithms. The work that led to this thesis is part of three collaborative projects. Two concerning the design of Bioinformatics sequence alignment algorithms and one aimed at optimizing the resources usage of a Neuromorphic platform. In the next paragraphs, the projects are briefly introduced. Dynamic Gap Selector Project This project concerned the design and implementation of a new gap model implemented in the dynamic programming sequence alignment algorithms. Smith-Waterman (S-W) and Needleman-Wunsch (N-W) are widespread methods to perform Local and Global alignments of biological sequences such as proteins, DNA and RNA molecules that are represented such as sequences of letters. Both the algorithms make use of scoring procedures to evaluate matches and errors that can be encountered during the sequence alignment process. These scoring strategies are designed to consider insertions and deletions through the identification of gaps in the aligned sequences. The Affine gap model is considered the most accurate model for the alignment of biomolecules. However, its application to S-W and N-W algorithms is quite expensive both in terms of computational time as well as in terms of memory requirements when compared to other less demanding models as the Linear gap one. In order to overcome these drawbacks, an optimised version of the Affine gap model called Dynamic Gap Selector (DGS) has been developed. The alignment scores computed using DGS are very similar to those computed using the gold standard Affine gap model. 
However, the implementation of this novel gap model during the S-W and N-W alignment procedures leads to the reduction of the memory requirements by a factor of 3. Moreover, the DGS model application accounts for a reduction by a factor of 2 in the number of operations required with respect to the standard Affine gap model. isomiR-SEA Project One of the most attractive research fields that is currently investigated by several interdisciplinary research teams is the study of small and medium RNA sequences with regulatory functions on the production of proteins. These RNA molecules are respectively called microRNAs (miRNAs) and long non-coding RNAs (lncRNAs). In the second project, an alignment algorithm specific for miRNAs detection and characterization have been designed and implemented. miRNAs are a class of short RNAs (18-25 bases) that play essential roles in a variety of cellular processes such as development, metabolism, regulation of immunological response and tumor genesis. Several tools have been developed in the last years to align and analyse the huge amount of data coming from the sequencing of short RNA molecules. However, these tools still lack accuracy and completeness because they use general alignment procedures that do not take into account the structural characteristics of miRNA molecules. Moreover, they are not able to detect specific miRNA variants, called isomiRs, that have recently been found to be relevant for miRNA targets regulation. To overcome these limitations, a miRNA-based alignment algorithm has been designed and developed. The isomiR-SEA algorithm is specifically tailored to detect different miRNAs variants (isomiRs) in the RNA-Seq data and to provide users with a detailed picture of the isomiRs spectrum characterizing the sample under investigation. 
The accuracy of the implemented alignment policy is reflected in the precise quantification of miRNAs and isomiRs, and in the detailed profiling of miRNA-target mRNA interaction sites. This information, hidden in raw miRNA sequencing data, can be very useful to properly characterize miRNAs and to adopt them as reliable biomarkers able to describe multifactorial pathologies such as cancer. SNN Partitioning and Placement Project: In the Neuromorphic Computing field, SpiNNaker is one of the state-of-the-art massively parallel neuromorphic platforms. It is designed to simulate Spiking Neural Networks (SNN) but is characterized by several bottlenecks in the neuron partitioning and placement phases executed during the simulation configuration. In this activity, related to the European Flagship project Human Brain Project, a top-down methodology has been developed to improve the scalability and reliability of SNN simulations on massively many-core and densely interconnected platforms. In this context, SNNs mimic the brain activity by emulating spikes sent among neuron populations. Many-core platforms are emerging computing resources to achieve real-time SNN simulations. Neurons are mapped to parallel cores and spikes are sent in the form of packets over the on-chip and off-chip network. However, due to the heterogeneity and complexity of neuron population activity, achieving an efficient exploitation of platform resources is a challenge, often impacting simulation reliability and limiting the biological network size. To address this challenge, the proposed methodology makes use of customized SNN configurations capable of extracting detailed profiling information about network usage of on-chip and off-chip resources, thus allowing the identification of bottlenecks in the spike propagation system.
These bottlenecks have been then considered during the SNN Partitioning and Placement of a graph describing the SNN interconnection on chips and cores available on the SpiNNaker board. The advantages of the proposed SNN Partitioning and Placement applied to the SpiNNaker has been evaluated in terms of traffic reduction and consequent simulation reliability. The results demonstrate that it is possible to consistently reduce packet traffic and improve simulation reliability by means of an effective neuron placement.
APA, Harvard, Vancouver, ISO, and other styles
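The Affine gap model that the thesis's DGS method approximates can be sketched with Gotoh's three-matrix recurrence for Smith-Waterman local alignment (a standard textbook formulation; this is not the thesis's DGS code, and the scoring parameters are illustrative):

```python
def sw_affine(a, b, match=2, mismatch=-1, gap_open=-3, gap_extend=-1):
    """Smith-Waterman local alignment score under the affine gap model:
    opening a gap costs gap_open, each further base costs gap_extend."""
    n, m = len(a), len(b)
    NEG = float('-inf')
    H = [[0] * (m + 1) for _ in range(n + 1)]    # best score ending at (i, j)
    E = [[NEG] * (m + 1) for _ in range(n + 1)]  # alignment ending in a gap in a
    F = [[NEG] * (m + 1) for _ in range(n + 1)]  # alignment ending in a gap in b
    best = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            E[i][j] = max(E[i][j - 1] + gap_extend, H[i][j - 1] + gap_open)
            F[i][j] = max(F[i - 1][j] + gap_extend, H[i - 1][j] + gap_open)
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + s, E[i][j], F[i][j])
            best = max(best, H[i][j])
    return best
```

Keeping the three matrices H, E, F is exactly the memory cost the DGS model cuts by a factor of three, at the price of slightly approximating the affine scores.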
3

Trifunovic, Aleksandar. "Parallel algorithms for hypergraph partitioning." Thesis, Imperial College London, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.430537.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Aslan, Burak Galip, and Halis Püskülcü. "Heuristic container placement algorithms." [s.l.]: [s.n.], 2003. http://library.iyte.edu.tr/tezler/master/bilgisayaryazilimi/T000268.rar.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bahoshy, Nimatallah M. "Parallelization of algorithms by explicit partitioning." Thesis, Loughborough University, 1992. https://dspace.lboro.ac.uk/2134/27004.

Full text
Abstract:
In order to utilize parallel computers, four approaches, broadly speaking, to the provision of parallel software have been followed: (1) automatic production of parallel code by parallelizing compilers which act on sequential programs written in existing languages; (2) "add on" features to existing languages that enable the programmer to make use of the parallel computer (these are specific to each machine); (3) full-blown parallel languages, which could be completely new languages but are usually derived from existing languages; (4) the provision of tools to aid the programmer in the detection of inherent parallelism in a given algorithm and in the design and implementation of parallel programs.
APA, Harvard, Vancouver, ISO, and other styles
6

Zanetti, Luca. "Algorithms for partitioning well-clustered graphs." Thesis, University of Bristol, 2018. http://hdl.handle.net/1983/e6ba8929-6488-4277-b91b-4f4f7eda2b26.

Full text
Abstract:
Graphs occurring in the real world usually exhibit a high level of order and organisation: higher concentration of edges within the same group of vertices, and lower concentration among different groups. A common way to analyse these graphs is to partition the vertex set of a graph into clusters according to some connectivity measure. Graph clustering has been widely applied to many fields of computer science, from machine learning to bioinformatics and social network analysis. The focus of this thesis is to design and analyse algorithms for partitioning graphs presenting a strong cluster-structure, which we call well-clustered. We first study the spectral properties of the Laplacian matrix of such graphs, and prove a structure theorem that relates the eigenvectors corresponding to the smallest eigenvalues of the Laplacian matrix of a graph to the structure of its clusters. We then harness this theorem to analyse Spectral Clustering, arguably the most popular graph clustering algorithm. We give for the first time approximation guarantees on the number of misclassified vertices by Spectral Clustering when applied to well-clustered graphs. Since Spectral Clustering needs to compute as many eigenvectors of the Laplacian matrix as the number of clusters in the graph, its performance deteriorates as this number grows. We present an algorithm that overcomes this issue without compromising its accuracy. This algorithm runs in time nearly linear in the number of the edges and independently of the number of clusters in the input graph. Finally, we tackle the problem of partitioning a graph whose description is distributed among many sites. We present a distributed algorithm that works in a few synchronous rounds, requires limited communication complexity, and achieves the same guarantees of Spectral Clustering as long as the clusters are balanced in size.
APA, Harvard, Vancouver, ISO, and other styles
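The spectral machinery at the heart of the thesis can be sketched by extracting the Fiedler vector (the eigenvector of the second-smallest Laplacian eigenvalue) with deflated power iteration, then splitting vertices by its sign. This is a bisection-only simplification of full Spectral Clustering, which uses k eigenvectors:

```python
def fiedler_partition(n, edges, iters=500):
    """Spectral bisection sketch: power iteration on (c*I - L), with the
    all-ones eigenvector deflated each step, converges to the Fiedler
    vector; its signs give the two clusters."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    c = 2 * max(deg) + 1  # shift so c*I - L is positive definite

    def mul(x):
        # y = (c*I - L) x = (c*I - D + A) x
        y = [(c - deg[i]) * x[i] for i in range(n)]
        for u, v in edges:
            y[u] += x[v]
            y[v] += x[u]
        return y

    x = [(-1) ** i * (i + 1) for i in range(n)]  # arbitrary start vector
    for _ in range(iters):
        m = sum(x) / n
        x = [xi - m for xi in x]  # deflate the all-ones direction
        x = mul(x)
        norm = sum(xi * xi for xi in x) ** 0.5
        x = [xi / norm for xi in x]
    return [0 if xi < 0 else 1 for xi in x]
```

On two triangles joined by a single bridge edge, the sign split recovers the two well-clustered groups, matching the structure theorem's intuition that low Laplacian eigenvectors encode the clusters.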
7

Muppidi, Srinivas Reddy. "Genetic Algorithms for Multi-Objective Partitioning." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1080827924.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Vijaya, Satya Ravi. "Algorithms for Haplotype Inference and Block Partitioning." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2490.

Full text
Abstract:
The completion of the human genome project in 2003 paved the way for studies to better understand and catalog variation in the human genome. The International HapMap Project was started in 2002 with the aim of identifying genetic variation in the human genome and studying the distribution of genetic variation across populations of individuals. The information collected by the HapMap project will enable researchers in associating genetic variations with phenotypic variations. Single Nucleotide Polymorphisms (SNPs) are loci in the genome where two individuals differ in a single base. It is estimated that there are approximately ten million SNPs in the human genome. These ten million SNPs are not completely independent of each other - blocks (contiguous regions) of neighboring SNPs on the same chromosome are inherited together. The pattern of SNPs on a block of the chromosome is called a haplotype. Each block might contain a large number of SNPs, but a small subset of these SNPs are sufficient to uniquely identify each haplotype in the block. The haplotype map or HapMap is a map of these haplotype blocks. Haplotypes, rather than individual SNP alleles, are expected to affect a disease phenotype. The human genome is diploid, meaning that in each cell there are two copies of each chromosome - i.e., each individual has two haplotypes in any region of the chromosome. With the current technology, the cost associated with empirically collecting haplotype data is prohibitively expensive. Therefore, the un-ordered bi-allelic genotype data is collected experimentally. The genotype data gives the two alleles in each SNP locus in an individual, but does not give information about which allele is on which copy of the chromosome. This necessitates computational techniques for inferring haplotypes from genotype data. This computational problem is called the haplotype inference problem. Many statistical approaches have been developed for the haplotype inference problem.
Some of these statistical methods have been shown to be reasonably accurate on real genotype data. However, these techniques are very computation-intensive. With the International HapMap Project collecting information from nearly 10 million SNPs, and with association studies involving thousands of individuals being undertaken, there is a need for more efficient methods for haplotype inference. This dissertation is an effort to develop efficient perfect-phylogeny-based combinatorial algorithms for haplotype inference. The perfect phylogeny haplotyping (PPH) problem is to derive a set of haplotypes for a given set of genotypes with the condition that the haplotypes describe a perfect phylogeny. The perfect phylogeny approach to haplotype inference is applicable to the human genome due to the block structure of the human genome. An important contribution of this dissertation is an optimal O(nm) time algorithm for the PPH problem, where n is the number of genotypes and m is the number of SNPs involved. The complexity of the earlier algorithms for this problem was O(nm^2). The O(nm) complexity was achieved by applying some transformations on the input data and by making use of the FlexTree data structure that has been developed as part of this dissertation work, which represents all the possible PPH solutions for a given set of genotypes. Real genotype data does not always admit a perfect phylogeny, even within a block of the human genome. Therefore, it is necessary to extend the perfect phylogeny approach to accommodate deviations from perfect phylogeny. Deviations from perfect phylogeny might occur because of recombination events and repeated or back mutations (also referred to as homoplasy events). Another contribution of this dissertation is a set of fixed-parameter tractable algorithms for constructing near-perfect phylogenies with homoplasy events.
For the problem of constructing a near-perfect phylogeny with q homoplasy events, the algorithm presented here takes O(nm^2+m^(n+m)) time. Empirical analysis on simulated data shows that this algorithm produces more accurate results than PHASE (a popular haplotype inference program), while being approximately 1000 times faster. Another important problem when dealing with real genotype or haplotype data is the presence of missing entries. The Incomplete Perfect Phylogeny (IPP) problem is to construct a perfect phylogeny on a set of haplotypes with missing entries. The Incomplete Perfect Phylogeny Haplotyping (IPPH) problem is to construct a perfect phylogeny on a set of genotypes with missing entries. Both the IPP and IPPH problems have been shown to be NP-hard. The earlier approaches for both of these problems dealt with restricted versions of the problem, where the root is either available or can be trivially reconstructed from the data, or certain assumptions were made about the data. We make some novel observations about these problems, and present efficient algorithms for unrestricted versions of these problems. The algorithms have worst-case exponential time complexity, but have been shown to be very fast on practical instances of the problem.
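The perfect-phylogeny condition underlying the PPH problem can be tested with the classical four-gamete criterion: a binary haplotype matrix admits a perfect phylogeny exactly when no pair of SNP columns exhibits all four gametes 00, 01, 10, 11. A minimal sketch of that check (an illustration only, not the dissertation's O(nm) algorithm; function names are ours):

```python
from itertools import combinations

def admits_perfect_phylogeny(haplotypes):
    """Four-gamete test: rows of binary haplotypes admit a perfect
    phylogeny iff no pair of SNP columns shows all of 00, 01, 10, 11."""
    m = len(haplotypes[0])
    for i, j in combinations(range(m), 2):
        gametes = {(row[i], row[j]) for row in haplotypes}
        if len(gametes) == 4:
            return False
    return True

# Columns 0 and 1 exhibit all four gametes, so no perfect phylogeny exists.
print(admits_perfect_phylogeny([(0, 0), (0, 1), (1, 0), (1, 1)]))  # False
print(admits_perfect_phylogeny([(0, 0, 1), (0, 1, 1), (1, 0, 0)]))  # True
```

This naive version inspects all column pairs, hence O(nm^2) work; the dissertation's contribution is precisely avoiding that quadratic factor.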
Ph.D.
Other
Engineering and Computer Science
Computer Science
APA, Harvard, Vancouver, ISO, and other styles
9

Martin, Nicolas. "Network partitioning algorithms with scale-free objective." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALT001.

Full text
Abstract:
In light of the complexity induced by large-scale networks, the design of network partitioning algorithms and related problems is at the heart of this thesis. First, we raise a preliminary question on the structure of the partition itself: as the parts may include disconnected nodes, we want to quantify the drawbacks of requiring the nodes inside each part to be connected. Then we study the design of a partitioning algorithm inducing a reduced scale-free network. This allows one to take advantage of the inherent features of this type of network. We also focus on the properties to preserve in order to respect the physical and dynamical profile of the initial network. We then investigate how to partition a network between measured and unmeasured nodes while ensuring that the average of the unmeasured nodes can be efficiently reconstructed. In particular, we show that, under certain hypotheses, this problem reduces to the detection of subgraphs with particular properties, and we propose methods to achieve this detection. Finally, three applications are presented: first, we apply the partitioning algorithm inducing scale-freeness to a large-scale urban traffic network and show that, thanks to the properties preserved through the partition, the reduced network can be used as an abstraction of the initial network. The second and third applications deal with network epidemics. First, we show that the scale-freeness of the abstracting network can be used to build a cure-assignment strategy. In the last application, we take advantage of the result on average reconstruction to estimate the evolution of a disease on a large-scale network.
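The "reduced network" the abstract refers to is the quotient of the original network by the partition: each part collapses to a super-node, and parts are adjacent when some original edge crosses between them. A minimal sketch of that contraction step (names and edge representation are ours, purely illustrative):

```python
def quotient_graph(edges, part_of):
    """Contract each part of a partition into a single super-node.
    An edge joins two super-nodes iff some original edge crosses
    between the corresponding parts; self-loops are dropped."""
    reduced = set()
    for u, v in edges:
        pu, pv = part_of[u], part_of[v]
        if pu != pv:
            reduced.add((min(pu, pv), max(pu, pv)))
    return sorted(reduced)

edges = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
part_of = {1: "A", 2: "A", 3: "B", 4: "B"}
print(quotient_graph(edges, part_of))  # [('A', 'B')]
```

The thesis's algorithm additionally chooses the partition so that the degree distribution of this reduced graph is scale-free; the sketch above only shows the reduction itself.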
APA, Harvard, Vancouver, ISO, and other styles
10

Liu, Huiqun. "Circuit partitioning algorithms for CAD VLSI design." 1999. Digital version accessible at: http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Partitioning and placement algorithms"

1

Zobrist, George W., ed. Routing, placement, and partitioning. Norwood, NJ: Ablex, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Langley Research Center, ed. Approximate algorithms for partitioning and assignment problems. Hampton, VA: National Aeronautics and Space Administration, Langley Research Center, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Walshaw, C. Parallel optimisation algorithms for multilevel mesh partitioning. London: CMS Press, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lueker, George S., ed. Probabilistic analysis of packing and partitioning algorithms. New York: Wiley, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Agaian, S. S., ed. Multidimensional discrete unitary transforms: Representation, partitioning, and algorithms. New York: Marcel Dekker, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Schwartz, Victor Scott. Dynamic platform-independent meta-algorithms for graph-partitioning. Monterey, Calif: Naval Postgraduate School, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bokhari, Shahid H., and Langley Research Center, eds. Efficient algorithms for a class of partitioning problems. Hampton, VA: National Aeronautics and Space Administration, Langley Research Center, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

United States. National Aeronautics and Space Administration, ed. Parallel algorithms for placement and routing in VLSI design. Urbana, Ill.: University of Illinois at Urbana-Champaign, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Tang, Xiaowei. Three extensions to force-directed placement for general graphs. Dublin: University College Dublin, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

El-Darzi, Elia. Methods for solving the set covering and set partitioning problems using graph theoretic (relaxation) algorithms. Uxbridge: Brunel University, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Partitioning and placement algorithms"

1

Pushpa, J., and Pethuru Raj Chelliah. "Expounding k-means-inspired network partitioning algorithm for SDN Controller Placement." In Applied Learning Algorithms for Intelligent IoT, 265–90. Boca Raton: Auerbach Publications, 2021. http://dx.doi.org/10.1201/9781003119838-12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kennings, Andrew A., and Igor L. Markov. "Circuit Placement." In Encyclopedia of Algorithms, 301–6. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4939-2864-4_69.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kennings, Andrew A., and Igor L. Markov. "Circuit Placement." In Encyclopedia of Algorithms, 1–7. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-3-642-27848-8_69-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kennings, Andrew A., and Igor L. Markov. "Circuit Placement." In Encyclopedia of Algorithms, 143–46. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-30162-4_69.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sherwani, Naveed A. "Partitioning." In Algorithms for VLSI Physical Design Automation, 125–58. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4757-2219-2_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sherwani, Naveed. "Partitioning." In Algorithms for VLSI Physical Design Automation, 141–74. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-1-4615-2351-2_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Li, Angsheng, and Peng Zhang. "Unbalanced Graph Partitioning." In Algorithms and Computation, 218–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-17517-6_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Weiss, William. "Partitioning Topological Spaces." In Algorithms and Combinatorics, 154–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-72905-8_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Osipov, Vitaly, Peter Sanders, and Christian Schulz. "Engineering Graph Partitioning Algorithms." In Experimental Algorithms, 18–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-30850-5_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sriram, M., and S. M. Kang. "System Partitioning and Chip Placement." In Physical Design for Multichip Modules, 69–97. Boston, MA: Springer US, 1994. http://dx.doi.org/10.1007/978-1-4615-2682-7_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Partitioning and placement algorithms"

1

Li, Jianhua, Laleh Behjat, and Logan Rakai. "Clustering algorithms for circuit partitioning and placement problems." In 2007 European Conference on Circuit Theory and Design (ECCTD 2007). IEEE, 2007. http://dx.doi.org/10.1109/ecctd.2007.4529654.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cong, Jason, Michail Romesis, and Min Xie. "Optimality, scalability and stability study of partitioning and placement algorithms." In the 2003 international symposium. New York, New York, USA: ACM Press, 2003. http://dx.doi.org/10.1145/640000.640021.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bazylevych, Roman, and Lubov Bazylevych. "The methodology and algorithms for solving the very large-scale physical design automation problems: Partitioning, packaging, placement and routing." In 2013 2nd Mediterranean Conference on Embedded Computing (MECO). IEEE, 2013. http://dx.doi.org/10.1109/meco.2013.6601386.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yao, Wenbin, Zhen Guo, and Dongbin Wang. "An Energy Efficient Virtual Machine Placement Algorithm Based on Graph Partitioning in Cloud Data Center." In 2017 IEEE International Symposium on Parallel and Distributed Processing with Applications and 2017 IEEE International Conference on Ubiquitous Computing and Communications (ISPA/IUCC). IEEE, 2017. http://dx.doi.org/10.1109/ispa/iucc.2017.00066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yang, Zhiyuan, and Ankur Srivastava. "Co-Placement for Pin-Fin Based Micro-Fluidically Cooled 3D ICs." In ASME 2015 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems collocated with the ASME 2015 13th International Conference on Nanochannels, Microchannels, and Minichannels. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/ipack2015-48354.

Full text
Abstract:
3D ICs with through-silicon vias (TSVs) can achieve high performance while exacerbating the problem of heat removal. This necessitates the use of more aggressive cooling solutions such as micropin-fin based fluidic cooling. However, micropin-fin cooling comes with overheads such as non-uniform cooling capacity along the flow direction and restrictions on TSV positions, which are limited to where pins exist. 3D gate and TSV placement approaches unaware of these drawbacks may lead to detrimental effects and even infeasible chip design. In this paper, we present a hierarchical partitioning-based algorithm for co-placing gates and TSVs to co-optimize the wire-length and in-layer temperature uniformity, given the logical level netlist and layer assignment of gates. Compared to the wire-length driven gate placement followed by a TSV legalization stage, our approach can achieve up to 75% and 25% reduction of in-layer temperature variation and peak temperature, respectively, at the cost of a 13% increase in wire-length.
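As a rough illustration of the control flow behind partitioning-based placement (a toy sketch only, not the authors' co-placement algorithm, which also balances temperature uniformity and TSV legality), a recursive bisection placer splits the gate set and the chip region in tandem:

```python
def bisection_place(gates, region, depth=0, max_depth=2):
    """Toy recursive-bisection placer: split the gate list in half,
    assign each half to one half of the region, and recurse with
    alternating vertical/horizontal cuts.  Real partitioners cut a
    netlist to minimize crossing nets; here the split is arbitrary,
    purely to illustrate the control flow."""
    x0, y0, x1, y1 = region
    if depth == max_depth or len(gates) <= 1:
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
        return {g: (cx, cy) for g in gates}
    half = len(gates) // 2
    if depth % 2 == 0:  # vertical cut
        xm = (x0 + x1) / 2
        a = bisection_place(gates[:half], (x0, y0, xm, y1), depth + 1, max_depth)
        b = bisection_place(gates[half:], (xm, y0, x1, y1), depth + 1, max_depth)
    else:               # horizontal cut
        ym = (y0 + y1) / 2
        a = bisection_place(gates[:half], (x0, y0, x1, ym), depth + 1, max_depth)
        b = bisection_place(gates[half:], (x0, ym, x1, y1), depth + 1, max_depth)
    return {**a, **b}

placement = bisection_place(["g1", "g2", "g3", "g4"], (0.0, 0.0, 1.0, 1.0))
# g1 lands at the center of its quadrant: (0.25, 0.25)
```

Each recursion level halves both the gate set and the region, which is the "hierarchical partitioning" skeleton the abstract describes.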
APA, Harvard, Vancouver, ISO, and other styles
6

AL-Qahtani, Ghazi D., and Noah Berlow. "Large Scale Placement For Multilateral Wells Using Network Optimization." In SPE Middle East Oil & Gas Show and Conference. SPE, 2021. http://dx.doi.org/10.2118/204803-ms.

Full text
Abstract:
Multilateral wells are an evolution of horizontal wells in which several wellbore branches radiate from the main borehole. In the last two decades, multilateral wells have been increasingly utilized in producing hydrocarbon reservoirs. The main advantage of using such technology against conventional and single-bore wells comes from the additional access to reservoir rock by maximizing the reservoir contact with fewer resources. Today, multilateral wells are rapidly becoming more complex in both designs and architecture (i.e., extended reach wells, maximum reservoir contact, and extreme reservoir contact wells). Certain multilateral design templates prevail in the industry, such as fork and fishbone types, which tend to be populated throughout the reservoir of interest with no significant changes to the original architecture and, therefore, may not fully realize the reservoir's potential. Placement of optimal multilateral wells is a multivariable problem, which is a function of determining the best well locations and trajectories in a hydrocarbon reservoir with the ultimate objectives of maximizing productivity and recovery. The placement of the multilateral wells can be subject to many constraints such as the number of wells required, maximum length limits, and overall economics. This paper introduces a novel technology for placement of multilateral wells in hydrocarbon reservoirs utilizing a transshipment network optimization approach. This method generates scenarios of multiple wells with different designs honoring the most favorable completion points in a reservoir. In addition, the algorithm was developed to find the most favorable locations and trajectories for the multilateral wells in both local and global terms. A partitioning algorithm is uniquely utilized to reduce the computational cost of the process. The proposed method will not only create different multilateral designs; it will justify the trajectories of every borehole section generated.
The innovative method is capable of constructing hundreds of multilateral wells with design variations in large-scale reservoirs. As the complexity of the reservoirs (e.g., active forces that influence fluid mobility) and heterogeneity dictate variability in performance in different areas of the reservoir, multilateral wells should be constructed to capture the most productive zones. The new method also allows different levels of branching for the laterals (i.e., laterals can emanate from the motherbore, from other laterals, or from subsequent branches). These features set the stage for a new generation of multilateral wells to achieve the most effective reservoir contact.
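The abstract does not spell out the transshipment formulation, but the core idea of routing a lateral along the cheapest sequence of completion cells can be sketched, as a loose hypothetical analogue, with Dijkstra's algorithm over a grid of per-cell costs (grid, costs, and function names are all our assumptions):

```python
import heapq

def best_trajectory(cost, start, target):
    """Dijkstra over a 2-D grid of per-cell completion costs: a toy
    stand-in for a network-optimization well-path search, finding the
    cheapest cell path a lateral could follow from start to target."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    heap, prev = [(dist[start], start)], {}
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == target:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], target
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[target]

grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
path, total = best_trajectory(grid, (0, 0), (0, 2))
# The path detours around the high-cost column; total == 7
```

A real transshipment model would add supply/demand nodes and branching constraints; the sketch only captures the "cheapest trajectory through favorable cells" intuition.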
APA, Harvard, Vancouver, ISO, and other styles
7

Verplaetse, P., J. Dambre, D. Stroobandt, and J. Van Campenhout. "On partitioning vs. placement rent properties." In the 2001 international workshop. New York, New York, USA: ACM Press, 2001. http://dx.doi.org/10.1145/368640.368665.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ravichandran, Ramprasad, Mike Niemier, and Sung Kyu Lim. "Partitioning and placement for buildable QCA circuits." In the 2005 conference. New York, New York, USA: ACM Press, 2005. http://dx.doi.org/10.1145/1120725.1120902.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Can Yildiz, Mehmet, and Patrick H. Madden. "Improved cut sequences for partitioning based placement." In the 38th conference. New York, New York, USA: ACM Press, 2001. http://dx.doi.org/10.1145/378239.379064.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Madden, P. "Session details: Session 4: Partitioning & Placement." In ISPD03: International Symposium on Physical Design. New York, NY, USA: ACM, 2003. http://dx.doi.org/10.1145/3248312.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Partitioning and placement algorithms"

1

Abou-rjeili, Amine, and George Karypis. Multilevel Algorithms for Partitioning Power-Law Graphs. Fort Belvoir, VA: Defense Technical Information Center, October 2005. http://dx.doi.org/10.21236/ada439402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Selvakkumaran, Navaratnasothie, Abhishek Ranjan, Salil Raje, and George Karypis. Scalable Partitioning Algorithms for FPGAs With Heterogeneous Resources. Fort Belvoir, VA: Defense Technical Information Center, September 2004. http://dx.doi.org/10.21236/ada439474.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Moulitsas, Irene, and George Karypis. Partitioning Algorithms for Simultaneously Balancing Iterative and Direct Methods. Fort Belvoir, VA: Defense Technical Information Center, March 2004. http://dx.doi.org/10.21236/ada439418.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kashyap, Abhishek, Samir Khuller, and Mark Shayman. Relay Placement Approximation Algorithms for k-Connectivity in Wireless Sensor Networks. Fort Belvoir, VA: Defense Technical Information Center, January 2006. http://dx.doi.org/10.21236/ada455438.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Selvakkumaran, Navaratnasothie, and George Karypis. Multi-Objective Hypergraph Partitioning Algorithms for Cut and Maximum Subdomain Degree Minimization. Fort Belvoir, VA: Defense Technical Information Center, September 2004. http://dx.doi.org/10.21236/ada439471.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Selvakkumaran, Navaratnasothie, and George Karypis. Multi-Objective Hypergraph Partitioning Algorithms for Cut and Maximum Subdomain Degree Minimization. Fort Belvoir, VA: Defense Technical Information Center, April 2003. http://dx.doi.org/10.21236/ada439577.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ling, Hao. Application of Model-Based Signal Processing and Genetic Algorithms for Shipboard Antenna Design, Placement Optimization. Fort Belvoir, VA: Defense Technical Information Center, January 2002. http://dx.doi.org/10.21236/ada399555.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Banks, Jeffrey. CARPE DIEM: Coupled Algorithms for Robust Partitioning of Equations for the Dynamic Interactions of Evolving Materials. Office of Scientific and Technical Information (OSTI), November 2021. http://dx.doi.org/10.2172/1829714.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Idakwo, Gabriel, Sundar Thangapandian, Joseph Luttrell, Zhaoxian Zhou, Chaoyang Zhang, and Ping Gong. Deep learning-based structure-activity relationship modeling for multi-category toxicity classification : a case study of 10K Tox21 chemicals with high-throughput cell-based androgen receptor bioassay data. Engineer Research and Development Center (U.S.), July 2021. http://dx.doi.org/10.21079/11681/41302.

Full text
Abstract:
Deep learning (DL) has attracted the attention of computational toxicologists as it offers a potentially greater power for in silico predictive toxicology than existing shallow learning algorithms. However, contradicting reports have been documented. To further explore the advantages of DL over shallow learning, we conducted this case study using two cell-based androgen receptor (AR) activity datasets with 10K chemicals generated from the Tox21 program. A nested double-loop cross-validation approach was adopted along with a stratified sampling strategy for partitioning chemicals of multiple AR activity classes (i.e., agonist, antagonist, inactive, and inconclusive) at the same distribution rates amongst the training, validation and test subsets. Deep neural networks (DNN) and random forest (RF), representing deep and shallow learning algorithms, respectively, were chosen to carry out structure-activity relationship-based chemical toxicity prediction. Results suggest that DNN significantly outperformed RF (p < 0.001, ANOVA) by 22–27% for four metrics (precision, recall, F-measure, and AUPRC) and by 11% for another (AUROC). Further in-depth analyses of chemical scaffolding shed insights on structural alerts for AR agonists/antagonists and inactive/inconclusive compounds, which may aid in future drug discovery and improvement of toxicity prediction modeling.
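The stratified sampling step described above, which keeps the four AR activity classes at the same distribution rates across the training, validation, and test subsets, can be sketched as follows (a generic illustration; names and split fractions are our assumptions, not the study's code):

```python
import random
from collections import defaultdict

def stratified_split(items, labels, fractions=(0.8, 0.1, 0.1), seed=0):
    """Partition items into train/validation/test so that each label
    (e.g. agonist, antagonist, inactive, inconclusive) appears at
    roughly the same rate in every subset."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for item, label in zip(items, labels):
        by_label[label].append(item)
    splits = ([], [], [])
    for members in by_label.values():
        rng.shuffle(members)          # randomize within each class
        n = len(members)
        a = int(fractions[0] * n)
        b = a + int(fractions[1] * n)
        splits[0].extend(members[:a])
        splits[1].extend(members[a:b])
        splits[2].extend(members[b:])
    return splits

items = list(range(100))
labels = ["agonist" if i < 20 else "inactive" for i in items]
train, val, test = stratified_split(items, labels)
# 20% of each subset is "agonist", matching the overall rate
```

Splitting per class before pooling is what guarantees the matched class rates; a plain random split would only match them in expectation.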
APA, Harvard, Vancouver, ISO, and other styles
10

Sinclair, Samantha, and Sandra LeGrand. Reproducibility assessment and uncertainty quantification in subjective dust source mapping. Engineer Research and Development Center (U.S.), August 2021. http://dx.doi.org/10.21079/11681/41523.

Full text
Abstract:
Accurate dust-source characterizations are critical for effectively modeling dust storms. A previous study developed an approach to manually map dust plume-head point sources in a geographic information system (GIS) framework using Moderate Resolution Imaging Spectroradiometer (MODIS) imagery processed through dust-enhancement algorithms. With this technique, the location of a dust source is digitized and recorded if an analyst observes an unobscured plume head in the imagery. Because airborne dust must be sufficiently elevated for overland dust-enhancement algorithms to work, this technique may include up to 10 km in digitized dust-source location error due to downwind advection. However, the potential for error in this method due to analyst subjectivity has never been formally quantified. In this study, we evaluate a version of the methodology adapted to better enable reproducibility assessments amongst multiple analysts to determine the role of analyst subjectivity on recorded dust source location error. Four analysts individually mapped dust plumes in Southwest Asia and Northwest Africa using five years of MODIS imagery collected from 15 May to 31 August. A plume-source location is considered reproducible if the maximum distance between the analyst point-source markers for a single plume is ≤10 km. Results suggest analyst marker placement is reproducible; however, additional analyst subjectivity-induced error (7 km determined in this study) should be considered to fully characterize locational uncertainty. Additionally, most of the identified plume heads (> 90%) were not marked by all participating analysts, which indicates dust source maps generated using this technique may differ substantially between users.
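The reproducibility criterion above, maximum pairwise distance between analyst markers of at most 10 km, can be sketched directly (a generic illustration with hypothetical marker coordinates; the study's GIS workflow is not reproduced here):

```python
import math
from itertools import combinations

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def reproducible(markers, tol_km=10.0):
    """A plume source counts as reproducible if the maximum distance
    between any two analyst markers is <= tol_km (10 km in the report)."""
    return max(haversine_km(p, q) for p, q in combinations(markers, 2)) <= tol_km

# Hypothetical analyst markers clustered around one plume head.
markers = [(30.00, 47.00), (30.02, 47.01), (29.99, 47.03)]
print(reproducible(markers))  # True (all pairs within a few km)
```

With four analysts the check involves only six pairwise distances per plume, so the criterion is cheap to evaluate over an entire season of imagery.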
APA, Harvard, Vancouver, ISO, and other styles