Dissertations on the topic "Greedy heuristic algorithms"


Explore the top 15 dissertations for research on the topic "Greedy heuristic algorithms".

Next to each work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these are available in the metadata.

Browse dissertations across a wide variety of disciplines and compile your bibliography correctly.

1

Mathirajan, M. "Heuristic Scheduling Algorithms For Parallel Heterogeneous Batch Processors." Thesis, Indian Institute of Science, 2000. http://hdl.handle.net/2005/196.

Full text of the source
Abstract:
In the last decade, market pressures for greater variety of products forced a gradual shift from continuous manufacturing to batch manufacturing in various industries. Consequently, batch scheduling problems have attracted the attention of researchers in production and operations management. This thesis addresses the scheduling of parallel non-identical batch processors in the presence of dynamic job arrivals, incompatible job-families and non-identical job sizes. The problem abstracts the scheduling of heat-treatment furnace operations of castings in a steel foundry, and is of considerable interest in this sector because a large proportion of the total production time is spent in heat-treatment processing. The problem is also encountered in other industrial settings, such as the burn-in operation in the final testing stage of semiconductor manufacturing, and the manufacturing of steel, ceramics, aircraft parts, footwear, etc. A detailed literature review and personal communications with experts revealed that this class of batch scheduling problems had not been addressed hitherto. A major concern in the management of foundries is to maximize throughput and reduce flow time and work-in-process inventories. We therefore chose the utilization of the batch processors as the primary scheduling objective, and the minimization of overall flow time and of weighted average waiting time per job as secondary objectives. This formulation can be considered an extension of the problems studied by Dobson and Nambinadom (1992), Uzsoy (1995), Zee et al. (1997) and Mehta and Uzsoy (1998). Our effort to carefully catalogue the large number of variants of deterministic batch scheduling problems led us to develop a taxonomy and notation. Not surprisingly, we are able to show that our problem is NP-hard and is therefore in the company of many scheduling problems that are difficult to solve. Initially, two heuristic algorithms were developed: a mathematical-programming-based heuristic algorithm (MPHA) and a greedy heuristic algorithm. Owing to the computational overheads of MPHA compared with the greedy heuristic, we chose to focus on the latter as the primary scheduling methodology. Preliminary experimentation showed that the performance of greedy heuristics depends critically on the selection of job-families, so eight variants of the greedy heuristic that differ mainly in the "job-family selection" decision were proposed. These eight heuristics form two sets, {A1, A2, A3, A4} and the modified {MA1, MA2, MA3, MA4}, which differ only in how the job-family index, a weighted shortest processing time, is computed. Computational experiments were carried out to evaluate the performance of the eight heuristics. The analysis of the experimental data is presented from two perspectives: the first evaluates the absolute quality of the solutions obtained by the proposed heuristic algorithms against estimated optimal solutions, and the second compares the relative performance of the proposed heuristics. The test problems were designed to reflect real-world scheduling problems that we observed in the steel-casting industry. Three problem parameters drive the test-set generation: the number of jobs [n], job-priority [P], and job-family [F]. We considered 5 levels for n, 2 levels for P and 2 levels for F.
The test set reflects that (i) job sizes vary uniformly, (ii) there are two batch processors, and (iii) there are five incompatible job-families with different processing times. 15 problem instances were generated for each level of (n, P, F). Of the many procedures available in the literature for estimating the optimal value of combinatorial optimization problems, we used the procedure based on the Weibull distribution discussed in Rardin and Uzsoy (2001). For each of the 300 randomly generated problem instances, 15 feasible solutions (in terms of the average utilization of batch processors, AUBP) were obtained using a random decision rule for the first two stages and a best-fit heuristic for the last stage of the scheduling problem. These 15 feasible solutions were used to estimate the optimal value, which they are expected to provide with very high probability. Both the average and worst-case performance of the heuristics indicated that algorithms A3 and A4 on average yielded better utilization than the estimated optimal value, suggesting that the Weibull-based technique may have yielded conservative estimates of the optimal value. The other heuristic algorithms found solutions inferior to the estimated optimal value, but the deviations were very small; from this we may infer that all the proposed heuristic algorithms are acceptable. The relative evaluation of the heuristics considered both computational effort and solution quality. The computational burden is low enough, on average, to run all the proposed heuristics on each problem instance and select the best solution. Also, any algorithm from the first set {A1, A2, A3, A4} takes more computational time than any from the second set {MA1, MA2, MA3, MA4}. Regarding solution quality, two inferences were made: (i) in general, the heuristic algorithms are sensitive to the choice of problem factors with respect to all the scheduling objectives; (ii) the three algorithms A3, MA4 and MA1 are superior with respect to, respectively, maximizing the average utilization of batch processors (AUBP), minimizing overall flow time (OFT), and minimizing weighted average waiting time (WAWT). Further, the heuristic algorithm MA1 turns out to be the best choice if we trade off all three objectives, AUBP, OFT and WAWT. Finally, we carried out simple sensitivity-analysis experiments to understand the influence of some scheduling parameters on the performance of the heuristic algorithms, making one-at-a-time changes in (1) the job-size distribution, (2) the capacities of the batch processors and (3) the processing times of the job-families. The analyses indicate that changes in these input parameters do have an influence, and their results can guide the selection of a heuristic for a particular combination of input parameters. For example, if a single heuristic algorithm must be picked, MA1 is the best choice considering both performance and the robustness indicated by the sensitivity analysis. In summary, this thesis examined a problem arising in the scheduling of heat-treatment operations in the steel-casting industry, abstracted to a class of deterministic batch scheduling problems.
We analyzed the computational complexity of this problem and showed that it is NP-hard and therefore unlikely to admit a scalable exact method. Eight variants of a fast greedy heuristic were designed to solve the scheduling problem of interest. Extensive computational experiments compared the heuristics both with estimated optimal values (using the Weibull technique) and with one another; they showed that the heuristics consistently obtain near-(estimated-)optimal solutions with very low computational burden, even for large-scale problems. Finally, a comprehensive sensitivity analysis studied the influence of a few parameters, changed one at a time, on the performance of the heuristic algorithms, giving users some confidence in the robustness of the proposed heuristics.
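Since the abstract pivots on the "job-family selection" decision, a small sketch may help fix ideas. The Python fragment below is a minimal sketch under stated assumptions: it forms one batch by (1) picking the family with the largest WSPT-style index, here taken to be total family weight per unit of family processing time, and (2) filling the batch largest-job-first. The eight variants A1-A4 and MA1-MA4 differ precisely in how such an index is computed, and the exact rule is not given in the abstract.

```python
from dataclasses import dataclass

@dataclass
class Job:
    size: float      # jobs share a batch up to the processor capacity
    weight: float
    family: int      # jobs from incompatible families cannot share a batch

def greedy_batch(available, capacity, family_proc_time):
    """Form the next batch for a free batch processor (one greedy step)."""
    by_family = {}
    for j in available:
        by_family.setdefault(j.family, []).append(j)
    # 1) job-family selection: a WSPT-style index (assumed form)
    index = lambda f: sum(j.weight for j in by_family[f]) / family_proc_time[f]
    fam = max(by_family, key=index)
    # 2) pack the chosen family's jobs, largest first, up to capacity
    batch, used = [], 0.0
    for j in sorted(by_family[fam], key=lambda j: -j.size):
        if used + j.size <= capacity:
            batch.append(j)
            used += j.size
    return fam, batch
```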
APA, Harvard, Vancouver, ISO, and other styles
2

Turlapaty, Sandhya. "Implementation and Performance Comparison of Some Heuristic Algorithms for Block Sorting." UNF Digital Commons, 2018. https://digitalcommons.unf.edu/etd/816.

Full text of the source
Abstract:
An implementation framework has been developed in this thesis for a well-known APX-hard combinatorial optimization problem known as Block Sorting. The motivation for the study of this problem comes from applications such as computational biology and optical character recognition. While existing Block Sorting research has focused on the theoretical development and analysis of several approximation algorithms, little or no work has been carried out thus far on implementing the proposed approximation algorithms. Conceptualizing an implementation framework and illustrating its use by experimenting with the existing approximation algorithms provides a means for discovering newer approaches to this important problem. As its main contribution, the research in this thesis provides a new greedy algorithm for Block Sorting in which each block move either reduces the number of blocks by two or three, or reduces by one the number of reversals or inversions in the original permutation. Experimental results for all algorithms are also provided, along with a comparison of their performance using the number of block moves and approximation ratios as performance metrics when sorting permutations of a given order, and as the order of the permutations is varied. Preliminary results from the experimentation were shared at the 2017 IEEE 17th International Conference on Bioinformatics and Bioengineering (BIBE) [1]. To the best of our knowledge, this is the first work focused on the implementation and experimental performance analysis of any algorithm for Block Sorting. We believe the results presented in this thesis will be useful for researchers and practitioners working in this area.
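To make the block-move mechanics concrete, here is a minimal Python sketch of a greedy block-sorting loop. It implements only the simplest kind of improving move (slide a block so it sits directly after its predecessor value, merging at least two blocks per move); the algorithm proposed in the thesis also considers moves that reduce reversals or inversions, which this sketch omits.

```python
def blocks(perm):
    """Maximal runs of consecutive increasing values, e.g. [3,4,1,2] -> [[3,4],[1,2]]."""
    out, start = [], 0
    for i in range(1, len(perm) + 1):
        if i == len(perm) or perm[i] != perm[i - 1] + 1:
            out.append(perm[start:i])
            start = i
    return out

def greedy_block_sort(perm):
    """Count block moves: repeatedly take a block starting with value v and
    move it directly after value v - 1, merging at least two blocks per move."""
    perm, moves = list(perm), 0
    while len(blocks(perm)) > 1:
        bl = blocks(perm)
        b = bl[0] if bl[0][0] != min(perm) else bl[1]   # a block with a predecessor
        bs = set(b)
        rest = [x for x in perm if x not in bs]
        pos = rest.index(b[0] - 1) + 1                  # slot right after v - 1
        perm = rest[:pos] + b + rest[pos:]
        moves += 1
    return moves

print(greedy_block_sort([4, 5, 6, 1, 2, 3, 7]))   # 1: slide [4,5,6] after 3
```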
APA, Harvard, Vancouver, ISO, and other styles
3

Neas, Charles Bennett. "A Greedy Search Algorithm for Maneuver-Based Motion Planning of Agile Vehicles." Thesis, Virginia Tech, 2010. http://hdl.handle.net/10919/36213.

Full text of the source
Abstract:
This thesis presents a greedy search algorithm for maneuver-based motion planning of agile vehicles. In maneuver-based motion planning, vehicle maneuvers are solved offline and saved in a library to be used during motion planning. From this library, a tree of possible vehicle states can be generated through the search space. A depth-first, library-based algorithm called AD-Lib is developed and used to quickly provide feasible trajectories along the tree. AD-Lib combines greedy search techniques with hill climbing and effective backtracking to guide the search process rapidly towards the goal. Using simulations of a four-thruster hovercraft, AD-Lib is compared to existing suboptimal search algorithms in both known and unknown environments with static obstacles. AD-Lib is shown to be faster than existing techniques, at the expense of increased path cost. The motion planning strategy of AD-Lib along with a switching controller is also tested in an environment with dynamic obstacles.
Master of Science
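A toy sketch of the search skeleton may help. The maneuver library below (names and displacements) is purely illustrative, not the thesis's data: the point is the greedy child ordering (closest resulting state to the goal first) combined with depth-first backtracking, which is the combination the abstract attributes to AD-Lib.

```python
import math

# Hypothetical maneuver library: each maneuver displaces the 2-D state by (dx, dy).
LIBRARY = {"dash": (1.0, 0.0), "veer_left": (0.8, 0.6), "veer_right": (0.8, -0.6)}

def greedy_dfs(state, goal, feasible, path=None, depth=50):
    """Depth-first search that expands maneuvers in greedy order (child state
    closest to the goal first) and backtracks on dead ends: the skeleton of a
    library-based planner in the spirit of AD-Lib, not AD-Lib itself."""
    path = path or []
    if math.dist(state, goal) < 0.5:          # close enough: success
        return path
    if depth == 0:
        return None
    children = [((state[0] + dx, state[1] + dy), m)
                for m, (dx, dy) in LIBRARY.items()]
    children.sort(key=lambda c: math.dist(c[0], goal))   # greedy ordering
    for nxt, m in children:
        if feasible(nxt):                     # obstacle / constraint check
            found = greedy_dfs(nxt, goal, feasible, path + [m], depth - 1)
            if found is not None:
                return found
    return None                               # dead end: backtrack

route = greedy_dfs((0.0, 0.0), (6.0, 2.0), feasible=lambda p: abs(p[1]) < 3)
print(route)
```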
APA, Harvard, Vancouver, ISO, and other styles
4

TAKADA, Hiroaki, Hiroyuki TOMIYAMA, Gang ZENG, and Tetsuo YOKOYAMA. "Static Task Scheduling Algorithms Based on Greedy Heuristics for Battery-Powered DVS Systems." Institute of Electronics, Information and Communication Engineers, 2010. http://hdl.handle.net/2237/15037.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Hossain, Mohammad Forhad. "Spanning Tree Approach On The Snow Cleaning Problem." Thesis, Högskolan Dalarna, Datateknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:du-4847.

Full text of the source
Abstract:
Snow cleaning is one of the important tasks during winter in Sweden, and every year the government spends a huge amount of money on it. In this thesis we generate a shortest road network of the city and place depots at different locations in the city for snow cleaning. We generate the road network using a minimum spanning tree algorithm and find the depot positions using a greedy heuristic. When snow falls, vehicles start working from the depots and clean the snow on the entire road network of the city. We develop two types of models: an economic model and an efficient model. The economic model provides a good economical solution to the problem and uses fewer vehicles; the efficient model provides a good efficient solution and takes less time to clean the entire road network.
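A compact sketch of the two building blocks named above, assuming a weighted road graph and a precomputed shortest-path distance table. The depot rule shown (a greedy k-center step) is one plausible reading of "greedy heuristic"; the abstract does not spell out the exact selection criterion.

```python
import heapq

def prim_mst(n, edges):
    """Prim's algorithm; edges are (u, v, w) triples on nodes 0..n-1.
    Returns the minimum spanning tree as an edge list."""
    adj = {v: [] for v in range(n)}
    for u, v, w in edges:
        adj[u].append((w, u, v))
        adj[v].append((w, v, u))
    seen, mst, pq = {0}, [], adj[0][:]
    heapq.heapify(pq)
    while pq and len(seen) < n:
        w, u, v = heapq.heappop(pq)
        if v not in seen:
            seen.add(v)
            mst.append((u, v, w))
            for e in adj[v]:
                heapq.heappush(pq, e)
    return mst

def greedy_depots(nodes, dist, k):
    """Greedy k-center style depot placement: repeatedly open the depot that
    most reduces the farthest node-to-nearest-depot distance. `dist[u][v]` is
    an assumed precomputed shortest-path distance table."""
    depots = []
    for _ in range(k):
        best = min(nodes, key=lambda c: max(
            min(dist[v][d] for d in depots + [c]) for v in nodes))
        depots.append(best)
    return depots
```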
APA, Harvard, Vancouver, ISO, and other styles
6

Althoby, Haeder Younis Ghawi. "Theoritical and numerical studies on the graph partitioning problem." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMC233/document.

Full text of the source
Abstract:
Given a connected undirected graph G=(V,E) and a positive integer β(n), where n is the number of vertices, the vertex separator problem (VSP) is to find a partition of V into three classes A, B and C such that there is no edge between A and B, max{|A|,|B|} is less than or equal to β(n), and |C| is minimum. In this thesis, we consider an integer programming formulation for this problem. We describe some valid inequalities and use these results to develop algorithms based on a neighborhood scheme. We also study the st-connected vertex separator problem. Let s and t be two non-adjacent vertices of V. An st-connected separator in the graph G is a subset S of V\{s,t} such that there are no paths between s and t in G[V\S] and the graph G[S] is connected. The st-connected vertex separator problem consists in finding such a subset with minimum cardinality. We propose three formulations for this problem and give some valid inequalities for the polyhedron associated with it. We also develop an efficient heuristic to solve this problem.
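For readers who want to see the shape of such a model, here is a textbook 0-1 formulation of the VSP written with PuLP; it is a sketch consistent with the definition above, and the thesis's own formulation and valid inequalities may differ. A and B are encoded by binary indicators, C is whatever remains, so minimizing |C| is the same as maximizing |A| + |B|.

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary, PULP_CBC_CMD

def vsp_ilp(vertices, edges, beta):
    """Solve a textbook VSP model: a[v]=1 if v is in A, b[v]=1 if v is in B."""
    a = {v: LpVariable(f"a_{v}", cat=LpBinary) for v in vertices}
    b = {v: LpVariable(f"b_{v}", cat=LpBinary) for v in vertices}
    prob = LpProblem("VSP", LpMaximize)
    prob += lpSum(a[v] + b[v] for v in vertices)   # maximize |A|+|B| = minimize |C|
    for v in vertices:
        prob += a[v] + b[v] <= 1                   # A and B are disjoint
    for u, v in edges:
        prob += a[u] + b[v] <= 1                   # no edge between A and B
        prob += a[v] + b[u] <= 1
    prob += lpSum(a.values()) <= beta              # max{|A|,|B|} <= beta(n)
    prob += lpSum(b.values()) <= beta
    prob += lpSum(a.values()) >= 1                 # keep A and B nonempty
    prob += lpSum(b.values()) >= 1
    prob.solve(PULP_CBC_CMD(msg=False))
    return ({v for v in vertices if a[v].value() > 0.5},
            {v for v in vertices if b[v].value() > 0.5})

# e.g. on a path 0-1-2-3-4 with beta=2, vertex 2 ends up in the separator C:
print(vsp_ilp(range(5), [(0, 1), (1, 2), (2, 3), (3, 4)], beta=2))
```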
APA, Harvard, Vancouver, ISO, and other styles
7

Berger, Karl-Eduard. "Placement de graphes de tâches de grande taille sur architectures massivement multicoeurs." Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLV026/document.

Full text of the source
Abstract:
This Ph.D. thesis is devoted to the study of the mapping problem related to massively parallel embedded architectures. The problem arises from industrial needs such as energy savings and performance demands for synchronous dataflow applications. It has to be solved considering three criteria: heuristics should be able to deal with applications of various sizes, they must meet the processor capacity constraints, and they have to take the target architecture topologies into account. In this thesis, tasks are organized in communication networks, modeled as graphs. To evaluate the efficiency of the developed heuristics, the mappings they produce are compared to a random mapping; this comparison is used as an evaluation metric throughout the thesis, motivated by the fact that no comparable heuristics could be found in the literature at the time of writing. To address the problem, two heuristics are proposed. They solve a dataflow process network mapping problem, in which a network of communicating tasks is placed onto a set of processors with limited resource capacities while minimizing the overall communication bandwidth between processors. They are applied to task graphs where the weights of tasks and edges are unitary. The first heuristic, Task-wise Placement, places tasks one after another using a notion of task affinity. The second, Subgraph-wise Placement, gathers tasks into small groups and then places the groups on processors using a notion of affinity between groups and processors. These algorithms are tested on task graphs with grid or logic gate network topologies. The results are compared to an algorithm from the literature that maps task graphs of moderate size onto massively parallel architectures; in addition, the random-mapping metric is used to evaluate the results of both heuristics. Then, to address problems found in industrial cases, the application cases are widened to task graphs whose task and edge weights resemble industrial values. A progressive construction heuristic named Regret Based Approach, based on game theory, is proposed. This heuristic maps tasks one after another: the cost of mapping each task, given the tasks already mapped, is computed, and task selection is based on a notion of regret from game theory; the task we would regret most not having placed is identified and placed first. To check the robustness of the algorithm, many types of task graphs (grids, logic gate networks, series-parallel, random, sparse matrices) of various sizes are generated, with task and edge weights drawn from a bimodal distribution parameterized to resemble industrial applications. The results are compared to Task-wise Placement, specially adapted to non-unitary values, and are also evaluated using the random-mapping metric.
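The regret rule is easy to state in code. Below is a minimal construction loop, assuming a black-box cost(task, processor, partial_placement) function; in the thesis this cost accounts for communication with already-placed neighbor tasks and for processor capacities, which are abstracted away here.

```python
def regret_placement(tasks, processors, cost):
    """Regret-based construction in the spirit of the Regret Based Approach:
    for each unplaced task compute its best and second-best placement costs,
    then commit the task whose regret (second_best - best) is largest."""
    placement = {}
    unplaced = set(tasks)
    while unplaced:
        best_task, best_proc, best_regret = None, None, -1.0
        for t in unplaced:
            ranked = sorted(((cost(t, p, placement), p) for p in processors),
                            key=lambda x: x[0])
            regret = (ranked[1][0] - ranked[0][0]) if len(ranked) > 1 else float("inf")
            if regret > best_regret:
                best_task, best_proc, best_regret = t, ranked[0][1], regret
        placement[best_task] = best_proc        # place the most-regretted task
        unplaced.remove(best_task)
    return placement
```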
APA, Harvard, Vancouver, ISO, and other styles
8

Lundquist, Josefin, and Linnéa O'Hara. "An optimization model using the Assignment Problem to manage the location of parts : Master Thesis at the engine assembly at Scania CV AB." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-137825.

Full text of the source
Abstract:
A key challenge for manufacturing companies is to store parts in an efficient way at the lowest cost possible. As the demand for differentiated products increases, together with the fact that old products are not phased out at the same pace, the need to use storage space as dynamically as possible becomes vital. Scania's engine assembly manufactures engines for various automotive vehicles and marine & industry applications. The variation in engine range in Scania's offering leads to the need of holding a vast, and increasing, assortment of parts in production. As a consequence, this puts more pressure on the logistics and furnishing within the engine assembly. This master thesis aims to facilitate the process of assigning parts' storage locations in the most profitable manner through an optimization model, the Location Model, in Excel VBA. Together with the model, suggestions for work methods are presented. By implementing the Location Model at Scania's engine assembly, 4.98% of all kept parts are recommended location changes that result in cost savings for the chosen 30-day period. These location changes yield a cost saving of 6.73% of the total logistics costs for the same time period.
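Since the Location Model is built on the Assignment Problem, its core can be reproduced in a few lines with an off-the-shelf solver. The sketch below uses SciPy's Hungarian-method implementation on a random cost matrix; the real cost coefficients (picking distance, handling, and so on) come from Scania's logistics data, and the thesis implements the model in Excel VBA rather than Python.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
cost = rng.uniform(1.0, 10.0, size=(5, 5))    # cost[i, j]: part i at location j

rows, cols = linear_sum_assignment(cost)      # optimal one-to-one assignment
for part, loc in zip(rows, cols):
    print(f"part {part} -> location {loc} (cost {cost[part, loc]:.2f})")
print("total cost:", cost[rows, cols].sum())
```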
APA, Harvard, Vancouver, ISO, and other styles
9

Combier, Camille. "Mesures de similarité pour cartes généralisées." Phd thesis, Université Claude Bernard - Lyon I, 2012. http://tel.archives-ouvertes.fr/tel-00995382.

Full text of the source
Abstract:
A generalized map is a topological model that implicitly represents a set of cells (vertices, edges, faces, volumes, ...) together with all their incidence and adjacency relations by means of darts and involutions. Generalized maps are notably used to model images and 3D objects. To date, few tools exist for analyzing and comparing generalized maps; our objective is to define a set of tools for their comparison. We first define a similarity measure based on the size of the common part of two generalized maps, called the largest common submap. We define two types of submaps, partial and induced: an induced submap must preserve all involutions, whereas a partial submap allows some involutions not to be preserved, by analogy with partial subgraphs in which not all edges need be present. We then define a set of dart-modification and sewing operations for generalized maps, together with an edit distance. The edit distance equals the minimal cost over all sequences of operations transforming one generalized map into another. This distance can take labels into account through a substitution operation; labels are attached to darts and add information to the generalized maps. We then show that, for certain costs, our edit distance can be computed directly from the largest common submap. Computing the edit distance is an NP-hard problem. We propose a greedy algorithm that computes, in polynomial time, an approximation of our map edit distance. We propose a set of heuristics based on descriptors of the neighborhoods of the darts of the generalized map to guide the greedy algorithm, and we evaluate these heuristics on randomly generated test sets for which a bound on the distance is known. We suggest directions for using our similarity measures in image and mesh analysis. We compare our generalized map edit distance with the graph edit distance, often used in structural pattern recognition. We also define a set of heuristics taking into account the labels of generalized maps modeling images and meshes. We highlight the qualitative aspect of our matching, which makes it possible to match zones of an image and points of a mesh.
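The greedy approximation admits a short generic sketch: score candidate dart pairs by the similarity of their neighborhood descriptors and commit the best compatible pairs first. The descriptor and sim functions below are stand-ins for the heuristics of the thesis, which are not specified at this level of detail in the abstract.

```python
def greedy_map_matching(darts1, darts2, descriptor, sim):
    """Greedy polynomial-time approximation of an edit-distance matching:
    rank all dart pairs by descriptor similarity, then commit pairs in that
    order as long as both darts are still unmatched."""
    pairs = sorted(
        ((sim(descriptor(a), descriptor(b)), a, b) for a in darts1 for b in darts2),
        key=lambda t: -t[0],
    )
    matched1, matched2, mapping = set(), set(), {}
    for s, a, b in pairs:
        if a not in matched1 and b not in matched2:
            mapping[a] = b
            matched1.add(a)
            matched2.add(b)
    return mapping   # unmatched darts correspond to insertions/deletions
```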
APA, Harvard, Vancouver, ISO, and other styles
10

De, Souza Bento Da Silva Pedro Paulo. "On the mapping of distributed applications onto multiple Clouds." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEN089/document.

Full text of the source
Abstract:
The Cloud has become a very popular platform for deploying distributed applications; today, virtually any credit card holder can have access to Cloud services. There are many different ways of offering Cloud services to customers. In this thesis we especially focus on Infrastructure as a Service (IaaS), a model that usually offers virtualized computing resources to customers in the form of virtual machines (VMs). Thanks to its attractive pay-as-you-use cost model, it is easier for customers, especially small and medium companies, to outsource hosting infrastructures and benefit from savings related to upfront investments and maintenance costs. Customers also gain access to features such as scalability, availability, and reliability, which were previously almost exclusive to large companies. To deploy a distributed application, a Cloud customer must first consider the mapping between her application (or its parts) and the target infrastructure. She needs to take cost, resource, and communication constraints into consideration to select the most suitable set of VMs from private and public Cloud providers. However, defining a mapping manually may be a challenge in large-scale or time-constrained scenarios, since the number of possible configurations explodes. Furthermore, when automating this process, scalability issues must be taken into account, given that this mapping problem is a generalization of the graph homomorphism problem, which is NP-complete. In this thesis we address the problem of calculating initial and reconfiguration placements for distributed applications over possibly multiple Clouds. Our objective is to minimize renting and migration costs while satisfying the applications' resource and communication constraints. We concentrate on the mapping between applications and Cloud infrastructure. Using an incremental approach, we split the problem into three parts and propose efficient heuristics that can compute good-quality placements very quickly for both small and large scenarios. These heuristics are based on graph partition and vector packing heuristics and have been extensively evaluated against state-of-the-art approaches such as MIP solvers and meta-heuristics. We show through simulations that the proposed heuristics manage to compute solutions in a few seconds that would take many hours or days for other approaches to compute.
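One of the classical ingredients the abstract mentions, vector packing, can be sketched briefly. The first-fit-decreasing rule below packs VMs (multi-dimensional demand vectors) into identical hosts; the thesis's heuristics combine this kind of packing with graph partitioning to also account for communication, which this fragment ignores.

```python
def ffd_vector_packing(items, bin_capacity):
    """First-fit-decreasing vector packing: sort VMs by the sum of their
    normalized resource demands, then place each in the first host with room.
    Returns (assignment, number_of_hosts_opened)."""
    dims = len(bin_capacity)
    norm = lambda it: sum(it[d] / bin_capacity[d] for d in range(dims))
    bins = []                                    # residual capacity per open host
    assignment = []
    for idx, item in sorted(enumerate(items), key=lambda p: -norm(p[1])):
        for b, residual in enumerate(bins):
            if all(item[d] <= residual[d] for d in range(dims)):
                bins[b] = [residual[d] - item[d] for d in range(dims)]
                assignment.append((idx, b))
                break
        else:                                    # no host fits: open a new one
            bins.append([bin_capacity[d] - item[d] for d in range(dims)])
            assignment.append((idx, len(bins) - 1))
    return assignment, len(bins)

# e.g. VMs demanding (cpu, ram) fractions packed into hosts of capacity (1, 1):
print(ffd_vector_packing([(0.5, 0.2), (0.4, 0.9), (0.3, 0.3)], (1.0, 1.0)))
```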
APA, Harvard, Vancouver, ISO, and other styles
11

Renaud-Goud, Paul. "Energy-aware scheduling : complexity and algorithms." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2012. http://tel.archives-ouvertes.fr/tel-00744247.

Full text of the source
Abstract:
In this thesis we have tackled a few scheduling problems under energy constraints, since the energy issue is becoming crucial for both economical and environmental reasons. In the first chapter, we exhibit tight bounds on the energy metric of a classical algorithm that minimizes the makespan of independent tasks. In the second chapter, we schedule several independent but concurrent pipelined applications and address problems combining multiple criteria: period, latency and energy. We perform an exhaustive complexity study and describe the performance of new heuristics. In the third chapter, we study the replica placement problem in a tree network, trying to minimize the energy consumption in a dynamic framework. After a complexity study, we confirm the quality of our heuristics through a complete set of simulations. In the fourth chapter, we come back to streaming applications, this time in the form of series-parallel graphs, and try to map them onto a chip multiprocessor. The design of a polynomial algorithm for a simple variant allows us to derive heuristics for the most general problem, whose NP-completeness has been proven. In the fifth chapter, we study energy bounds of different routing policies in chip multiprocessors, compared to the classical XY routing, and develop new routing heuristics. In the last chapter, we compare the performance of different algorithms from the literature that tackle the problem of mapping DAG applications to minimize energy consumption.
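As a taste of the first chapter's setting, here is the classical greedy list-scheduling algorithm for independent tasks, the kind of makespan-oriented algorithm whose energy behavior is studied there. The cubic speed-to-power model in the comments is the usual assumption in this literature, not a detail taken from the thesis.

```python
import heapq

def list_schedule(jobs, m):
    """Greedy list scheduling of independent task weights on m processors:
    each job goes to the currently least-loaded machine (LPT order)."""
    loads = [(0.0, i) for i in range(m)]
    heapq.heapify(loads)
    for w in sorted(jobs, reverse=True):
        load, i = heapq.heappop(loads)
        heapq.heappush(loads, (load + w, i))
    makespan = max(load for load, _ in loads)
    # usual model: a processor at speed s for time t consumes about s^3 * t
    speed = 1.0
    energy = speed ** 3 * sum(load for load, _ in loads)
    return makespan, energy

print(list_schedule([3, 3, 2, 2, 2], m=2))
```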
APA, Harvard, Vancouver, ISO, and other styles
12

Tchuani, tchakonte Diane. "Minimisation de la consommation d’énergie des réseaux de capteurs dans les applications de couverture de cibles." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT043.

Full text of the source
Abstract:
Wireless Sensor Networks (WSNs) consist of tiny sensor nodes with embedded microcontrollers, low-power radios, battery cells and sensors, used to monitor environmental conditions such as temperature, pressure, humidity, and vibration. Today, these networks are used in a wide range of military, health, domestic, urban, industrial and environmental applications. Target coverage applications are those where several points of interest, called targets, must be continuously monitored by sensor nodes. In most target coverage applications, sensor nodes have a limited amount of energy, and it is therefore critical to manage their energy consumption efficiently in order to extend the network lifetime. A common approach to this problem is to alternate the operation of sensor nodes between active and sleep modes. Scheduling appropriate subsets of sleep/active sensor nodes so as to maximize the network lifetime is an NP-hard problem called the Maximum Lifetime Coverage Problem (MLCP). In this thesis, we aim to propose new heuristics for the MLCP while considering more realistic assumptions on the lifetime and energy consumption of sensor nodes. First, we propose two greedy heuristics under the assumption that the sensor nodes do not necessarily have the same lifetime. The first heuristic is based on an adaptive method, while the second uses the idea of a blacklist, which optimizes the management of the least-covered targets, called critical targets. Second, considering that the energy consumed by sensor nodes in sleep mode is not negligible, we propose a third greedy heuristic that takes the remaining energy of the sensor nodes into account when choosing the nodes to activate. We then propose a recurrence-based approach for regular networks and study a family of hard instances of the MLCP, namely the sub-family of regular networks formed by odd-sized rings; for this subclass we propose an analytical approach that yields effective solutions whose optimality we conjecture. Finally, we develop a system for air pollution monitoring and fire detection based on a wireless sensor network, and we evaluate the gain in network lifetime when an algorithm for the MLCP is integrated into such a system.
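A minimal sketch of one activation round in the spirit of the greedy heuristics described above. The scoring rule (prefer sensors that cover many still-uncovered targets, breaking ties by residual energy) is an illustrative assumption; the thesis's variants refine exactly this choice, for instance by blacklisting the few sensors that watch critical targets.

```python
def greedy_cover(sensors, targets, watches, energy):
    """Build one cover: a set of sensors that together watch every target.
    `watches[s]` is the set of targets sensor s can watch; `energy[s]` is its
    residual energy. Returns None when no full cover is possible."""
    uncovered = set(targets)
    chosen = []
    while uncovered:
        candidates = [s for s in sensors
                      if s not in chosen and watches[s] & uncovered]
        if not candidates:
            return None
        s = max(candidates, key=lambda s: (len(watches[s] & uncovered), energy[s]))
        chosen.append(s)
        uncovered -= watches[s]
    return chosen
```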
APA, Harvard, Vancouver, ISO, and other styles
13

Yang, Tzu-Hsuan, and 楊子萱. "Heuristic and iterative greedy algorithms for the total weighted completion time multi-facility order scheduling with release times." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/mn56d5.

Full text of the source
Abstract:
Master's thesis
Feng Chia University
Master's Program in Statistics and Actuarial Science, Department of Statistics
105
The customer order scheduling problem has received growing attention in the research community. However, as research on customer order scheduling problems with ready times is relatively limited, this thesis studies a multi-facility order scheduling problem with ready times where the criterion is to minimize the total weighted completion time of all the given orders. To solve this intractable problem, we first derive several dominance rules and two lower bounds used in a branch-and-bound method to find an exact solution. We then modify five existing heuristics and adopt an iterative greedy (IG) algorithm to find a near-optimal solution. Specifically, we use both a fixed proportion of destruction and a linear function of destruction in the IG algorithm. To obtain good-quality solutions, all the proposed heuristics are improved by pairwise improvement, and we perform one-way analysis of variance and Fisher's least significant difference tests to evaluate and compare the performances of all the proposed algorithms.
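The IG loop has a standard skeleton, sketched below with a fixed destruction size d (the "fixed proportion of destruction" variant; the thesis also studies a linear function of destruction). Here twc is assumed to be a black-box evaluator of the total weighted completion time of an order sequence on the multi-facility system, release times included.

```python
import random

def iterated_greedy(orders, twc, d, iters=1000, seed=0):
    """Iterative greedy: destruction removes d orders at random, construction
    greedily reinserts each at its best position, and the candidate replaces
    the incumbent only if the total weighted completion time improves."""
    rnd = random.Random(seed)
    best = list(orders)
    for _ in range(iters):
        seq = best[:]
        removed = [seq.pop(rnd.randrange(len(seq))) for _ in range(d)]
        for o in removed:                 # greedy best-position reinsertion
            pos = min(range(len(seq) + 1),
                      key=lambda i: twc(seq[:i] + [o] + seq[i:]))
            seq.insert(pos, o)
        if twc(seq) < twc(best):          # accept only strict improvements
            best = seq
    return best
```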
APA, Harvard, Vancouver, ISO, and other styles
14

Sahasrabudhe, Nachiket S. "Joint Congestion Control, Routing And Distributed Link Scheduling In Power Constrained Wireless Mesh Networks." Thesis, 2008. http://hdl.handle.net/2005/798.

Full text of the source
Abstract:
We study the problem of joint congestion control, routing and MAC-layer scheduling in multi-hop wireless mesh networks, where the nodes in the network are subject to energy expenditure rate constraints. As the wireless medium does not allow all links to be active all the time, only a subset of the given links can be active simultaneously. We model the inter-link interference using the link contention graph. All the nodes in the network are power-constrained, and we model this constraint using an energy expenditure rate matrix. We then formulate the problem as a network utility maximization (NUM) problem and note that it is a convex optimization problem with affine constraints. We apply duality theory and decompose the problem into two sub-problems: a network-layer congestion control and routing problem, and a MAC-layer scheduling problem. The source adjusts its rate based on the cost of the least-cost path to the destination, where the cost of the path includes not only the prices of its links but also the prices associated with the nodes on the path. The MAC-layer scheduling of the links is carried out based on the link prices: the optimal scheduler selects the set of non-interfering links for which the sum of link prices is maximum. We study the effects of the nodes' energy expenditure rate constraints on the maximum possible network utility. It turns out that the dominant of the two constraints, the link capacity constraint and the node energy expenditure rate constraint, affects the network utility most. We also note that the energy expenditure rate constraints do not affect the nature of the optimal link scheduling problem. Following this observation, we study the problem of distributed link scheduling. Optimal scheduling requires selecting an independent set of maximum aggregate price, but this problem is known to be NP-hard. We first show that, as long as a scheduling policy selects a set of non-interfering links, it cannot stray unboundedly far from the optimal solution of the network utility maximization problem. We then evaluate a simple greedy scheduling algorithm. Analytical bounds on performance are provided, and simulations indicate that the greedy heuristic performs well in practice.
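The greedy scheduler evaluated here can be sketched compactly: since picking the independent set of maximum aggregate price is NP-hard, scan links in decreasing price order and activate each link that does not conflict with those already activated. In the sketch below, conflicts encodes the link contention graph as a symmetric adjacency map; this illustrates the greedy policy, not the optimal scheduler.

```python
def greedy_link_schedule(links, price, conflicts):
    """Greedy stand-in for the max-aggregate-price independent set: activate
    links in decreasing price order, skipping any link that interferes (per
    the contention graph) with an already-activated link."""
    active = []
    for l in sorted(links, key=lambda l: -price[l]):
        if all(l not in conflicts.get(a, set()) for a in active):
            active.append(l)
    return active

# e.g. three links where 'a' conflicts with 'b' but not with 'c':
print(greedy_link_schedule(
    ["a", "b", "c"],
    price={"a": 3.0, "b": 2.5, "c": 1.0},
    conflicts={"a": {"b"}, "b": {"a"}},
))   # -> ['a', 'c']
```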
APA, Harvard, Vancouver, ISO, and other styles
15

Vashistha, Sumit. "Energy Efficient Scheduling Of Sensing Activity In Wireless Sensor Networks Using Information Coverage." Thesis, 2007. http://hdl.handle.net/2005/598.

Full text of the source
Abstract:
Network lifetime is a key issue in wireless sensor networks, where sensor nodes, typically distributed in remote or hostile sensing areas, are powered by finite-energy batteries that are not easily replaced or recharged. Depletion of these batteries can result in a change in network topology or in the end of network life itself; hence, prolonging the life of wireless sensor networks is important. Energy in wireless sensor nodes is consumed by (i) sensing functions, (ii) processing/computing functions, and (iii) communication functions; for example, the energy consumed by the transmit and receive electronics constitutes the energy expended on communication. Our focus in this thesis is on the efficient use of energy for sensing. In particular, we are concerned with energy-efficient algorithms for scheduling the sensing activity of sensor nodes, i.e., deciding when to activate a sensor node for sensing (active mode) and when to keep it idle (sleep mode). The novel approach we adopt is an information coverage approach, rather than the widely adopted physical coverage approach. In the physical coverage approach, a point is said to be covered by a sensor node if it lies within the physical coverage range (or sensing radius) of that sensor, which is the maximum distance up to which the sensor can sense with acceptable accuracy. Information coverage, on the other hand, exploits cooperation among multiple sensors to accurately sense a point even if that point falls outside the physical coverage range of all the sensors. In this thesis, we address the question of how to schedule the activity of the various sensor nodes in the network to enhance network lifetime using information coverage. In the first part of the thesis, we are concerned with scheduling sensor nodes to sense point targets using information coverage; examples of a point target are the temperature or radiation level at a source or point that needs to be monitored. Defining a set of sensor nodes that can collectively sense a point target accurately as an information cover, we propose an algorithm to obtain Disjoint Sets of Information Covers (DSIC) that can sense multiple point targets in a given sensing area. Being disjoint, the resulting information covers allow a simple round-robin schedule of sensor activation (i.e., activate the covers sequentially). We show that the covers obtained using the proposed DSIC algorithm achieve longer network life than the covers obtained using an Exhaustive-Greedy-Equalized Heuristic (EGEH) proposed recently in the literature, and we present a detailed complexity comparison between the DSIC and EGEH algorithms. In the second part of the thesis, we extend the point-target sensing problem to a full-area sensing problem, where we are concerned with energy-efficient area monitoring using information coverage. We refer to any set of sensors that can collectively sense all points in the entire area-to-monitor as a full-area information cover. We first propose a low-complexity heuristic algorithm to obtain full-area information covers. Using these covers, we then obtain the optimum schedule for activating the sensing activity of the various sensors that maximizes the sensing lifetime.
The optimum schedules obtained using the proposed algorithm are shown to achieve significantly longer sensing lifetimes than those achieved using physical coverage. Relaxing the full-area coverage requirement to a partial-area coverage requirement (e.g., treating 95% area coverage as adequate instead of 100%) further enhances the lifetime. The algorithms proposed for the point-target and full-area sensing problems in the first two parts are essentially centralized; decentralized algorithms are preferred in large networks. Accordingly, in the last part of the thesis, we propose a low-complexity, distributed sensor scheduling algorithm for full-area sensing using information coverage. This distributed algorithm is also shown to result in significantly longer sensing lifetimes than those achieved using physical coverage.
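The DSIC idea (disjoint covers activated round-robin) reduces to a nested greedy, sketched below. Here senses(S, t) is an assumed oracle saying whether the sensor subset S collectively senses target t; with information coverage this can be true even when t lies outside every individual sensing radius, which is exactly what distinguishes this setting from physical coverage. This is a sketch of the idea, not the thesis's exact construction.

```python
def disjoint_information_covers(sensors, targets, senses):
    """Greedily carve the sensor pool into disjoint full covers: build one
    cover at a time with a set-cover-style greedy, remove its sensors from
    the pool, and repeat until no further full cover can be formed."""
    pool, covers = set(sensors), []
    while targets:
        cover, uncovered = set(), set(targets)
        while uncovered:
            gain = lambda s: sum(1 for t in uncovered if senses(cover | {s}, t))
            s = max(pool - cover, key=gain, default=None)
            if s is None or gain(s) == 0:
                return covers              # pool exhausted: no more full covers
            cover.add(s)
            uncovered = {t for t in uncovered if not senses(cover, t)}
        covers.append(cover)
        pool -= cover                      # keep covers disjoint
```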
APA, Harvard, Vancouver, ISO, and other styles