
Theses on the topic "Competitive algorithms"


Consult the top 50 theses for your research on the topic "Competitive algorithms".


You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Li, Rongbin y 李榕滨. "New competitive algorithms for online job scheduling". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/197555.

Full text
Abstract
Job scheduling, which greatly impacts system performance, is a fundamental problem in computer science. In this thesis, we study three kinds of scheduling problems: deadline scheduling, due date scheduling, and flow time scheduling. Traditionally, the major concern for scheduling problems is the system performance, i.e., the "Quality of Service" (QoS). Different scheduling problems use different QoS measurements. For deadline scheduling, the most common QoS to optimize is the throughput; for due date scheduling, it is the total quoted lead time; and for flow time scheduling, it is the total (weighted) flow time. Recently, energy efficiency has become more and more important. Many modern processors adopt technologies like dynamic speed scaling and sleep management to reduce energy usage, and much work has been done on energy-efficient scheduling. In this thesis, we study this topic for all three kinds of scheduling mentioned above. Meanwhile, we also revisit the traditional flow time scheduling problem to optimize the QoS, but in a more realistic model that makes the problem much more challenging. Below is a summary of the problems studied in the thesis. First, we consider the tradeoff between energy and throughput for deadline scheduling. Specifically, each job is associated with a value (or importance) and a deadline. A scheduling algorithm is allowed to discard some of the jobs, and the objective is to minimize total energy usage plus the total value of discarded jobs. When the processor's maximum speed is unbounded, we propose an O(1)-competitive algorithm. When the processor's maximum speed is bounded, we show a strong lower bound and give an algorithm with a competitive ratio close to that lower bound. Second, we study energy-efficient due date scheduling. Jobs arrive online with different sizes and weights. An algorithm needs to assign a due date to each job once it arrives, and complete the job by the due date.
The quoted lead time of a job equals its due date minus its arrival time, multiplied by its weight. We propose a competitive algorithm for minimizing the sum of the total quoted lead time and energy usage. Next, we consider flow time scheduling with power management on multiple machines. Jobs with arbitrary sizes and weights arrive online. Each machine consumes a different amount of energy when processing a job, idling, or sleeping. A scheduler has to maintain a good balance of the states of the machines to avoid energy wastage while guaranteeing high QoS. Our result is an O(1)-competitive algorithm to minimize total weighted flow time plus energy usage. Finally, we consider traditional preemptive scheduling to minimize total flow time. Previous theoretical results often assume preemption is free, which is not true for most systems. We investigate the complexity of the problem when a processor has to perform a certain amount of overhead before it resumes the execution of a previously preempted job. We first show an Ω(n^(1/4)) lower bound, and then propose a (1+ε)-speed (1+1/ε)-competitive algorithm in the resource augmentation model.
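The speed-scaling model underlying these results can be made concrete with a small sketch. The snippet below is illustrative only (the power function P(s) = s**alpha with alpha = 3 and the job numbers are assumptions, not taken from the thesis); it shows why, by convexity, running a fixed amount of work at constant speed over a fixed window minimizes energy:

```python
# Standard speed-scaling power model assumed in this line of work:
# running at speed s consumes power P(s) = s**alpha (commonly alpha ~ 3).
def energy(profile, alpha=3.0):
    """Energy of a piecewise-constant speed profile [(speed, duration), ...]."""
    return sum((s ** alpha) * d for s, d in profile)

def work(profile):
    """Total work processed by the profile."""
    return sum(s * d for s, d in profile)

# One job: 8 units of work in a window of length 4.
even = [(2.0, 4.0)]                 # constant speed 2
uneven = [(3.0, 2.0), (1.0, 2.0)]   # same total work, fluctuating speed

assert work(even) == work(uneven) == 8.0
# Convexity of s**alpha makes the constant-speed profile strictly cheaper.
print(energy(even), energy(uneven))  # 32.0 56.0
```

The same convexity argument is what makes slowing down (up to a deadline) the basic energy-saving move in all of the speed-scaling problems above.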
published_or_final_version
Computer Science
Doctoral
Doctor of Philosophy
2

Wong, Chiu Wai M. Eng Massachusetts Institute of Technology. "Competitive algorithms for online matching and vertex cover problems". Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85521.

Full text
Abstract
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 73-75).
The past decade has witnessed an explosion of research on the online bipartite matching problem. Surprisingly, its dual problem, online bipartite vertex cover, has never been explicitly studied before. One of the motivations for studying this problem is that it significantly generalizes the classical ski rental problem. An instance of such problems specifies a bipartite graph G = (L, R, E) whose left vertices L are offline and whose right vertices arrive online one at a time. An algorithm must maintain a valid vertex cover from which no vertex can ever be removed. The objective is to minimize the size of the cover. In this thesis, we introduce a charging-based algorithmic framework for this problem as well as its generalizations. One immediate outcome is a simple analysis of an optimal 1/(1 - 1/e)-competitive algorithm for online bipartite vertex cover. By extending the charging-based analysis in various nontrivial ways, we also obtain optimal 1/(1 - 1/e)-competitive algorithms for the edge-weighted and submodular versions of online bipartite vertex cover, which all match the best performance of ski rental. As an application, we show that by analyzing our algorithm in the primal-dual framework, our result on submodular vertex cover implies an optimal (1 - 1/e)-competitive algorithm for its dual, online bipartite submodular matching. This problem is a generalization of online bipartite matching and may have applications in display ad allocation. We also consider the more general scenario where all the vertices are online and the graph is not necessarily bipartite, known as the online fractional vertex cover and matching problems. Our contribution in this direction is a primal-dual 1.901-competitive (or 1/1.901 ≈ 0.526) algorithm for these problems. Previously, it was only known that they admit the simple, well-known 2-competitive (or 1/2) greedy algorithm. Our result is the first successful attempt to beat the greedy algorithm for these two problems.
Moreover, our algorithm for the online matching problem significantly generalizes the traditional online bipartite matching problem, where vertices from only one side of the bipartite graph arrive online. In particular, our algorithm improves upon the result for the fractional version of the online edge-selection problem in Blum et al. (JACM '06). Finally, on the hardness side, we show that no randomized online algorithm can achieve a competitive ratio better than 1.753 and 0.625 for the online fractional vertex cover problem and the online fractional matching problem, respectively, even for bipartite graphs.
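Since the abstract anchors these results to the classical ski rental problem, a minimal sketch of ski rental's break-even rule may help. The cost model (rent 1 per day, buy for B) is the textbook one; the code is illustrative and not from the thesis:

```python
def breakeven_cost(season_days, B):
    """Cost of the classic break-even rule: rent for B-1 days, buy on day B."""
    if season_days < B:
        return season_days      # season ended while we were still renting
    return (B - 1) + B          # B-1 days of rent plus the purchase

def opt_cost(season_days, B):
    """Offline optimum: buy immediately iff the season lasts at least B days."""
    return min(season_days, B)

B = 10
worst = max(breakeven_cost(d, B) / opt_cost(d, B) for d in range(1, 200))
print(worst)  # 1.9, i.e. 2 - 1/B
```

The worst case is a season of exactly B days, giving the familiar (2 - 1/B)-competitive guarantee that the 1/(1 - 1/e) randomized bound improves upon.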
by Chiu Wai Wong.
M. Eng.
3

Chan, Sze-hang y 陳思行. "Competitive online job scheduling algorithms under different energy management models". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/206690.

Full text
Abstract
Online flow-time scheduling is a fundamental problem in computer science and has been extensively studied for years. It is about how to design a scheduler to serve computer jobs with unpredictable arrival times and varying sizes and priorities so as to minimize the total flow time (better understood as response time) of jobs. It has many applications, most notably in the operation of server farms. As energy has become an important issue, the design of a scheduler also has to take power management into consideration, for example, how to scale the speed of the processors dynamically. The objectives pull in opposite directions, as one would prefer lower processor speed to save energy, yet a good quality of service must be retained. In this thesis, I study a few scheduling problems for energy and flow time in depth and give new algorithms to tackle them. The competitiveness of our algorithms is guaranteed by worst-case mathematical analysis against the best possible or hypothetical solutions. In the speed scaling model, the power of a processor increases with its speed according to a certain function (e.g., a cubic function of speed). Among all online scheduling problems with speed scaling, the nonclairvoyant setting (in which the size of a job is not known during its execution) with arbitrary priorities is perhaps the most challenging. This thesis gives the first competitive algorithm, called WLAPS, for this setting. In reality, it is not uncommon that during the peak-load period some (low-priority) users have their jobs rejected by the servers. This motivated me to study more complicated scheduling algorithms that can strike a good balance among speed scaling, flow time, and rejection penalty. Two new algorithms, UPUW and HDFAC, for different models of rejection penalty have been proposed and analyzed. Last, but perhaps most interesting, we study power management in a large server farm environment in which the primary energy-saving mechanism is to put some processors to sleep.
Two new algorithms, POOL and SATA, have been designed to tackle jobs that cannot and can migrate among the processors, respectively. They are integrated algorithms that consider speed scaling, job scheduling, and processor sleep management together to optimize energy usage and flow time simultaneously. These algorithms are again proven mathematically to be competitive even in the worst case.
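Flow time itself is easy to compute. The sketch below simulates preemptive SRPT (shortest remaining processing time) on one unit-speed machine in discrete time; SRPT is a standard baseline for flow-time scheduling and is used here only as an illustration, not as one of the thesis's algorithms:

```python
def total_flow_time_srpt(jobs):
    """jobs: list of (release, size) pairs; preemptive SRPT, one unit-speed machine."""
    remaining = {i: size for i, (release, size) in enumerate(jobs)}
    finish = {}
    t = 0
    while remaining:
        avail = [i for i in remaining if jobs[i][0] <= t]  # released by time t
        if not avail:
            t += 1
            continue
        i = min(avail, key=lambda j: remaining[j])  # shortest remaining first
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            del remaining[i]
            finish[i] = t
    # flow time = completion time minus release time, summed over jobs
    return sum(finish[i] - jobs[i][0] for i in range(len(jobs)))

print(total_flow_time_srpt([(0, 3), (1, 1)]))  # 5
```

In the example, the short job that arrives at time 1 preempts the long one, so the flow times are 1 and 4.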
published_or_final_version
Computer Science
Doctoral
Doctor of Philosophy
4

McNeill, Dean K. "Adaptive visual representations for autonomous mobile robots using competitive learning algorithms". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ35045.pdf.

Full text
5

Zhang, Kening. "A COMPETITIVE RECONFIGURATION APPROACH TO AUTONOMOUS FAULT HANDLING USING GENETIC ALGORITHMS". Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2319.

Full text
Abstract
In this dissertation, a novel self-repair approach based on Consensus Based Evaluation (CBE) for autonomous repair of SRAM-based Field Programmable Gate Arrays (FPGAs) is developed, evaluated, and refined. An initial population of functionally identical (same input-output behavior), yet physically distinct (alternative design or place-and-route realization) FPGA configurations is produced at design time. During run-time, the CBE approach ranks these alternative configurations after evaluating their discrepancy relative to the consensus formed by the population. Through runtime competition, faults in the logical resources become occluded from the visibility of subsequent FPGA operations. Meanwhile, offspring formed through crossover and mutation of faulty and viable configurations are selected at a controlled re-introduction rate for evaluation and refurbishment. Refurbishments are evolved in-situ, with online real-time input-based performance evaluation, enhancing system availability and sustainability, creating an Organic Embedded System (OES). A fault tolerance model called N Modular Redundancy with Standby (NMRSB) is developed which combines the two popular fault tolerance techniques of NMR and Standby fault tolerance in order to facilitate the CBE approach. This dissertation develops two instances of the NMRSB system: Triple Modular Redundancy with Standby (TMRSB) and Duplex with Standby (DSB). A hypothetical Xilinx Virtex-II Pro FPGA model demonstrates their viability for various applications including a 3-bit x 3-bit multiplier and the MCNC91 benchmark circuits. Experiments conducted on the model evaluate the performance of three new genetic operators and demonstrate progress towards a completely self-contained single-chip implementation so that the FPGA can refurbish itself without requiring a PC host to execute the Genetic Algorithm.
This dissertation presents results from the simulations of multiple applications with a CBE model implemented in the C++ programming language. Starting with an initial population of 20 and 30 viable configurations for TMRSB and DSB respectively, a single stuck-at fault is introduced in the logic resources. Fault refurbishment experiments are conducted under supervision of CBE using a fitness state evaluation function based on competing outputs, fitness adjustment, and different threshold levels. The device remains online throughout the process, by which a complete repair is realized with Hamming Distance and Bitweight voting schemes. The results indicate that a Hamming Distance TMRSB approach can prevent the most pervasive fault impacts and realize complete refurbishment. Experimental results also show that the Autonomic Layer demonstrates 100% faulty component isolation for both Functional Elements (FEs) and Autonomous Elements (AEs) with randomly injected single and multiple faults. Using logic circuits from the MCNC-91 benchmark set, availability during repair phases averaged 75.05%, 82.21%, and 65.21% for the z4ml, cm85a, and cm138a circuits respectively under the stated conditions. In addition to simulation, the proposed OES architecture synthesized from HDL was prototyped on a Xilinx Virtex II Pro FPGA device supporting partial reconfiguration to demonstrate the feasibility of intrinsic regeneration of the selected circuit.
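The evolutionary loop described above (selection of fit configurations, crossover, mutation, re-evaluation) can be sketched generically. The bitstring encoding and agreement-based fitness below are hypothetical stand-ins for FPGA configurations and consensus-based evaluation, not the dissertation's C++ model:

```python
import random

def evolve(target, pop_size=20, gens=60, mut_rate=0.2, seed=1):
    """Elitist GA over bitstrings; fitness = agreement with a reference output."""
    rng = random.Random(seed)
    n = len(target)
    fitness = lambda c: sum(a == b for a, b in zip(c, target))
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < mut_rate:
                child[rng.randrange(n)] ^= 1      # bit-flip mutation
            children.append(child)
        pop = survivors + children
    return max(fitness(c) for c in pop)

target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
best = evolve(target)
```

Because the survivors always include the current best individual, the best fitness is non-decreasing across generations, mirroring the controlled re-introduction of refurbished configurations.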
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Engineering PhD
6

Premkumar, Aravind Preshant. "Competitive Algorithms and System for Multi-Robot Exploration of Unknown Environments". Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78847.

Full text
Abstract
We present an algorithm to explore an orthogonal polygon using a team of p robots. This algorithm combines ideas from information-theoretic exploration algorithms and computational geometry based exploration algorithms. The algorithm is based on a single-robot polygon exploration algorithm and a tree exploration algorithm. We show that the exploration time of our algorithm is competitive (as a function of p) with respect to the offline optimal exploration algorithm. We discuss how this strategy can be adapted to real-world settings to deal with noisy sensors. In addition to theoretical analysis, we investigate the performance of our algorithm through simulations for multiple robots and experiments with a single robot.
Master of Science
7

Lorenz, Julian Michael. "Optimal trading algorithms : portfolio transactions, multiperiod portfolio selection, and competitive online search /". Zürich : ETH, 2008. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17746.

Full text
8

Liu, Ming. "Design and Evaluation of Algorithms for Online Machine Scheduling Problems". Phd thesis, Ecole Centrale Paris, 2009. http://tel.archives-ouvertes.fr/tel-00453316.

Full text
Abstract
In this thesis, we propose and evaluate algorithms for solving online scheduling problems. For decades, scheduling studies considered deterministic models in which all the information needed to define the problem is assumed to be known in advance. This assumption is generally unrealistic, which has motivated the study of online scheduling. In an online scheduling problem, an algorithm must make decisions without knowledge of the future. Competitive analysis is the standard method for evaluating the performance of such algorithms: the performance of an online algorithm is measured by its competitive ratio, the worst-case ratio between the value of the solution it produces and that of an optimal offline solution. We mainly consider two online paradigms: one where jobs are presented in a list, and one where jobs arrive over time. Based on these two paradigms, we consider different machine environments: a single machine, two identical parallel machines, two uniform parallel machines, batch machines, and open shops. For each problem, we prove a lower bound on the competitive ratio and propose online algorithms. We then evaluate the performance of these algorithms using competitive analysis. For some problems, we show that the proposed algorithms are optimal in the sense that their competitive ratio matches the lower bound.
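A classic instance of the list paradigm mentioned above is Graham's list scheduling, which places each arriving job on the least-loaded machine and is (2 - 1/m)-competitive for makespan. The sketch below is a textbook illustration, not one of the thesis's algorithms:

```python
from itertools import product

def greedy_makespan(jobs, m):
    """Graham's list scheduling: each arriving job goes to the least-loaded machine."""
    loads = [0] * m
    for j in jobs:
        loads[loads.index(min(loads))] += j
    return max(loads)

def opt_makespan(jobs, m):
    """Brute-force offline optimum (only viable for tiny instances)."""
    return min(max(sum(j for j, a in zip(jobs, assign) if a == i) for i in range(m))
               for assign in product(range(m), repeat=len(jobs)))

jobs, m = [1, 1, 2], 2
print(greedy_makespan(jobs, m), opt_makespan(jobs, m))  # 3 2
```

On this instance greedy reaches makespan 3 against an optimum of 2, exactly the 2 - 1/m = 1.5 worst case for two machines.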
9

Tsai, Carol Leanne. "Heuristic Algorithms for Agnostically Identifying the Globally Stable and Competitive Metastable Morphologies of Block Copolymer Melts". Thesis, University of California, Santa Barbara, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13423067.

Full text
Abstract

Block copolymers are composed of chemically distinct polymer chains that can be covalently linked in a variety of sequences and architectures. They are ubiquitous as ingredients of consumer products and also have applications in advanced plastics, drug delivery, advanced membranes, and next generation nano-lithographic patterning. The wide spectrum of possible block copolymer applications is a consequence of block copolymer self-assembly into periodic, meso-scale morphologies as a function of varying block composition and architecture in both melt and solution states, and the broad spectrum of physical properties that such mesophases afford.

Materials exploration and discovery has traditionally been pursued through an iterative process between experimental and theoretical/computational collaborations. This process is often implemented in a trial-and-error fashion, and from the computational perspective of generating phase diagrams, usually requires some existing knowledge about the competitive phases for a given system. Self-Consistent Field Theory (SCFT) simulations have proven to be both qualitatively and quantitatively accurate in the determination, or forward mapping, of block copolymer phases of a given system. However, it is possible to miss candidates. This is because SCFT simulations are highly dependent on their initial configurations, and the ability to map phase diagrams requires a priori knowledge of what the competing candidate morphologies are. The unguided search for the stable phase of a block copolymer of a given composition and architecture is a problem of global optimization. SCFT by itself is a local optimization method, so we can combine it with population-based heuristic algorithms geared at global optimization to facilitate forward mapping. In this dissertation, we discuss the development of two such methods: Genetic Algorithm + SCFT (GA-SCFT) and Particle Swarm Optimization + SCFT (PSO-SCFT). Both methods allow a population of configurations to explore the space associated with the numerous states accessible to a block copolymer of a given composition and architecture.

GA-SCFT is a real-space method in which a population of SCFT field configurations “evolves” over time. This is achieved by initializing the population randomly, allowing the configurations to relax to local basins of attraction using SCFT simulations, then selecting fit members (lower free energy structures) to recombine their fields and undergo mutations to generate a new “generation” of structures that iterate through this process. We present results from benchmark testing of this GA-SCFT technique on the canonical AB diblock copolymer melt, for which the theoretical phase diagram has long been established. The GA-SCFT algorithm successfully predicts many of the conventional mesophases from random initial conditions in large, 3-dimensional simulation cells, including hexagonally-packed cylinders, BCC-packed spheres, and lamellae, over a broad composition range and weak to moderate segregation strength. However, the GA-SCFT method is currently not effective at discovery of network phases, such as the Double-Gyroid (GYR) structure.

PSO-SCFT is a reciprocal-space approach in which Fourier components of SCFT fields near the principal shell are manipulated. Effectively, PSO-SCFT facilitates the search through a space of reciprocal-space SCFT seeds which yield a variety of morphologies. Using intensive free energy as a fitness metric by which to compare these morphologies, the PSO-SCFT methodology allows us to agnostically identify low-lying competitive and stable morphologies. We present results for applying PSO-SCFT to conformationally symmetric diblock copolymers and a miktoarm star polymer, AB4, which offers a rich variety of competing sphere structures. Unlike the GA-SCFT method we previously presented, PSO-SCFT successfully predicts the double-gyroid morphology in the AB diblock. Furthermore, PSO-SCFT successfully recovers the A15 morphology at a composition where it is expected to be stable in the miktoarm system, as well as several competitive metastable candidates and a new sphere morphology belonging to hexagonal space group 191, which has not been seen before in polymer systems. Thus, we believe the PSO-SCFT method provides a promising platform for screening for competitive structures in a given block copolymer system.
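The particle swarm machinery itself is generic. The sketch below runs a standard PSO update (inertia plus personal-best and global-best attraction, with assumed coefficients) on a toy objective standing in for the SCFT free energy; it is an illustration of the optimizer, not of PSO-SCFT itself:

```python
import random

def pso(f, dim=2, swarm=12, iters=80, seed=0):
    """Minimize f with a basic particle swarm; returns the best position found."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [x[:] for x in X]          # personal bests
    g = min(P, key=f)[:]           # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (0.7 * V[i][d]                       # inertia
                           + 1.5 * r1 * (P[i][d] - X[i][d])    # cognitive pull
                           + 1.5 * r2 * (g[d] - X[i][d]))      # social pull
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

sphere = lambda x: sum((xi - 1.0) ** 2 for xi in x)  # toy "free energy"
best = pso(sphere)
```

In PSO-SCFT the fitness evaluation would instead be a full SCFT relaxation of the seed encoded by each particle.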

10

Nayyar, Krati. "Input Sensitive Analysis of a Minimum Metric Bipartite Matching Algorithm". Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/86518.

Full text
Abstract
In various business and military settings, there is an expectation of on-demand delivery of supplies and services. Typically, several delivery vehicles (also called servers) carry these supplies. Requests arrive one at a time, and when a request arrives, a server is assigned to it at a cost proportional to the distance between the server and the request. Bad assignments not only lead to larger costs but also create bottlenecks by increasing delivery time. There is, therefore, a need to design decision-making algorithms that produce cost-effective assignments of servers to requests in real time. In this thesis, we consider the online bipartite matching problem where each server can serve exactly one request. In the online minimum metric bipartite matching problem, we are provided with a set of server locations in a metric space. Requests arrive one at a time and must be immediately and irrevocably matched to a free server. The total cost of matching all the requests to servers, also known as the online matching, is the sum of the costs of all the edges in the matching. There are many well-studied models for request generation. We study the problem in the adversarial model, where an adversary who knows the decisions made by the algorithm generates a request sequence to maximize the ratio of the cost of the online matching to that of the minimum-cost matching (this ratio is called the competitive ratio). An algorithm is a-competitive if the cost of its online matching is at most 'a' times the minimum cost. A recently discovered robust and deterministic online algorithm (we refer to this as the robust matching or the RM-Algorithm) was shown to have optimal competitive ratios in the adversarial model and in a relatively weaker random arrival model.
We extend the analysis of the RM-Algorithm in the adversarial model and show that the competitive ratio of the algorithm is sensitive to the input, i.e., for "nice" input metric spaces or "nice" server placements, the performance guarantees of the RM-Algorithm are significantly better. In fact, we show that the performance is almost optimal for any fixed metric space and server locations.
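The baseline that motivates this line of work is easy to state: match each arriving request to the nearest free server. The sketch below, on an assumed line metric with two servers, shows how an adversarial request order makes greedy pay far more than the offline optimum (the RM-Algorithm in the thesis is precisely designed to avoid this kind of trap):

```python
from itertools import permutations

def greedy_online_cost(servers, requests):
    """Match each arriving request to the nearest free server (points on a line)."""
    free = list(servers)
    cost = 0.0
    for r in requests:
        s = min(free, key=lambda p: abs(p - r))
        cost += abs(s - r)
        free.remove(s)
    return cost

def offline_opt_cost(servers, requests):
    """Minimum-cost offline matching by brute force."""
    return min(sum(abs(s - r) for s, r in zip(perm, requests))
               for perm in permutations(servers, len(requests)))

servers, requests = [0.0, 2.0], [0.9, 0.0]
print(greedy_online_cost(servers, requests))   # 2.9
print(offline_opt_cost(servers, requests))     # 1.1
```

Greedy burns the server at 0 on the first request, forcing the second request to travel to 2; the offline optimum sends the first request the slightly longer way instead.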
Master of Science
11

Dogeas, Konstantinos. "Energy Minimization, Data Movement and Uncertainty : Models and Algorithms". Electronic Thesis or Diss., Sorbonne université, 2022. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2022SORUS070.pdf.

Full text
Abstract
High-performance computing (HPC) platforms are the go-to solution for running computationally demanding applications. Given their considerable energy consumption, the need for more energy-efficient algorithms is critical. Taking advantage of the core characteristics of an HPC platform, such as its network topology and the heterogeneity of its machines, can lead to better scheduling algorithms. Likewise, designing more realistic models that capture the features of real-life applications works toward the same goal of achieving better performance. Allowing scheduling algorithms to decide either the amount of resources allocated to an application or the running speed of those resources can pave the way to new platform-aware implementations. In the first part of the thesis, we introduce a model that takes into account both the topology and the heterogeneity of a platform by introducing two kinds of machines. We augment the scheduling problem with constraints whose purpose is to implicitly reduce data movement, both during parallel execution and during communication with the file system. We propose algorithms that schedule tasks over time and decide the number of resources allocated to each task while respecting these extra constraints. In the second part of the thesis, we deal with uncertainty in part of the input, specifically the workload of an application, which is directly related to the time needed for its completion. Most works in the literature consider this value known in advance; however, this is rarely the case in real-life systems. In our approach, the given workload is a worst-case scenario for the execution of an application. We introduce application-specific tests that may decrease the workload of a task. Since the test (e.g., compression) takes some time to run, and since the amount of reduction (e.g., in size) is unknown before the test completes, one must decide whether or not to run the test for each task.
We propose competitive algorithms for the problem of scheduling such tasks, with the goal of minimizing the energy consumed by a set of speed-adjustable machines. In the third part of the thesis, we focus on a similar setting of uncertain input and consider a model in which the processing times of the tasks are not known in advance. Here, we augment the input of the problem by introducing predicted values in place of the unknown processing times. We design algorithms that perform optimally when the predictions are accurate, while remaining competitive when the predictions turn out to be inaccurate.
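The test-or-not decision described above can be caricatured in a few lines. The threshold rule and all numbers below are illustrative assumptions, not the thesis's competitive algorithm:

```python
def processed_work(w_bound, test_cost, reduced_work, run_test):
    """Work the machine must process: the worst-case bound, or test plus reduced work."""
    return test_cost + reduced_work if run_test else w_bound

def should_test(w_bound, test_cost, threshold=2.0):
    """Toy heuristic: run the test only when the bound dwarfs the test's own cost."""
    return w_bound >= threshold * test_cost

# A task with a pessimistic bound of 10; the test costs 2 and may shrink it to 3.
run = should_test(10, 2)
print(run, processed_work(10, 2, 3, run))  # True 5
```

The interesting (and hard) case, handled by the competitive analysis in the thesis, is that the reduced work is unknown when the decision is made, so a bad test gamble can cost test_cost for nothing.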
12

Gaspar, Cristian. "Variations on the Theme of Caching". Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/1048.

Full text
Abstract
This thesis is concerned with caching algorithms. We investigate three variations of the caching problem: web caching in the Torng framework, relative competitiveness and caching with request reordering.

In the first variation we define different cost models involving page sizes and page costs. We also present the Torng cost framework introduced by Torng in [29]. Next we analyze the competitive ratio of online deterministic marking algorithms in the BIT cost model combined with the Torng framework. We show that given some specific restrictions on the set of possible request sequences, any marking algorithm is 2-competitive.
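For reference, a generic marking algorithm can be stated in a few lines. The eviction tie-break (smallest unmarked page) is an arbitrary choice, and the cost model here is plain fault counting rather than the BIT or Torng cost models analyzed in the thesis:

```python
def marking_faults(requests, k):
    """Faults of a generic marking algorithm with cache size k."""
    cache, marked, faults = set(), set(), 0
    for p in requests:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                if not cache - marked:
                    marked = set()              # all pages marked: start a new phase
                cache.remove(min(cache - marked))  # evict some unmarked page
            cache.add(p)
        marked.add(p)                           # requested pages get marked
    return faults

print(marking_faults([1, 2, 3, 1, 2], 2))  # 5
```

Any algorithm of this form (evicting only unmarked pages, unmarking at phase boundaries) is O(k)-competitive in the classical model; the thesis asks what changes under the Torng framework's cost accounting.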

The second variation consists of using the relative competitive ratio on an access graph as a complexity measure. We use the concept of access graphs introduced by Borodin [11] to define our own concept of relative competitive ratio. We demonstrate results regarding the relative competitiveness of two cache eviction policies in both the basic and the Torng framework combined with the CLASSICAL cost model.

The third variation is caching with request reordering. Two reordering models are defined. We prove some important results about the value of a move and number of orderings, then demonstrate results about the approximation factor and competitive ratio of offline and online reordering schemes, respectively.
13

Meißner, Julie [Verfasser], Nicole [Akademischer Betreuer] Megow, Martin [Akademischer Betreuer] Skutella, Nicole [Gutachter] Megow, Martin [Gutachter] Skutella y Leen [Gutachter] Stougie. "Uncertainty exploration : algorithms, competitive analysis, and computational experiments / Julie Meißner ; Gutachter: Nicole Megow, Martin Skutella, Leen Stougie ; Nicole Megow, Martin Skutella". Berlin : Technische Universität Berlin, 2018. http://d-nb.info/1166752348/34.

Full text
14

Verma, Poonam Santosh. "Non-equilibrium surface growth for competitive growth models and applications to conservative parallel discrete event simulations". Diss., Mississippi State : Mississippi State University, 2007. http://library.msstate.edu/etd/show.asp?etd=etd-11092007-141815.

Full text
15

Negoescu, Andrei [Verfasser], Ulrich [Akademischer Betreuer] Meyer, Georg [Akademischer Betreuer] Schnitger and Alejandro [Akademischer Betreuer] López-Ortiz. "Design of competitive paging algorithms with good behaviour in practice / Andrei Negoescu. Gutachter: Ulrich Meyer ; Georg Schnitger ; Alejandro López-Ortiz. Betreuer: Ulrich Meyer". Frankfurt am Main : Univ.-Bibliothek Frankfurt am Main, 2013. http://d-nb.info/1044094524/34.

Full text
16

Negoescu, Andrei Laurian [Verfasser], Ulrich [Akademischer Betreuer] Meyer, Georg [Akademischer Betreuer] Schnitger and Alejandro [Akademischer Betreuer] López-Ortiz. "Design of competitive paging algorithms with good behaviour in practice / Andrei Negoescu. Gutachter: Ulrich Meyer ; Georg Schnitger ; Alejandro López-Ortiz. Betreuer: Ulrich Meyer". Frankfurt am Main : Univ.-Bibliothek Frankfurt am Main, 2013. http://nbn-resolving.de/urn:nbn:de:hebis:30:3-321283.

Full text
17

Mattos, César Lincoln Cavalcante. "Comitês de Classificadores Baseados nas Redes SOM e Fuzzy ART com Sintonia de Parâmetros e Seleção de Atributos via Metaheurísticas Evolucionárias". Universidade Federal do Ceará, 2011. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=7034.

Full text
Abstract
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The ensemble-based classification paradigm has received considerable attention in the scientific literature in recent years. In this context, supervised neural networks have been the most common choice for the base classifiers of the ensembles. This dissertation aims to design and evaluate classifier ensembles obtained through modifications imposed on unsupervised learning algorithms, such as the Fuzzy ART and SOM networks, giving rise, respectively, to the ARTIE (ART in Ensembles) and MUSCLE (Multiple SOM Classifiers in Ensembles) architectures. Parameter tuning and attribute selection for the neural networks that compose the ARTIE and MUSCLE architectures were handled by metaheuristic optimization, through the proposed I-HPSO (Improved Hybrid Particle Swarm Optimization) algorithm. The ARTIE and MUSCLE architectures were evaluated and compared with ensembles based on the Fuzzy ARTMAP, LVQ, and ELM networks on 12 real-world data sets. The results indicate that the proposed architectures outperform the ensembles based on supervised neural networks.
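Whatever the base learners, an ensemble ultimately combines their individual labels into one decision. As a minimal, generic sketch of that combination step (plurality voting, not the ARTIE or MUSCLE architectures themselves):

```python
from collections import Counter

def majority_vote(classifier_outputs):
    """Combine the base classifiers' predicted labels by plurality vote,
    the simplest ensemble combination rule."""
    votes = Counter(classifier_outputs)
    return votes.most_common(1)[0][0]

# Five hypothetical base classifiers label the same input pattern:
print(majority_vote(["cat", "dog", "cat", "cat", "bird"]))
```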
18

Jin, Shendan. "Online computation beyond standard models". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS152.

Full text
Abstract
In the standard setting of online computation, the input is not entirely available from the beginning but is revealed incrementally, piece by piece, as a sequence of requests. Whenever a request arrives, the online algorithm has to make immediate, irrevocable decisions to serve it, without knowledge of future requests. The standard framework for evaluating the performance of online algorithms is competitive analysis, which compares the worst-case performance of an online algorithm to an offline optimal solution. In this thesis, we study some new ways of looking at online problems. First, we study online computation in the recourse model, in which the irrevocability of online decisions is relaxed; that is, the online algorithm is allowed to go back and change previously made decisions. More precisely, we show how to identify the trade-off between the number of re-optimizations and the performance of online algorithms for the online maximum matching problem. Moreover, we study measures other than competitive analysis for evaluating the performance of online algorithms. We observe that competitive analysis sometimes cannot distinguish the performance of different algorithms because of the worst-case nature of the competitive ratio, and we demonstrate that such a situation arises in the linear search problem. More precisely, we revisit the linear search problem and introduce a measure that can be applied as a refinement of the competitive ratio. Last, we study online computation in the advice model, in which the algorithm receives as input not only a sequence of requests but also some advice on the request sequence. Specifically, we study a recent model with untrusted advice, in which the advice can be either trusted or untrusted; in the latter case, the advice may be generated by a malicious source. We show how to identify a Pareto-optimal strategy for the online bidding problem in the untrusted advice model.
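The online bidding problem mentioned at the end has a folklore baseline worth recalling: the doubling strategy, whose competitive ratio approaches 4. The sketch below is that classic strategy only, not the thesis's Pareto-optimal strategy under untrusted advice:

```python
def doubling_bids(u):
    """Classic doubling strategy for online bidding: bid 1, 2, 4, ...
    until a bid reaches the unknown target u, paying the sum of all bids."""
    bid, cost = 1, 0
    while True:
        cost += bid
        if bid >= u:
            return cost
        bid *= 2

# Empirically, sup_u cost(u)/u stays below 4 and approaches it from below.
ratios = [doubling_bids(u) / u for u in range(1, 10_000)]
print(max(ratios))
```

The worst cases are targets just above a power of two, where almost the entire last doubling is wasted.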
19

Alinia, Bahram. "Optimal resource allocation strategies for electric vehicles in smart grids". Thesis, Evry, Institut national des télécommunications, 2018. http://www.theses.fr/2018TELE0012/document.

Full text
Abstract
With increasing environmental concerns related to carbon emissions and the rapid drop in battery prices (e.g., a 35% drop in 2017), the market share of electric vehicles (EVs) is growing rapidly. The growing number of EVs, along with unprecedented advances in battery capacity and technology, results in a drastic increase in the total energy demand of EVs. This large charging demand makes the EV charging scheduling problem challenging. The critical challenge is the need for online solution design, since in practical scenarios the scheduler has no information about future arrivals of EVs in a time-coupled underlying problem. This thesis studies the online EV scheduling problem and provides three main contributions. First, we demonstrate that the classical problem of online scheduling of deadline-sensitive jobs with partial values is similar to the EV scheduling problem, and we study the extension to EV charging scheduling by taking the processing rate limit of jobs into account as an additional constraint. The problem lies in the category of time-coupled online scheduling problems without availability of future information. Using the competitive ratio as a well-established performance metric, two online algorithms are proposed, both of which are shown to be (2 − 1/U)-competitive, where U is the maximum scarcity level, a parameter that indicates the demand-to-supply ratio. Second, we formulate a social welfare maximization problem for EV charging scheduling with a charging capacity constraint. We devise charging scheduling algorithms that not only work in the online scenario but also address the following two key challenges: (i) providing on-arrival commitment, since respecting the capacity constraint may prevent fulfilling the charging requirement of deadline-constrained EVs entirely, so committing to a guaranteed charging amount upon each EV's arrival is highly desirable; (ii) guaranteeing (group-)strategy-proofness, a salient feature that encourages EVs to reveal their true types and not collude with other EVs. Third, we tackle online scheduling of EVs in an adaptive charging network (ACN) with local and global peak constraints. Two alternatives in resource-limited scenarios are to maximize social welfare by partially charging the EVs (the fractional model) or by selecting a subset of EVs and fully charging them (the integral model). For the fractional model, both offline and online algorithms are devised; we prove that the offline algorithm is optimal and that the online algorithm achieves a competitive ratio of 2. The integral model is more challenging, since the underlying problem is NP-hard due to the 0/1 selection of EVs, so efficient solution design is challenging even in the offline setting. We devise a low-complexity primal-dual scheduling algorithm that achieves a bounded approximation ratio. Building on the offline approximation algorithm, we propose an online algorithm and analyze its competitive ratio in special cases.
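For intuition about online scheduling of deadline- and rate-constrained charging jobs, here is a toy greedy scheduler. The EV fields and the highest-value-first rule are illustrative assumptions only, not the (2 − 1/U)-competitive algorithms of the thesis:

```python
def greedy_online_charging(evs, horizon, capacity):
    """Toy online scheduler: in each time slot, allocate the station's
    capacity to pending EVs in decreasing order of per-unit value, while
    respecting each EV's per-slot charging rate limit."""
    # evs: list of dicts with keys arrival, deadline, demand, rate, value
    remaining = {i: ev["demand"] for i, ev in enumerate(evs)}
    welfare = 0.0
    for t in range(horizon):
        pending = [i for i, ev in enumerate(evs)
                   if ev["arrival"] <= t <= ev["deadline"] and remaining[i] > 0]
        pending.sort(key=lambda i: evs[i]["value"], reverse=True)
        cap = capacity
        for i in pending:
            x = min(cap, evs[i]["rate"], remaining[i])  # feasible charge amount
            remaining[i] -= x
            welfare += x * evs[i]["value"]
            cap -= x
            if cap == 0:
                break
    return welfare

evs = [
    {"arrival": 0, "deadline": 2, "demand": 6, "rate": 3, "value": 1.0},
    {"arrival": 1, "deadline": 3, "demand": 4, "rate": 2, "value": 2.0},
]
print(greedy_online_charging(evs, horizon=4, capacity=4))
```

A worst-case adversary can still make such a myopic greedy rule lose against the offline optimum, which is exactly what competitive analysis quantifies.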
20

Alinia, Bahram. "Optimal resource allocation strategies for electric vehicles in smart grids". Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2018. http://www.theses.fr/2018TELE0012.

Full text
21

Adelsson, Rodi. "Prisalgoritmer – ett instrument för konkurrensbegränsande samverkan : En studie om hur användningen av algoritmer påverkar förståelsen för olika samverkansformer och tillämpningen av artikel 101(1) FEUF". Thesis, Uppsala universitet, Juridiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-393254.

Full text
22

Sarafraz, Yazdi Zahra. "REAL-TIME CLASSIFIER BASED ON ADAPTIVE COMPETITIVE SELF-ORGANIZING ALGORITHM". OpenSIUC, 2016. https://opensiuc.lib.siu.edu/dissertations/1284.

Full text
Abstract
This research proposes a novel Adaptive Competitive Self-organizing (ACS) model for real-time clustering and vector quantization. The model is designed as a set of ordinary differential equations (ODEs) and is free of any external control; these properties make it suitable for hardware implementation and real-time applications. The classifier is unsupervised, since it has no prior knowledge of the input patterns' classes. Its design is based on an energy function constructed as a sum of Lorentzian functions, which correspond to clusters of similar input patterns. The energy function is a form of Lyapunov function, guaranteeing that the trajectories of the weights labeling a set of similar input patterns terminate in a set of isolated equilibrium points, whose stability is investigated. Valleys on the surface represent clusters and are defined by parameters such as depth, width, and a vigilance parameter. These control parameters affect convergence speed, labeling accuracy, and so on; we study their effectiveness on clustering assignments through various illustrations. All these parameters are dynamically adjusted in the model, resulting in a high level of self-adjustment. Clustering in ACS involves two main processes: learning and recalling. In the learning phase, the energy surface is modified as sets of weights adjust themselves in a competitive manner to label (cluster) a newly presented input pattern, while in the recalling phase a newly presented input pattern is accommodated into the existing cluster at the shortest Euclidean distance from it.
The effectiveness of the ACS model is demonstrated by applying it to both real and artificial data sets and comparing it with other well-known clustering methods; ACS shows better clustering performance in some categories and comparable performance overall. The system dynamics are simulated with two optimizers, gradient descent (GD) and adaptive momentum gradient descent (AMGD), in cooperation with a competition mechanism based on Lotka-Volterra competitive exclusion. Simulation results indicate the effectiveness of AMGD in achieving the optimal convergence speed of ACS on clustering assignments, compared with GD and the classical momentum method.
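To make the energy-surface idea concrete, the sketch below descends a toy sum-of-Lorentzians energy with plain gradient descent: nearby patterns merge into one valley, and a weight initialized nearby settles at the valley bottom. The depth and width values and the two-dimensional patterns are illustrative assumptions, not the full ACS dynamics:

```python
import numpy as np

def lorentzian_energy(w, X, depth=1.0, width=0.5):
    """Toy energy surface: a sum of inverted Lorentzian valleys,
    one valley per input pattern."""
    d2 = ((X - w) ** 2).sum(axis=1)
    return -(depth / (1.0 + d2 / width**2)).sum()

def descend(w, X, lr=0.05, steps=500, width=0.5):
    """Plain gradient descent on the energy, using its analytic gradient."""
    for _ in range(steps):
        diff = X - w                                      # (n_patterns, dim)
        d2 = (diff ** 2).sum(axis=1)
        coef = (2.0 / width**2) / (1.0 + d2 / width**2) ** 2
        w = w + lr * (coef[:, None] * diff).sum(axis=0)   # steepest descent step
    return w

# Two nearby patterns form one valley; a third pattern sits far away.
X = np.array([[0.0, 0.0], [0.1, -0.1], [3.0, 3.0]])
w = descend(np.array([0.4, 0.3]), X)
print(np.round(w, 2))   # settles near the merged valley's bottom
```

Because the valley width (0.5) exceeds the separation of the two nearby patterns, their valleys merge and the weight converges to roughly their midpoint, illustrating how width controls cluster granularity.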
23

Scquizzato, Michele. "Paging on Complex Architectures". Doctoral thesis, Università degli studi di Padova, 2013. http://hdl.handle.net/11577/3423133.

Full text
Abstract
Advances in technology make it possible to build computer systems of ever increasing performance and capabilities. However, the effective use of such computational resources is often made difficult by the complexity of the system itself. Crucial to the performance of a computing device is the orchestration of the flow of data across the memory hierarchy. Specifically, given a fast but small memory (a cache) through which all the data to be processed must pass, it is necessary to establish a set of rules, implemented by an algorithm, that define which data are evicted from this memory to make room for new incoming data. The goal is to minimize the number of times requested data are outside the cache (faults), since fetching data from farther levels of the memory hierarchy incurs high costs in terms of both time and energy. This thesis studies two generalizations of this problem, known as the paging problem. The problem is intrinsically online, as the future data requests issued by a computer program are typically unknown. Motivated by the recent diffusion of multi-threaded and multi-core architectures, in which several threads or processes can be executed simultaneously and/or there are several processing units, and by the recent and rapidly growing interest in reducing the power consumption of computer systems, in the first part of the thesis we study a variant of paging that rewards the efficient usage of memory resources. In this problem the goal is to minimize a combination of both the number of faults and the cache occupancy of the process's data in fast memory. This part has two main results: the first is an impossibility result indicating that, roughly speaking, online algorithms cannot compete in practice with algorithms that know in advance all the data requests issued by the process; the second is the design of an online algorithm whose performance is nearly the best possible among all online algorithms.
In the second part of the thesis we concentrate on the management of a cache shared among several concurrent processes. As outlined above, this has direct application to multi-threaded and multi-core architectures. In this problem the fast memory has to service a sequence of requests that is the interleaving of the requests issued by t different processes. Through its replacement decisions, the algorithm dynamically allocates the cache space among the processes, which clearly impacts their progress. The main goal here is to minimize the time needed to complete the service of all the request sequences. We show tight lower and upper bounds on the performance of online algorithms for several variants of the problem.
24

Balavoine, Aurèle. "Implementation of the locally competitive algorithm on a field programmable analog array". Thesis, Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/37255.

Full text
Abstract
Sparse approximation is an important class of optimization problems in signal and image processing applications. This thesis presents an analog solution to this problem based on the Locally Competitive Algorithm (LCA): a Hopfield-network-like analog system operating on sub-threshold currents. The results of implementing the circuit components on the RASP 2.8a chip, a field-programmable analog array, are presented.
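A software counterpart of the analog dynamics can be sketched as a forward-Euler simulation of the standard soft-threshold LCA formulation due to Rozell et al.: each node is driven by its correlation with the input and inhibited by active competitors through the dictionary's cross-correlations. The dictionary, threshold, and step sizes below are illustrative choices, not the RASP 2.8a circuit parameters:

```python
import numpy as np

def soft_threshold(u, lam):
    """Thresholded activation: zero below lam, shrunk by lam above it."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(Phi, y, lam=0.1, tau=10.0, steps=2000, dt=0.1):
    """Euler simulation of the Locally Competitive Algorithm: internal
    states leak toward the driving input b while active nodes inhibit
    each other through the Gram matrix of the dictionary."""
    G = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral inhibition weights
    b = Phi.T @ y                            # driving input
    u = np.zeros(Phi.shape[1])
    for _ in range(steps):
        a = soft_threshold(u, lam)
        u += (dt / tau) * (b - u - G @ a)
    return soft_threshold(u, lam)

rng = np.random.default_rng(0)
Phi = rng.normal(size=(20, 50))
Phi /= np.linalg.norm(Phi, axis=0)           # unit-norm dictionary atoms
x_true = np.zeros(50)
x_true[[3, 17]] = [1.0, -0.8]                # a 2-sparse ground truth
y = Phi @ x_true
a = lca(Phi, y)
print(np.nonzero(a)[0])                      # indices of nonzero coefficients
```

At steady state the output coincides with a minimizer of the LASSO objective, which is what makes the circuit a sparse-approximation solver.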
25

Shahrazad, Mohammad. "Optimal allocation of FACTS devices in power networks using imperialist competitive algorithm (ICA)". Thesis, Brunel University, 2015. http://bura.brunel.ac.uk/handle/2438/11445.

Full text
Abstract
Due to high energy demand and restrictions on the installation of new transmission lines, the use of Flexible AC Transmission System (FACTS) devices is inevitable. In power system analysis, transferring high-quality power is essential; indeed, one important factor with a special role in efficiency and operation is maximum power transfer capability. FACTS devices are used for controlling the voltage, stability, power flow, and security of transmission lines, but it is necessary to find optimal locations for these devices in power networks, and many optimization techniques have been deployed to do so. There are several varieties of FACTS devices with different characteristics, used for different purposes. The imperialist competitive algorithm (ICA) is a recently developed optimization technique that is widely used in power systems. This study presents an approach for finding the optimal location and size of FACTS devices in power networks using the ICA technique, a heuristic algorithm for global optimization searches based on the concept of imperialistic competition and inspired by human social evolution. The algorithm is used for mathematical problems and can be categorized at the same level as the genetic algorithm (GA) and particle swarm optimization (PSO). This study also investigates enhancement of the voltage profile and stability, loss reduction, and increased load-ability. MATLAB was used to apply FACTS devices to the power networks; all power network parameters were defined and analysed in this program, with the IEEE 30-bus system and the IEEE 68-bus, 16-machine system as case studies. All simulation results, including voltage profile improvement and convergence characteristics, are illustrated.
The results show the advantages of the imperialist competitive algorithm over conventional approaches.
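Stripped to its core, the ICA metaheuristic moves colonies toward their imperialists (assimilation) and lets a colony that becomes better than its imperialist take its place. The sketch below is a deliberately minimal version on a toy objective; empire competition and collapse, the random assimilation angle, and the thesis's FACTS placement objective are all omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

def ica_minimize(f, dim, n_countries=60, n_imperialists=6, iters=300, beta=2.0):
    """Minimal imperialist competitive algorithm sketch (after
    Atashpaz-Gargari & Lucas): colonies step toward their imperialist with
    some noise, and a colony that beats its imperialist swaps roles."""
    pop = rng.uniform(-5, 5, size=(n_countries, dim))
    cost = np.apply_along_axis(f, 1, pop)
    order = np.argsort(cost)
    imp = pop[order[:n_imperialists]].copy()       # best countries become imperialists
    colonies = pop[order[n_imperialists:]].copy()
    owner = rng.integers(0, n_imperialists, size=len(colonies))
    for _ in range(iters):
        # assimilation: random step toward the owning imperialist, plus noise
        step = beta * rng.random((len(colonies), 1)) * (imp[owner] - colonies)
        colonies += step + 0.05 * rng.normal(size=colonies.shape)
        for i, c in enumerate(colonies):
            if f(c) < f(imp[owner[i]]):            # colony overthrows its imperialist
                imp[owner[i]], colonies[i] = c.copy(), imp[owner[i]].copy()
    return min(imp, key=f)

sphere = lambda x: float((x ** 2).sum())           # toy objective, minimum 0 at origin
best = ica_minimize(sphere, dim=3)
print(sphere(best))
```

In the FACTS setting the decision vector would encode candidate bus locations and device sizes, and the objective would score a power-flow solution, but that is well beyond this sketch.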
26

Wang, Xinchang. "Revenue management with customer choice and sellers competition". Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53876.

Abstract
We build a variety of customer booking choice models for a major airline that operates in a very competitive origin-destination market. Some of the models incorporate unobserved heterogeneous customer preferences for different departure times. The estimation results show that including these factors in choice models dramatically affects price-sensitivity estimates, and therefore matters. We present a stochastic trust-region algorithm for estimating maximum-likelihood-type models that involve high-dimensional integrals. The algorithm embeds two sampling processes: (i) a data sampling process and (ii) a Monte Carlo sampling process, and it dynamically controls the sample sizes based on the magnitude of the errors incurred by the two sampling processes. First-order convergence is proved using generalized uniform laws of large numbers for both the average log-likelihood function and its gradient. The efficiency of the algorithm is tested on real data and compared with existing algorithms. We also study how a specific behavioral phenomenon, the decoy effect, affects the decisions of sellers in product assortment competition in a duopoly. We propose a discrete choice model that captures decoy effects, and we provide a complete characterization of the Nash equilibria and their dependence on the choice model parameters. For the cases in which there are multiple equilibria, we consider dynamical-systems models of the sellers responding to their competitors using Cournot adjustment or fictitious play to study the evolution of the assortment competition and the stability of the equilibria. We provide a simple geometric characterization of the dynamics of fictitious play for 2×2 games that is more complete than previous characterizations.
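The fictitious-play dynamics mentioned at the end of the abstract can be sketched in a few lines for 2×2 bimatrix games: each player best-responds to the opponent's empirical action frequencies. The following is a generic illustration on matching pennies, not the assortment-competition game of the thesis.

```python
def fictitious_play(A, B, steps=2000):
    """Fictitious play in a 2x2 bimatrix game: each player best-responds
    to the empirical action frequencies of the opponent."""
    counts1, counts2 = [1, 0], [0, 1]  # arbitrary nonzero initial beliefs
    for _ in range(steps):
        # expected payoff of each action against the opponent's empirical mix
        br1 = max(range(2), key=lambda i: sum(A[i][j] * counts2[j] for j in range(2)))
        br2 = max(range(2), key=lambda j: sum(B[i][j] * counts1[i] for i in range(2)))
        counts1[br1] += 1
        counts2[br2] += 1
    t1, t2 = sum(counts1), sum(counts2)
    return [c / t1 for c in counts1], [c / t2 for c in counts2]

# matching pennies: empirical frequencies approach the mixed equilibrium (1/2, 1/2)
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
p, q = fictitious_play(A, B)
```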
27

Luo, Lingzhi. "Distributed Algorithm Design for Constrained Multi-robot Task Assignment". Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/426.

Abstract
The task assignment problem is one of the fundamental combinatorial optimization problems. It has been extensively studied in operations research, management science, computer science and robotics. Task assignment problems arise in various applications of multi-robot systems (MRS), such as environmental monitoring, disaster response, extraterrestrial exploration, sensing data collection and collaborative autonomous manufacturing. In these MRS applications, there are realistic constraints on robots and tasks that must be taken into account from both the modeling perspective and the algorithmic perspective. From the modeling aspect, such constraints include (a) task group constraints: tasks form disjoint groups and each robot can be assigned to at most one task in each group. One example of group constraints comes from tightly-coupled tasks, where multiple micro tasks form one tightly-coupled macro task and require multiple robots to perform them simultaneously. (b) Task deadline constraints: tasks must be assigned so as to meet their deadlines. (c) Dynamically-arising tasks: tasks arrive dynamically and the payoffs of future tasks are unknown; such tasks arise in scenarios like search-and-rescue, where new victims are found over time. (d) Robot budget constraints: the number of tasks each robot can perform is bounded according to the resources it possesses (e.g., energy). From the solution aspect, there is often a need for decentralized solutions implemented on individual robots, especially when no powerful centralized controller exists or when the system needs to avoid single-point failure or be adaptive to environmental changes. Most existing algorithms either do not consider the above constraints in problem modeling, are centralized, or do not provide formal performance guarantees.
In this thesis, I propose methods to address these issues for two classes of problems, namely, the constrained linear assignment problem and the constrained generalized assignment problem. The constrained linear assignment problem belongs to P, while the constrained generalized assignment problem is NP-hard. I develop decomposition-based distributed auction algorithms with performance guarantees for both problem classes. The multi-robot assignment problem is decomposed into an optimization problem for each robot, and each robot iteratively solving its own optimization problem leads to a provably good solution to the overall problem. For the constrained linear assignment problem, my approach provides an almost optimal solution. For the constrained generalized assignment problem, I present a distributed algorithm that provides a solution within a constant factor of the optimal one. I also study the online version of the task allocation problem with task group constraints, and prove that a repeated greedy version of my algorithm gives a solution with a constant-factor competitive ratio. I include simulation results to evaluate the average-case performance of the proposed algorithms, as well as results on multi-robot cooperative package transport to illustrate the approach.
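Auction algorithms of the kind this thesis builds on work by letting unassigned robots bid for their most valuable task, raising that task's price until every robot holds a distinct task. Below is a minimal single-machine sketch in the style of Bertsekas' auction algorithm for the plain (unconstrained) linear assignment problem; the payoff matrix is illustrative, and the decomposition and constraint handling described in the thesis are omitted.

```python
def auction_assignment(payoff, eps=0.01):
    """Sketch of a Bertsekas-style auction for the linear assignment
    problem: each unassigned robot bids for its best task, raising the
    task's price by the bid increment (best minus second-best value
    plus eps); for integer payoffs and eps < 1/n the result is optimal."""
    n = len(payoff)
    prices = [0.0] * n
    owner = [None] * n       # owner[task] = robot currently holding it
    assigned = [None] * n    # assigned[robot] = task
    unassigned = list(range(n))
    while unassigned:
        r = unassigned.pop()
        values = [payoff[r][t] - prices[t] for t in range(n)]
        best = max(range(n), key=lambda t: values[t])
        second = max(v for t, v in enumerate(values) if t != best) if n > 1 else 0.0
        prices[best] += values[best] - second + eps
        if owner[best] is not None:              # evict the previous owner
            assigned[owner[best]] = None
            unassigned.append(owner[best])
        owner[best] = r
        assigned[r] = best
    return assigned

payoff = [[4, 1, 0],
          [2, 3, 1],
          [1, 2, 5]]
match = auction_assignment(payoff)   # robot i gets task match[i]
```

With eps below 1/n and integer payoffs, the final assignment is optimal; here robots 0, 1 and 2 end up with tasks 0, 1 and 2.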
28

Navarro, Marco Vinícius Monteiro. "Emprego de redes neurais artificiais supervisionadas e não supervisionadas no estudo de parâmetros reológicos de excipientes farmacêuticos sólidos". Universidade Federal do Rio Grande do Norte, 2014. http://repositorio.ufrn.br:8080/jspui/handle/123456789/13866.

Abstract
In this work, artificial neural networks (ANNs) based on supervised and unsupervised algorithms were investigated for use in the study of rheological parameters of solid pharmaceutical excipients, in order to develop computational tools for manufacturing solid dosage forms. Among the four supervised neural networks investigated, the best learning performance was achieved by a feedforward multilayer perceptron whose architecture comprised eight neurons in the input layer, sixteen neurons in the hidden layer and one neuron in the output layer. Learning and predictive performance was poor for the angle of repose, but very good for the Carr index and Hausner ratio (CI and HR, respectively), so CI and HR were considered suitable descriptors for the next stage of development of the supervised ANNs. Clustering capacity was evaluated for five unsupervised strategies. Networks based on purely competitive strategies, the classic "Winner-Take-All", "Frequency-Sensitive Competitive Learning" and "Rival-Penalized Competitive Learning" (WTA, FSCL and RPCL, respectively), were able to cluster the database, but the classification was very poor, with severe errors that grouped data with conflicting properties into the same cluster or even the same neuron; moreover, the criterion adopted by the network for the clustering could not be established. Self-Organizing Map (SOM) and Neural Gas (NG) networks showed better clustering capacity. Both recognized the two major groupings of data, corresponding to lactose (LAC) and cellulose (CEL). However, SOM made some errors in classifying data from the minority excipients: magnesium stearate (EMG), talc (TLC) and attapulgite (ATP). The NG network, in turn, performed a very consistent classification of the data and resolved the misclassifications of SOM, making it the most appropriate network for classifying the data in this study.
The use of the NG network in pharmaceutical technology was hitherto unreported. NG therefore has great potential for the development of software for automated classification systems for pharmaceutical powders and as a new tool for mining and clustering data in drug development.
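The Neural Gas rule favoured in the study updates every unit toward each sample with a step that decays exponentially with the unit's distance rank, so no unit is ever left "dead". Here is a minimal sketch on two synthetic 2-D blobs standing in for the two major LAC/CEL groupings; the decay schedules and all constants are illustrative assumptions.

```python
import math
import random

def neural_gas(data, n_units=2, epochs=60, lam0=2.0, eta0=0.5, seed=0):
    """Minimal sketch of Neural Gas clustering: for every sample, all units
    are ranked by distance and updated toward the sample with a step that
    decays exponentially with the rank."""
    rng = random.Random(seed)
    units = [list(rng.choice(data)) for _ in range(n_units)]
    for e in range(epochs):
        frac = e / max(1, epochs - 1)
        lam = lam0 * (0.01 / lam0) ** frac   # neighbourhood range decay
        eta = eta0 * (0.01 / eta0) ** frac   # learning-rate decay
        for x in data:
            ranked = sorted(range(n_units),
                            key=lambda u: sum((w - a) ** 2
                                              for w, a in zip(units[u], x)))
            for k, u in enumerate(ranked):
                h = math.exp(-k / lam)       # rank-based neighbourhood factor
                units[u] = [w + eta * h * (a - w) for w, a in zip(units[u], x)]
    return units

# two well-separated 2-D blobs (stand-ins for the LAC and CEL groupings)
data = [(0.0, 0.0), (0.1, 0.2), (-0.1, 0.1),
        (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
units = neural_gas(data, n_units=2)
```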
29

Wordsworth, John. "Winnerless competition in neural dynamics : cluster synchronisation of coupled oscillators". Thesis, University of Exeter, 2009. http://hdl.handle.net/10036/87314.

Abstract
Systems of globally coupled phase oscillators can have robust attractors that are heteroclinic networks. Such a heteroclinic network, in which the phases cluster into three groups, is generated within a specific parameter regime when the phase oscillators are globally coupled using the function $g(\varphi) = -\sin(\varphi + \alpha) + r \sin(2\varphi + \beta)$. The resulting network switches between 30 partially synchronised states for a system of $N=5$ oscillators. Considering the states that are visited and the time spent at those states, a spatio-temporal code can be generated for a given navigation around the network. We explore this phenomenon further by investigating the effect that noise has on the system, how the system can be used to generate a spatio-temporal code derived from specific inputs, and how observation of a spatio-temporal code can be used to determine the inputs that were presented to the system to generate a given coding. We show that it is possible to find chaotic attractors for certain parameters and to devise a genetic algorithm that can find the parameters required to generate a specific spatio-temporal code, even in the presence of noise. In closing we briefly explore the dynamics where $N>5$ and discuss this work in relation to winnerless competition.
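A minimal numerical sketch of the system described above: N globally coupled identical phase oscillators with the coupling function g(φ) = -sin(φ+α) + r sin(2φ+β). The parameter values below are chosen so that full synchrony is stable (g'(0) = -cos α + 2r cos β < 0), which is easy to verify numerically; the heteroclinic switching regime of the thesis lives at other parameter values.

```python
import math

def simulate(theta, alpha, beta, r, dt=0.01, steps=5000):
    """Euler integration of N globally coupled identical phase oscillators,
    dtheta_i/dt = w + (1/N) * sum_j g(theta_i - theta_j), with the coupling
    g(phi) = -sin(phi + alpha) + r*sin(2*phi + beta)."""
    n = len(theta)
    w = 1.0  # common natural frequency; it cancels in the phase differences
    for _ in range(steps):
        d = []
        for i in range(n):
            coup = sum(-math.sin(theta[i] - theta[j] + alpha)
                       + r * math.sin(2 * (theta[i] - theta[j]) + beta)
                       for j in range(n)) / n
            d.append(w + coup)
        theta = [(t + dt * v) % (2 * math.pi) for t, v in zip(theta, d)]
    return theta

# g'(0) = -cos(0.2) + 2*0.2*cos(0) = -0.58 < 0, so full synchrony is stable
out = simulate([0.0, 0.1, 0.2, 0.05, 0.15], alpha=0.2, beta=0.0, r=0.2)
```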
30

Bergé, Pierre. "Algorithmes pour voyager sur un graphe contenant des blocages". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS480.

Abstract
We study NP-hard problems on graphs with blockages, seen as models of networks exposed to the risk of failures. We treat cut problems via the parameterized complexity framework, taking the cutset size p as the parameter. Given a set of sources {s1,...,sk} and a target t, we propose an algorithm which builds a small edge cut of size p separating at least r sources from t. This NP-complete problem, called Partial One-Target Cut, belongs to the family of multiterminal cut problems. Our algorithm is fixed-parameter tractable (FPT), as its execution takes $2^{O(p^2)}n^{O(1)}$ time. We prove that the vertex version of this problem, which imposes cuts to contain vertices instead of edges, is W[1]-hard. Then, we design an FPT algorithm which counts the minimum vertex (S,T)-cuts of an undirected graph in $2^{O(p\log p)}n^{O(1)}$ time. We provide numerous results on the competitive ratio of both deterministic and randomized strategies for the Canadian Traveller Problem. The optimal ratio for deterministic strategies on general graphs is 2k+1, where k is a given upper bound on the number of blockages. We show that memoryless randomized strategies cannot improve the bound 2k+1. In addition, we discuss the tightness of lower bounds on the competitiveness of randomized strategies. The distance competitive ratio for a group of travellers, possibly equipped with telecommunication devices, is studied. Eventually, a strategy dedicated to equal-weight chordal graphs is proposed, while another is built for graphs with small maximum (s,t)-cuts. Both strategies outperform the ratio 2k+1.
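The 2k+1 ratio for deterministic strategies on general graphs is achieved by a simple reposition strategy: follow the shortest path of the currently known graph, and when a blockage is discovered, walk back to s and recompute. A runnable sketch follows; the toy graph and its weights are illustrative.

```python
import heapq

def dijkstra(n, adj, s):
    """Plain Dijkstra; adj[u] is a list of (neighbour, weight) pairs."""
    dist = [float("inf")] * n
    prev = [None] * n
    dist[s] = 0.0
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, prev

def reposition_ctp(n, edges, blocked, s, t):
    """Reposition strategy for the Canadian Traveller Problem: walk the
    shortest path of the currently known graph; on discovering a blocked
    edge, walk back to s and recompute.  With at most k blockages this
    strategy is (2k+1)-competitive in travelled distance."""
    known_blocked = set()
    travelled = 0.0
    while True:
        adj = [[] for _ in range(n)]
        for u, v, w in edges:
            if frozenset((u, v)) not in known_blocked:
                adj[u].append((v, w))
                adj[v].append((u, w))
        dist, prev = dijkstra(n, adj, s)
        if dist[t] == float("inf"):
            return None                      # t unreachable
        path, x = [], t                      # reconstruct the s -> t path
        while x is not None:
            path.append(x)
            x = prev[x]
        path.reverse()
        walked, hit = 0.0, None
        for u, v in zip(path, path[1:]):
            if frozenset((u, v)) in blocked:     # blockage revealed at u
                hit = frozenset((u, v))
                break
            walked += next(w for vv, w in adj[u] if vv == v)
        if hit is None:
            return travelled + walked
        known_blocked.add(hit)
        travelled += 2 * walked              # walk there and back to s

edges = [(0, 1, 1.0), (1, 3, 1.0), (0, 2, 2.0), (2, 3, 2.0)]
# edge (1,3) is blocked: the traveller pays 2 (aborted attempt) + 4 = 6,
# while the offline optimum on the surviving graph costs 4
res = reposition_ctp(4, edges, {frozenset((1, 3))}, 0, 3)
```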
31

Raczynski, Christopher Michael. "A methodology for comprehensive strategic planning and program prioritization". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24622.

Abstract
Thesis (Ph.D.)--Aerospace Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Mavris, Dimitri; Committee Member: Bishop, Carlee; Committee Member: Costello, Mark; Committee Member: Kirby, Michelle; Committee Member: Schrage, Daniel
32

Krämer, Jan. "Service bundling and quality competition on converging communications markets A game-theoretic analysis = Calibration of new flavor tagging algorithms using Bs oscillations /". [S.l. : s.n.], 2007. http://digbib.ubka.uni-karlsruhe.de/volltexte/1000007394.

33

Viala, Jean-Renaud. "Apprentissage de reseaux de neurones en couches par la regle de retropropagation du gradient : developpement d'un algorithme competitif pour la compression et la segmentation d'images". Paris 6, 1990. http://www.theses.fr/1990PA066802.

Abstract
This thesis addresses the different aspects of layered neural networks, known as multilayer perceptrons, from the learning algorithm through to applications to real problems. The first part is a parametric study of the learning rule based on backpropagation of the error gradient. The gain and the relaxation time are studied while establishing a scaling law for a Boolean function, parity. Network architectures and training sets are analysed on the continuous problem of compressing a digitised image. The second part of the thesis applies layered neural networks to the problem of video signal compression. To obtain satisfactory image quality, a new algorithm was developed. Its originality is to place several layered networks in competition during coding. Thanks to learning, which associates each perceptron with a specific texture, the competitive algorithm adapts the coding of an image block to the block's characteristics. Simulations on an animated sequence give performance close to that obtained by a non-neural compression algorithm based on the discrete cosine transform. The resulting segmentations are close to the objects appearing in the image. This algorithm can be implemented in real time on the neuromorphic circuits developed at the Philips electronics laboratory.
34

Repík, Tomáš. "Evoluční návrh využívající gramatickou evoluci". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-363805.

Abstract
Evolution in nature serves as the source of inspiration for this work. The basic idea is to combine the generative power of grammars with an evolutionary approach. The acquired knowledge is applied to the search for behavioural strategies in diverse environments. Behaviour trees are a model commonly used to drive the decision making of various artificial intelligences. This work searches for behaviour trees that control agents solving the following two problems: a modified version of the knight's tour problem on a chessboard, and playing the dice game Pirátské kostky (liar's dice). Competitive coevolution was used when searching for the dice player's strategy, because it is difficult to design a fair fitness function for evaluating the players' performance.
35

Shapero, Samuel Andre. "Configurable analog hardware for neuromorphic Bayesian inference and least-squares solutions". Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/51719.

Abstract
Sparse approximation is a Bayesian inference program with a wide number of signal processing applications, such as Compressed Sensing recovery used in medical imaging. Previous sparse coding implementations relied on digital algorithms whose power consumption and performance scale poorly with problem size, rendering them unsuitable for portable applications, and a bottleneck in high speed applications. A novel analog architecture, implementing the Locally Competitive Algorithm (LCA), was designed and programmed onto a Field Programmable Analog Array (FPAA), using floating-gate transistors to set the analog parameters. A network of 6 coefficients was demonstrated to converge to similar values as a digital sparse approximation algorithm, but with better power and performance scaling. A rate-encoded spiking algorithm was then developed, which was shown to converge to similar values as the LCA. A second novel architecture was designed and programmed on an FPAA implementing the spiking version of the LCA with integrate-and-fire neurons. A network of 18 neurons converged on similar values as a digital sparse approximation algorithm, with even better performance and power efficiency than the non-spiking network. Novel algorithms were created to increase floating-gate programming speed by more than two orders of magnitude, and to reduce programming error from device mismatch. A new FPAA chip was designed and tested which allowed for rapid interfacing and additional improvements in accuracy. Finally, a neuromorphic chip was designed, containing 400 integrate-and-fire neurons, and capable of converging on a sparse approximation solution in 10 microseconds, over 1000 times faster than the best digital solution.
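The Locally Competitive Algorithm realized on the FPAA evolves an internal state per neuron under leaky dynamics: each state is driven by the correlation of its dictionary element with the input and inhibited by the other active neurons, and soft-thresholding the states yields the sparse code. A tiny pure-software sketch with a hand-made 4×3 dictionary follows; all sizes and constants are illustrative assumptions.

```python
def lca(Phi, y, lam=0.1, dt=0.1, steps=3000):
    """Minimal sketch of the Locally Competitive Algorithm (LCA): neuron
    states u_i follow leaky dynamics driven by <phi_i, y> and inhibited by
    the other active neurons; soft-thresholding u yields the sparse code."""
    m, n = len(Phi), len(Phi[0])

    def thresh(u):
        return [ui - lam if ui > lam else ui + lam if ui < -lam else 0.0
                for ui in u]

    # b_i = <phi_i, y>;  G_ij = <phi_i, phi_j> - delta_ij (lateral inhibition)
    b = [sum(Phi[r][i] * y[r] for r in range(m)) for i in range(n)]
    G = [[sum(Phi[r][i] * Phi[r][j] for r in range(m)) - (i == j)
          for j in range(n)] for i in range(n)]
    u = [0.0] * n
    for _ in range(steps):
        a = thresh(u)
        u = [ui + dt * (bi - ui - sum(G[i][j] * a[j] for j in range(n)))
             for i, (ui, bi) in enumerate(zip(u, b))]
    return thresh(u)

Phi = [[1.0, 0.0, 0.6],
       [0.0, 1.0, 0.0],
       [0.0, 0.0, 0.8],
       [0.0, 0.0, 0.0]]   # unit-norm columns; column 2 overlaps column 0
y = [0.6, 0.0, 0.8, 0.0]  # y equals dictionary element 2 exactly
a = lca(Phi, y)           # only a[2] should remain active
```

The fixed point is the LASSO solution for this dictionary: coefficient 2 settles at 1 minus the threshold, and the overlapping coefficient 0 is correctly suppressed.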
36

Afshar, Yaser. "Parallel distributed-memory particle methods for acquisition-rate segmentation and uncertainty quantifications of large fluorescence microscopy images". Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-213157.

Abstract
Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at a high data rate. This creates a bottleneck in the computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit in the main memory of a single computer. Another issue is the information loss during image acquisition due to limitations of the optical imaging systems: analysis of the acquired images may find multiple solutions (or no solution) because of imaging noise, blurring, and other uncertainties introduced during acquisition. In this thesis, we address the processing-time and memory issues by developing a distributed parallel algorithm for the segmentation of large fluorescence-microscopy images. The method is based on the versatile Discrete Region Competition algorithm (Cardinale et al., 2012), which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels) but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data inspection and interactive experiments. Second, we estimate the segmentation uncertainty on large images that do not fit in the main memory of a single computer. We therefore develop a distributed parallel algorithm for efficient Markov chain Monte Carlo Discrete Region Sampling (Cardinale, 2013). The parallel algorithm provides a measure of segmentation uncertainty in a statistically unbiased way. It approximates the posterior probability densities over the high-dimensional space of segmentations around the previously found segmentation.
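The decomposition step of such a distributed segmentation can be sketched as splitting the image rows into contiguous blocks, each extended by halo (ghost) rows so that neighbouring tiles overlap where a contour might cross a boundary. A minimal 1-D sketch is below; the halo width and the tuple layout are illustrative, while the actual implementation decomposes in 3-D and exchanges halos over the network.

```python
def decompose(n_rows, n_parts, halo=1):
    """Split n_rows image rows into n_parts contiguous blocks and extend
    each by `halo` ghost rows on both sides.  Returns, per block, the
    tuple (halo_lo, own_lo, own_hi, halo_hi): the block owns rows
    [own_lo, own_hi) and also reads the ghost rows outside that range."""
    base, rem = divmod(n_rows, n_parts)
    bounds = []
    start = 0
    for p in range(n_parts):
        size = base + (1 if p < rem else 0)   # spread the remainder evenly
        lo = max(0, start - halo)
        hi = min(n_rows, start + size + halo)
        bounds.append((lo, start, start + size, hi))
        start += size
    return bounds

bounds = decompose(10, 3)   # 10 rows over 3 workers with 1 ghost row
```

Each worker segments its extended tile independently; the owned (non-halo) ranges partition the image exactly, which is what makes stitching the per-tile results back together well defined.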
37

Walter, Igor Alexandre. "Sistemas multiagentes em mercados de energia elétrica/". [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260769.

Abstract
Advisor: Fernando Antônio Campos Gomide
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: We suggest an evolutionary approach to designing interaction strategies for multiagent systems, focusing on strategies modeled as fuzzy rule-based systems. The aim is to learn models represented by evolving knowledge bases so that agents improve their performance in a competitive environment. In competitive situations, data for learning and tuning are rare, and rule bases must evolve jointly with the databases. We introduce an evolutionary algorithm whose operators use a variable-length chromosome and a hierarchical relationship among individuals established through fitness, in a scheme that successively explores and exploits the search space along the generations. The evolution of interaction strategies uncovers unknown and unexpected agent behaviors and allows a richer analysis of negotiation mechanisms and of their role as a coordination protocol. An application concerning an electricity market illustrates the effectiveness of the approach and allows the market to be simulated through evolutionary bidding strategies. The restructuring of power markets raises new challenges and opportunities, since there is no consensual market architecture. The evolution of the power industry's organization shows a lack of insight into the issues to be addressed and taken into account. Several authors have considered the available tools based on neoclassical economic theory a limited approach for analysing the effects of industry restructuring and for studying the behavior of economic agents in a restructured electricity market. We present Artificial Economy Multiagent System (AEMAS), a computational architecture inspired by Agent-based Computational Economics (ACE) that makes it possible to model, study and simulate a power market. We apply computational intelligence tools to conceive the market agents, and we expect the approach can be extended to other negotiation environments. A power market is a complex system populated by self-interested economic agents that interact.
We conclude that it is feasible to model and simulate these markets with a multiagent-system approach. The evolution of bidding strategies uncovers new and unexpected behaviors that help the designers analyse the negotiation mechanism and support a market participant's decision process.
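The variable-length-chromosome operators mentioned in the abstract can be sketched as follows: a chromosome is a list of rules of varying length, recombined by one-point crossover with independent cut points in each parent, so offspring lengths differ from both parents. The toy fitness function and all rates below are illustrative, not the market simulation of the thesis.

```python
import random

def evolve_rulebases(fitness, max_rules=8, pop=30, gens=40, seed=0):
    """Sketch of an evolutionary search over variable-length rule bases:
    a chromosome is a list of (threshold, bid_markup) rules; one-point
    crossover uses independent cut points in each parent, so lengths vary."""
    rng = random.Random(seed)

    def rand_rule():
        return (rng.uniform(0, 1), rng.uniform(0, 1))

    popu = [[rand_rule() for _ in range(rng.randint(1, max_rules))]
            for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=fitness, reverse=True)
        elite = popu[: pop // 3]                   # survivors, kept unchanged
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut_a, cut_b = rng.randint(0, len(a)), rng.randint(0, len(b))
            child = (a[:cut_a] + b[cut_b:])[:max_rules] or [rand_rule()]
            if rng.random() < 0.3:                 # point mutation
                i = rng.randrange(len(child))
                child[i] = rand_rule()
            children.append(child)
        popu = elite + children
    return max(popu, key=fitness)

# toy fitness: prefer rule bases whose average markup is near 0.7, shorter is better
def fit(ch):
    avg = sum(m for _, m in ch) / len(ch)
    return -abs(avg - 0.7) - 0.01 * len(ch)

best = evolve_rulebases(fit)
```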
Doctorate
Computer Engineering
Doctor in Electrical Engineering
38

Falade, Joannes Chiderlos. "Identification rapide d'empreintes digitales, robuste à la dissimulation d'identité". Thesis, Normandie, 2020. http://www.theses.fr/2020NORMC231.

Biometrics are increasingly used for identification purposes given the close relationship between a person and their identifier (such as a fingerprint). This thesis addresses the problem of identifying individuals from their fingerprints. The fingerprint is a biometric trait widely used for its efficiency, simplicity and modest acquisition cost. Fingerprint comparison algorithms are mature and can produce, in under 500 ms, a similarity score between a reference template (stored on an electronic passport or in a database) and an acquired template. However, it is becoming very important to determine an individual's identity against an entire population in a very short time (a few seconds). This is a major challenge given the size of biometric databases (covering a set of individuals on the scale of a country). For example, before issuing a new passport to an applicant, an identification search must be run against the country's biometric database to ensure the applicant does not already hold another passport under the same fingerprints (to avoid duplicates). The first part of this thesis therefore concerns the identification of individuals using fingerprints. In general, biometric systems handle verification tasks (1-to-1 comparison) and identification tasks (1-to-N). Our subject focuses on identification with N on the order of a million, representing the population of a country for example. In this work, we surveyed the state of the art in indexing and classification methods for fingerprint databases, favoring binary representations of fingerprints for indexing.
We first carried out a bibliographic study and wrote a state-of-the-art report on indexing techniques for fingerprint classification. We then explored the different representations of fingerprints and evaluated the tools available at Imprimerie Nationale (IN Groupe) for extracting the descriptors that represent a fingerprint. Building on these IN tools, we implemented four identification methods selected from the state of the art. A comparative study of these methods was carried out and improvements were proposed. We also proposed a new fingerprint indexing solution for the identification task that improves on existing results. The results are validated on medium-sized public databases, and we use the Sfinge software to scale up and fully validate the indexing strategies. A second aspect of this thesis concerns security. A person may indeed wish to conceal their identity and therefore do everything possible to defeat identification. With this in mind, an individual may provide a poor-quality fingerprint (a portion of the fingerprint, low contrast from pressing lightly on the sensor...) or an altered fingerprint (a deliberately damaged fingerprint, removal of the fingerprint with acid, scarification...). The second part of this thesis therefore deals with detecting dead fingers and fake fingers (silicone, 3D printing, latent prints) used by malicious people to attack the system. We proposed a new attack-detection solution based on statistical descriptors of the fingerprint.
We also set up three fake-finger detection pipelines using deep learning techniques.
Biometrics are increasingly used for identification purposes due to the close relationship between the person and their identifier (such as fingerprint). We focus this thesis on the issue of identifying individuals from their fingerprints. The fingerprint is a biometric data widely used for its efficiency, simplicity and low cost of acquisition. The fingerprint comparison algorithms are mature and it is possible to obtain in less than 500 ms a similarity score between a reference template (enrolled on an electronic passport or database) and an acquired template. However, it becomes very important to check the identity of an individual against an entire population in a very short time (a few seconds). This is an important issue due to the size of the biometric database (containing a set of individuals of the order of a country). Thus, the first part of the subject of this thesis concerns the identification of individuals using fingerprints. Our topic focuses on the identification with N being at the scale of a million and representing the population of a country for example. Then, we use classification and indexing methods to structure the biometric database and speed up the identification process. We have implemented four identification methods selected from the state of the art. A comparative study and improvements were proposed on these methods. We also proposed a new fingerprint indexing solution to perform the identification task which improves existing results. A second aspect of this thesis concerns security. A person may want to conceal their identity and therefore do everything possible to defeat the identification. With this in mind, an individual may provide a poor quality fingerprint (fingerprint portion, low contrast by lightly pressing the sensor...) or provide an altered fingerprint (impression intentionally damaged, removal of the impression with acid, scarification...). 
The second part of this thesis therefore deals with detecting dead fingers and spoof fingers (silicone, 3D-printed fingerprints, latent fingerprints) used by malicious people to attack the system. In general, these methods use machine learning and deep learning techniques. Secondly, we proposed a new presentation attack detection solution based on statistical descriptors of the fingerprint. Thirdly, we also built three presentation attack detection workflows for fake fingerprints using deep learning. Of these three deep solutions, two come from the state of the art and the third is an improvement that we propose. Our solutions are tested on the LivDet competition databases for presentation attack detection.
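The binary-template indexing idea favored above can be sketched as follows: store fixed-length binary codes, bucket them by a short prefix, and rank candidates within a bucket by Hamming distance. This is a minimal sketch over invented 32-bit codes; production systems use multi-index hashing so near matches falling into neighboring buckets are not missed.

```python
def hamming(a, b):
    # Bit distance between two fixed-length binary templates.
    return bin(a ^ b).count("1")

def build_index(templates):
    # Bucket 32-bit codes by their top byte so a lookup scans one bucket
    # instead of the whole database.
    index = {}
    for ident, code in templates.items():
        index.setdefault(code >> 24, []).append((ident, code))
    return index

def identify(index, probe, radius=8):
    # 1-N identification: return the closest enrolled identity within radius,
    # or None if no candidate is close enough.
    best = None
    for ident, code in index.get(probe >> 24, []):
        d = hamming(code, probe)
        if d <= radius and (best is None or d < best[1]):
            best = (ident, d)
    return best
```

The prefix bucketing is what turns a linear 1-N scan into a search over a small fraction of the database, which is the point of indexing at country scale.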
39

Liang, Han-Wen and 梁漢文. "Preclassified competitive-learning algorithms for vector quantization". Thesis, 1993. http://ndltd.ncl.edu.tw/handle/59776420137974893958.

40

"A probabilistic cooperative-competitive hierarchical search model". 1998. http://library.cuhk.edu.hk/record=b5889641.

by Wong Yin Bun, Terence.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1998.
Includes bibliographical references (leaves 99-104).
Abstract also in Chinese.
List of Figures --- p.ix
List of Tables --- p.xi
Chapter I --- Preliminary --- p.1
Chapter 1 --- Introduction --- p.2
Chapter 1.1 --- Thesis themes --- p.4
Chapter 1.1.1 --- Dynamical view of landscape --- p.4
Chapter 1.1.2 --- Bottom-up self-feedback algorithm with memory --- p.4
Chapter 1.1.3 --- Cooperation and competition --- p.5
Chapter 1.1.4 --- Contributions to genetic algorithms --- p.5
Chapter 1.2 --- Thesis outline --- p.5
Chapter 1.3 --- Contribution at a glance --- p.6
Chapter 1.3.1 --- Problem --- p.6
Chapter 1.3.2 --- Approach --- p.7
Chapter 1.3.3 --- Contributions --- p.7
Chapter 2 --- Background --- p.8
Chapter 2.1 --- Iterative stochastic searching algorithms --- p.8
Chapter 2.1.1 --- The algorithm --- p.8
Chapter 2.1.2 --- Stochasticity --- p.10
Chapter 2.2 --- Fitness landscapes and its relation to neighborhood --- p.12
Chapter 2.2.1 --- Direct searching --- p.12
Chapter 2.2.2 --- Exploration and exploitation --- p.12
Chapter 2.2.3 --- Fitness landscapes --- p.13
Chapter 2.2.4 --- Neighborhood --- p.16
Chapter 2.3 --- Species formation methods --- p.17
Chapter 2.3.1 --- Crowding methods --- p.17
Chapter 2.3.2 --- Deterministic crowding --- p.18
Chapter 2.3.3 --- Sharing method --- p.18
Chapter 2.3.4 --- Dynamic niching --- p.19
Chapter 2.4 --- Summary --- p.21
Chapter II --- Probabilistic Binary Hierarchical Search --- p.22
Chapter 3 --- The basic algorithm --- p.23
Chapter 3.1 --- Introduction --- p.23
Chapter 3.2 --- Search space reduction with binary hierarchy --- p.25
Chapter 3.3 --- Search space modeling --- p.26
Chapter 3.4 --- The information processing cycle --- p.29
Chapter 3.4.1 --- Local searching agents --- p.29
Chapter 3.4.2 --- Global environment --- p.30
Chapter 3.4.3 --- Cooperative refinement and feedback --- p.33
Chapter 3.5 --- Enhancement features --- p.34
Chapter 3.5.1 --- Fitness scaling --- p.34
Chapter 3.5.2 --- Elitism --- p.35
Chapter 3.6 --- Illustration of the algorithm behavior --- p.36
Chapter 3.6.1 --- Test problem --- p.36
Chapter 3.6.2 --- Performance study --- p.38
Chapter 3.6.3 --- Benchmark tests --- p.45
Chapter 3.7 --- Discussion and analysis --- p.45
Chapter 3.7.1 --- Hierarchy of partitions --- p.45
Chapter 3.7.2 --- Availability of global information --- p.47
Chapter 3.7.3 --- Adaptation --- p.47
Chapter 3.8 --- Summary --- p.48
Chapter III --- Cooperation and Competition --- p.50
Chapter 4 --- High-dimensionality --- p.51
Chapter 4.1 --- Introduction --- p.51
Chapter 4.1.1 --- The challenge of high-dimensionality --- p.51
Chapter 4.1.2 --- Cooperation - A solution to high-dimensionality --- p.52
Chapter 4.2 --- Probabilistic Cooperative Binary Hierarchical Search --- p.52
Chapter 4.2.1 --- Decoupling --- p.52
Chapter 4.2.2 --- Cooperative fitness --- p.53
Chapter 4.2.3 --- The cooperative model --- p.54
Chapter 4.3 --- Empirical performance study --- p.56
Chapter 4.3.1 --- pBHS versus pcBHS --- p.56
Chapter 4.3.2 --- Scaling behavior of pcBHS --- p.60
Chapter 4.3.3 --- Benchmark test --- p.62
Chapter 4.4 --- Summary --- p.63
Chapter 5 --- Deception --- p.65
Chapter 5.1 --- Introduction --- p.65
Chapter 5.1.1 --- The challenge of deceptiveness --- p.65
Chapter 5.1.2 --- Competition: A solution to deception --- p.67
Chapter 5.2 --- Probabilistic cooperative-competitive binary hierarchical search --- p.67
Chapter 5.2.1 --- Overview --- p.68
Chapter 5.2.2 --- The cooperative-competitive model --- p.68
Chapter 5.3 --- Empirical performance study --- p.70
Chapter 5.3.1 --- Goldberg's deceptive function --- p.70
Chapter 5.3.2 --- Shekel family - S5, S7, and S10 --- p.73
Chapter 5.4 --- Summary --- p.74
Chapter IV --- Finale --- p.78
Chapter 6 --- A new genetic operator --- p.79
Chapter 6.1 --- Introduction --- p.79
Chapter 6.2 --- Variants of the integration --- p.80
Chapter 6.2.1 --- Fixed-fraction-of-all --- p.83
Chapter 6.2.2 --- Fixed-fraction-of-best --- p.83
Chapter 6.2.3 --- Best-from-both --- p.84
Chapter 6.3 --- Empirical performance study --- p.84
Chapter 6.4 --- Summary --- p.88
Chapter 7 --- Conclusion and Future work --- p.89
Chapter A --- The pBHS Algorithm --- p.91
Chapter A.1 --- Overview --- p.91
Chapter A.2 --- Details --- p.91
Chapter B --- Test problems --- p.96
Bibliography --- p.99
41

Kuo, Cheng-Ming and 郭政銘. "Competitive Analysis of Dynamic Dictionaries and Disk Scheduling Algorithms". Thesis, 2000. http://ndltd.ncl.edu.tw/handle/33985293047201216252.

Master's thesis
National Taiwan University
Graduate Institute of Electrical Engineering
88
In this thesis, we study two on-line problems. The first is the on-line dictionary problem. We provide an implementation of dynamic dictionaries based on binomial trees. The move-to-front heuristic is proven to be 3-competitive for the tree update problem on our implementation. We also prove that the lower bound for on-line algorithms on this implementation is (2k+2)/(k+2), where k is the order of the tree. Comparing dictionaries based on the linear list, the splay tree, and our model, a dictionary based on the binomial tree should be much more efficient than one based on the linear list. In addition, the correspondence between binary-digit represented keys and the structure of binomial trees naturally supports highly convenient and efficient searching operations. These facts strongly support our adoption of the binomial tree as the dictionary structure. In the second topic of the thesis, the on-line disk scheduling problem is discussed. We consider three on-line disk scheduling algorithms: FCFS, SSTF, and LOOK. We prove lower bounds on the competitive ratios of FCFS, SSTF, and LOOK, and we give a competitive ratio of LOOK that matches its lower bound. As our results show, the performance of LOOK, in the competitive sense, is better than those of FCFS and SSTF.
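For reference, the LOOK policy analyzed above can be sketched in a few lines: sweep in one direction through all pending requests, then reverse, without jumping to the disk edges. This toy version serves a static batch of requests; the thesis studies the harder on-line setting where requests keep arriving.

```python
def look(head, requests):
    # LOOK: serve all pending requests at or beyond the head while sweeping
    # upward, then reverse and sweep downward through the rest.
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    order, pos, total = [], head, 0
    for r in up + down:
        total += abs(r - pos)   # head movement for this request
        pos = r
        order.append(r)
    return order, total
```

With the head at cylinder 50 and pending requests {10, 60, 40, 90}, LOOK serves 60, 90, 40, 10 for a total head travel of 120 cylinders.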
42

Chen, Wen-Hsien and 陳文憲. "Improved Imperialist Competitive Algorithms for Compensatory Neural Fuzzy Systems". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/6ena88.

Master's thesis
National Formosa University
Institute of Electrical Engineering
102
This dissertation proposes improved imperialist competitive algorithms (IICA) for a compensatory neural fuzzy system (CNFS) model. The proposed IICA comprise the bare-bones imperialist competitive algorithm (BBICA) and the united-based imperialist competitive algorithm (UICA). The dissertation consists of two major parts. In the first part, the proposed BBICA reduces the parameter setting of the original ICA. BBICA adopts a Gaussian distribution to improve the assimilation phase of ICA, preventing it from falling into local optima and boosting its global explorative ability. In the second part, the UICA method is presented to balance local and global exploration of the search space effectively. UICA introduces the velocity and position update formulas of particle swarm optimization into the assimilation phase of the imperialist competitive algorithm. The assimilation policy therefore consists of three major parts. First, each colony searches for the optimal position, starting from previously visited regions and continuing over the entire current space. Second, each colony moves toward the imperialist to which it belongs. Third, all of the strongest empires are moved forward. Finally, the two proposed algorithms for the CNFS model are applied to various prediction problems. The experimental results demonstrate the effectiveness of the proposed algorithms.
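A minimal sketch of the assimilation step at the heart of an imperialist competitive algorithm, on a toy sphere function: each colony moves a random fraction beta of the way toward its imperialist. The round-robin empire assignment and all parameter values are simplifications for illustration, not the BBICA or UICA variants of the dissertation.

```python
import random

def ica(cost, dim=2, n_countries=30, n_imperialists=3, iters=100, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_countries)]
    for _ in range(iters):
        pop.sort(key=cost)
        imperialists, colonies = pop[:n_imperialists], pop[n_imperialists:]
        moved = []
        for j, colony in enumerate(colonies):
            imp = imperialists[j % n_imperialists]  # crude round-robin empire assignment
            beta = 2.0 * rng.random()               # assimilation: move toward imperialist
            moved.append([c + beta * (i - c) for c, i in zip(colony, imp)])
        # Imperialists survive unchanged; a colony that overtakes one
        # becomes an imperialist after the next sort (the competition step).
        pop = imperialists + moved
    return min(pop, key=cost)
```

Because beta can exceed 1, colonies can overshoot their imperialist, which is what lets the population keep improving instead of merely collapsing onto the current best.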
43

Matos, Catarina Corrêa Mendes Correia de. "Algorithms: the end of traditional competitive markets? the case of partneo". Master's thesis, 2019. http://hdl.handle.net/10362/68140.

This thesis analyses how the development of machine learning and pricing algorithms is affecting competition between undertakings by facilitating collusive behaviour, and how this issue should be addressed by competition authorities. Combining the insights from the existing literature on the topic with an analysis of the recent case of Partneo, the findings suggest that algorithms are indeed changing the competitive landscape. Although current EU law can deal with some cases, others fall out of its reach. The boundaries of competition law are strongly challenged in the case of Partneo, in which carmakers were able to increase their revenues by over one billion dollars by using a sophisticated pricing software. The use of this algorithm led to a parallel price increase for spare car parts on the aftermarket, which significantly harms locked-in customers. In addition, the use of Partneo also affects competition on the primary market, as it creates incentives for dominant firms to decrease prices to capture more market share.
44

Megow, Nicole and Andreas S. Schulz. "Scheduling to Minimize Average Completion Time Revisited: Deterministic On-line Algorithms". 2004. http://hdl.handle.net/1721.1/4048.

We consider the scheduling problem of minimizing the average weighted completion time on identical parallel machines when jobs are arriving over time. For both the preemptive and the nonpreemptive setting, we show that straightforward extensions of Smith's ratio rule yield smaller competitive ratios than the previously best-known deterministic on-line algorithms, which are (4+epsilon)-competitive in either case. Our preemptive algorithm is 2-competitive, which actually meets the competitive ratio of the currently best randomized on-line algorithm for this scenario. Our nonpreemptive algorithm has a competitive ratio of 3.28. Both results are characterized by a surprisingly simple analysis; moreover, the preemptive algorithm also works in the less clairvoyant environment in which only the ratio of weight to processing time of a job becomes known at its release date, but neither its actual weight nor its processing time. In the corresponding nonpreemptive situation, every on-line algorithm has an unbounded competitive ratio.
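Smith's ratio rule, which the algorithms above extend, can be illustrated on a single machine: at every moment, preemptively run the released, unfinished job with the largest weight-to-processing-time ratio. This is a hedged single-machine simulation for intuition only, not the parallel-machine algorithm of the paper.

```python
def preemptive_wspt(jobs):
    # jobs: list of (release, size, weight) tuples. At every moment run the
    # released, unfinished job with the largest weight/size ratio.
    # Returns the total weighted completion time.
    jobs = sorted(jobs)                    # order by release time
    t, i, done, total = 0.0, 0, 0, 0.0
    remaining = {}                         # job index -> processing time left
    while done < len(jobs):
        while i < len(jobs) and jobs[i][0] <= t:
            remaining[i] = jobs[i][1]      # release newly arrived jobs
            i += 1
        if not remaining:
            t = jobs[i][0]                 # idle until the next release
            continue
        j = max(remaining, key=lambda k: jobs[k][2] / jobs[k][1])
        nxt = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(remaining[j], nxt - t)   # run j until done or next arrival
        t += run
        remaining[j] -= run
        if remaining[j] == 0:
            total += jobs[j][2] * t        # weight times completion time
            del remaining[j]
            done += 1
    return total
```

For example, with a long light job released at time 0 and a short heavy job released at time 1, the short job preempts the long one, finishing at time 2, and the long job resumes and finishes at time 5.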
45

Schäfer, Guido [Verfasser]. "Worst case instances are fragile : average case and smoothed competitive analysis of algorithms / Guido Schäfer". 2004. http://d-nb.info/972316930/34.

46

Bento, Pedro Miguel Rocha. "Hybrid artificial intelligence algorithms for short-term load and price forecasting in competitive electric markets". Master's thesis, 2017. http://hdl.handle.net/10400.6/7851.

The liberalization and deregulation of electricity markets forced the various participants to accommodate several challenges, including a considerable accumulation of new generation capacity from renewable sources (fundamentally wind energy), the unpredictability associated with these new forms of generation, and new consumption patterns, all contributing to further electricity price volatility (e.g. in the Iberian market). Given the competitive framework in which market participants operate, the existence of efficient computational forecasting techniques is a distinctive factor. Based on these forecasts, a suitable bidding strategy and effective planning of generation system operation can be achieved; together with improved exploitation of the installed transmission capacity, this maximizes profits and contributes to better utilization of energy resources. This dissertation presents a new hybrid method for load and electricity price forecasting over a day-ahead time horizon. The optimization scheme in this method combines the efforts of different techniques, notably artificial neural networks, several optimization algorithms, and the wavelet transform. The method was validated on different real case studies. The subsequent accuracy comparison with results published in reference journals confirmed the suitability of the proposed hybrid method.
The liberalization and deregulation of electricity markets forced the various participants to accommodate a series of challenges, among them: the considerable accumulation of new generation capacity of renewable origin (fundamentally wind energy), the unpredictability associated with these new forms of generation, and new consumption patterns, resulting in increased volatility of electricity prices (the Iberian market being one example). Given the competitive framework in which market agents operate, the existence of efficient computational forecasting techniques is a differentiating factor. It is on the basis of these forecasts that bidding strategies are defined and effective operational planning of generation systems is carried out, which, together with better use of the installed transmission capacity, makes it possible to maximize profits while making better use of energy resources. This dissertation presents a new hybrid method for forecasting electricity load and prices over a 24-hour horizon. The method is based on an optimization scheme that combines the efforts of different techniques, namely artificial neural networks, several optimization algorithms, and the wavelet transform. The method was validated on different real case studies. Subsequent comparison with results already published in reference journals revealed an excellent performance of the proposed hybrid method.
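The wavelet preprocessing such hybrid forecasters rely on can be illustrated with one level of the Haar transform, which splits a load or price series into a smooth approximation and a detail component that can be modeled separately. This is a simplified stand-in for the transform used in the dissertation.

```python
def haar_step(signal):
    # One level of the Haar wavelet transform: pairwise averages (smooth
    # approximation) and pairwise half-differences (detail component).
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def haar_inverse(approx, detail):
    # Exact reconstruction of the original series from the two components.
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out
```

In a forecasting pipeline, each component would be predicted separately (e.g. by a neural network) and the forecasts recombined with the inverse transform.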
47

Bender, Marco. "Randomized Approximation and Online Algorithms for Assignment Problems". Doctoral thesis, 2015. http://hdl.handle.net/11858/00-1735-0000-0022-6016-2.

48

Katti, Anil Kumar. "Competitive cache replacement strategies for a shared cache". Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-05-3584.

We consider cache replacement algorithms at a shared cache in a multicore system which receives an arbitrary interleaving of requests from processes that have full knowledge about their individual request sequences. We establish tight bounds on the competitive ratio of deterministic and randomized cache replacement strategies when processes share memory blocks. Our main result for this case is a deterministic algorithm called GLOBAL-MAXIMA which is optimum up to a constant factor when processes share memory blocks. Our framework is a generalization of the application controlled caching framework in which processes access disjoint sets of memory blocks. We also present a deterministic algorithm called RR-PROC-MARK which exactly matches the lower bound on the competitive ratio of deterministic cache replacement algorithms when processes access disjoint sets of memory blocks. We extend our results to multiple levels of caches and prove that an exclusive cache is better than both inclusive and non-inclusive caches; this validates the experimental findings in the literature. Our results could be applied to shared caches in multicore systems in which processes work together on multithreaded computations like Gaussian elimination paradigm, fast Fourier transform, matrix multiplication, etc. In these computations, processes have full knowledge about their individual request sequences and can share memory blocks.
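A classic building block behind competitive cache replacement is the marking scheme: pages touched in the current phase are marked, and only unmarked pages may be evicted. The sketch below is the standard randomized marking algorithm for a single request sequence, offered for intuition only; it is not the GLOBAL-MAXIMA or RR-PROC-MARK strategies of the thesis.

```python
import random

class MarkingCache:
    # Randomized marking: pages touched in the current phase are marked;
    # on a fault, evict a uniformly random unmarked page. When every cached
    # page is marked, a new phase begins and all marks are cleared.
    def __init__(self, k, seed=0):
        self.k, self.cache, self.marked = k, set(), set()
        self.rng = random.Random(seed)
        self.faults = 0

    def access(self, page):
        if page not in self.cache:
            self.faults += 1
            if len(self.cache) >= self.k:
                unmarked = self.cache - self.marked
                if not unmarked:            # everything marked: new phase
                    self.marked.clear()
                    unmarked = set(self.cache)
                self.cache.discard(self.rng.choice(sorted(unmarked)))
            self.cache.add(page)
        self.marked.add(page)               # mark on every access
```

Marking never evicts a page requested in the current phase, which is what bounds the number of faults per phase and yields the well-known O(log k) competitive ratio of the randomized version.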
49

Trotsyuk, Roman Vladimirovich. "The enterprise DNA : static and dynamic digital representation of organizations". Master's thesis, 2019. http://hdl.handle.net/10362/70662.

Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics
The contemporary business environment has become more complex and fast-changing through the processes of business globalization, technical progress, competition, and the accumulation and sharing of knowledge. Companies arise, grow, exchange ideas, merge, interact with each other and adapt to the market environment, and, finally, some of them disappear. Their behavior is determined by sophisticated laws that cannot be discovered in isolation, because the existence of each entity depends on its specific internal properties and on the interplay with other participants in the market. These complex interactions and the enterprise life cycle resemble the life of species in nature, and thus mimicking biological entities can serve as a modelling tool for better understanding how a company works and what makes it successful. The broad purpose of this work is to create a foundation for applying Artificial Life simulation to business analysis. Artificial Life is a concept that allows mimicking biological evolution and the behavior of living creatures to model complex systems, forming a specific environment with interacting and evolving agents. Thus, Artificial Life can be applied to the analysis of an enterprise on a competitive market to study its success factors. The possible combinations of factors and their values are company-specific and represent properties that affect the performance of the organization. Enterprises can exchange their characteristics with others by means of staff exchanges, consulting, mergers, etc., acquiring best practices and becoming better adapted to specific challenges. The main goal of this work is the research and suggestion of such characteristics and their representation, which, by analogy with biology, constitute the Enterprise DNA. In this thesis, a digital representation of the Enterprise DNA inspired by the biological notion of a living organism's DNA is proposed. As the foundation of the company's important features, the Enterprise Architecture concept was applied.
Although this discipline was previously used for information technology architecture, it has evolved into a broader science and become a tool for describing the business architecture of a company. The Zachman Enterprise Architecture framework is used as the basis of the enterprise representation. With this tool, artifacts for the phenotype representation are proposed, and their digital XML representation is then derived. A DNA digital representation model (genotype) for the artifacts is proposed, which can be used to evaluate the fitness of a specific company to the competitive environment of a specific market. This representation can be used by a Genetic Algorithm for a future implementation of an Artificial Life simulation on a real company's data. Evaluation by experts showed the promise of applying ALife modelling to solving business problems. As a result, some ideas for model improvement were identified.
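The genotype idea can be sketched as a fixed layout of genes, one per architecture artifact, that a genetic algorithm could recombine. All gene names, the demand vector, and the fitness function below are invented for illustration; they are not the Zachman-based artifacts of the thesis.

```python
# Hypothetical "enterprise DNA": each gene is an integer level for one
# invented architecture artifact.
GENES = ["process_maturity", "data_integration", "market_reach", "automation"]

def decode(genome):
    # Genotype -> phenotype: map gene values onto named enterprise artifacts.
    return dict(zip(GENES, genome))

def fitness(genome, demand):
    # Toy fitness: how closely the trait levels match what the market demands.
    return -sum(abs(g - d) for g, d in zip(genome, demand))

def crossover(a, b, cut=2):
    # Companies exchanging practices, modeled as one-point crossover.
    return a[:cut] + b[cut:]
```

A fixed gene layout is what makes crossover meaningful: position i always encodes the same artifact, so two companies can swap coherent blocks of traits.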
50

Park, Jisun. "From partner selection to collaboration in information sharing multi-agent systems". 2006. http://hdl.handle.net/2152/7549.

This research advances distributed information sharing by equipping nodes (e.g., software agents) in a distributed network with (1) partner selection algorithms in cooperative environments, and (2) strategies for providing and requesting information in competitive environments. In cooperative environments, information providers are willing to provide requested information, but information consumers must consider uncertainty in the quality of provided information when selecting appropriate information providers. In competitive environments, if a self-interested agent can be an information consumer and provider at the same time, agents need to determine the best ways to request and provide information so that the information acquisition utility can be maximized. This research defines a set of metrics for evaluating information acquisition utility, and presents a game-theoretic approach for determining the best information sharing strategies based on stochastic games. The results show how agents build collaborative relationships with appropriate agents and how the information acquisition utility is affected by those relationships.
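The tension between providing and requesting information in a competitive environment can be illustrated with a toy one-shot sharing game and iterated best responses. The payoff numbers are invented and the dynamics are deliberately simple; the thesis uses richer stochastic games.

```python
# Toy two-agent information-sharing game (payoffs invented): each agent
# decides to Share ("S") or Withhold ("W") information.
PAYOFF = {
    ("S", "S"): (3, 3), ("S", "W"): (0, 4),
    ("W", "S"): (4, 0), ("W", "W"): (1, 1),
}

def best_response(opponent, as_row=True):
    # Pick the action maximizing this agent's own payoff against `opponent`.
    if as_row:
        return max(["S", "W"], key=lambda a: PAYOFF[(a, opponent)][0])
    return max(["S", "W"], key=lambda a: PAYOFF[(opponent, a)][1])

def iterate(start=("S", "S"), rounds=10):
    # Simultaneous best-response dynamics: with these payoffs both agents
    # end up withholding, which is why repeated-game (stochastic-game)
    # strategies are needed to sustain mutually beneficial sharing.
    a, b = start
    for _ in range(rounds):
        a, b = best_response(b, True), best_response(a, False)
    return a, b
```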