Dissertations / Theses on the topic 'Approximation algorithms; resource allocation; optimization'

Consult the top 30 dissertations / theses for your research on the topic 'Approximation algorithms; resource allocation; optimization.'


1

Chakrabarty, Deeparnab. "Algorithmic aspects of connectivity, allocation and design problems." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24659.

Abstract:
Thesis (Ph.D.)--Computing, Georgia Institute of Technology, 2008.
Committee Chair: Vazirani, Vijay; Committee Member: Cook, William; Committee Member: Kalai, Adam; Committee Member: Tetali, Prasad; Committee Member: Thomas, Robin
2

Tripathi, Pushkar. "Allocation problems with partial information." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44789.

Abstract:
Allocation problems have been central to the development of the theory of algorithms and also find applications in several realms of computer science and economics. In this thesis we initiate a systematic study of these problems in situations with limited information. Towards this end we explore several modes by which data may be obfuscated from the algorithm. We begin by investigating temporal constraints where data is revealed to the algorithm over time. Concretely, we consider the online bipartite matching problem in the unknown distribution model and present the first algorithm that breaches the 1-1/e barrier for this problem. Next we study issues arising from data acquisition costs that are prevalent in ad-systems and kidney exchanges. Motivated by these constraints we introduce the query-commit model and present constant factor algorithms for the maximum matching and the adwords problem in this model. Finally we assess the approximability of several classical allocation problems with multiple agents having complex non-linear cost functions. This presents an additional obstacle since the support for the cost functions may be extremely large entailing oracle access. We show tight information theoretic lower bounds for the general class of submodular functions and also extend these results to get lower bounds for a subclass of succinctly representable non-linear cost functions.
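For context, the 1-1/e barrier mentioned in this abstract is the guarantee of the classical Ranking algorithm of Karp, Vazirani and Vazirani for adversarial online bipartite matching. A minimal sketch on a hypothetical toy instance (the node names and arrival order are illustrative, not taken from the thesis):

```python
import random

def ranking_matching(offline_nodes, online_arrivals, seed=0):
    """Karp-Vazirani-Vazirani Ranking: fix one random permutation (rank)
    of the offline side; match each arriving online node to its
    lowest-ranked free offline neighbour."""
    rng = random.Random(seed)
    order = rng.sample(offline_nodes, len(offline_nodes))
    rank = {u: r for r, u in enumerate(order)}
    matched = {}                                   # offline node -> online node
    for v, neighbours in online_arrivals:
        free = [u for u in neighbours if u not in matched]
        if free:
            matched[min(free, key=rank.__getitem__)] = v
    return matched

# toy instance: offline ads {a, b, c}; online impressions arrive one by one
arrivals = [("v1", ["a", "b"]), ("v2", ["a"]), ("v3", ["b", "c"])]
m = ranking_matching(["a", "b", "c"], arrivals)
print(m)
```

Over the random choice of permutation, Ranking matches a 1-1/e fraction of the optimum in expectation; the thesis result cited above concerns beating this bound in the unknown-distribution model.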
3

Kibria, Mirza Golam. "Radio Resource Allocation Optimization for Cellular Wireless Networks." 京都大学 (Kyoto University), 2014. http://hdl.handle.net/2433/189689.

4

Bayrak, Ahmet Engin. "Optimization Algorithms For Resource Allocation Problem Of Air Tasking Order Preparation." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612325/index.pdf.

Abstract:
In recent years, evolving technology has provided military forces with a wide range of resources. However, that breadth has also made resource management in combat missions more difficult. An Air Tasking Order (ATO) is prepared for the various missions of an air campaign so that objectives are reached through optimized resource management. Given the combinatorial nature of military planning, with dynamic objectives and numerous constraints, computer support has become indispensable for optimizing resource management in air force operations. In this thesis, we study different optimization approaches to the resource allocation problem of ATO preparation and analyze their performance. We propose a genetic algorithm formulation with customized encoding, crossover and fitness-calculation mechanisms that exploit domain knowledge. An integer linear programming formulation of the problem is also developed and used to analyze the effectiveness and efficiency of the genetic algorithm formulations.
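A genetic algorithm of the kind this abstract describes can be sketched as follows; the encoding, operators and toy value matrix are illustrative assumptions, not the thesis's actual formulation:

```python
import random

def genetic_assign(values, n_missions, pop_size=30, gens=60, seed=1):
    """Toy genetic algorithm for a resource-to-mission assignment.
    A chromosome assigns each resource a mission index (-1 = unused);
    fitness credits each mission with its best assigned resource."""
    rng = random.Random(seed)
    n_res = len(values)

    def fitness(chrom):
        total = 0.0
        for mission in range(n_missions):
            scores = [values[r][mission] for r in range(n_res) if chrom[r] == mission]
            if scores:
                total += max(scores)
        return total

    popn = [tuple(rng.randrange(-1, n_missions) for _ in range(n_res))
            for _ in range(pop_size)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        elite = popn[: pop_size // 2]          # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_res)      # one-point crossover
            child = list(a[:cut] + b[cut:])
            if rng.random() < 0.2:             # point mutation
                child[rng.randrange(n_res)] = rng.randrange(-1, n_missions)
            children.append(tuple(child))
        popn = elite + children
    best = max(popn, key=fitness)
    return best, fitness(best)

# hypothetical instance: values[r][m] = value of resource r on mission m
best, score = genetic_assign([[5, 1], [2, 6], [4, 4]], n_missions=2)
print(best, score)
```

An exact (here, brute-force or ILP) solution of the same small instance is what allows the kind of effectiveness comparison the abstract mentions.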
5

Salazar, Javier. "Resource allocation optimization algorithms for infrastructure as a service in cloud computing." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB074.

Abstract:
The cloud architecture offers on-demand computing, storage and applications. Within this structure, Cloud Providers (CPs) not only administer infrastructure resources but also directly benefit from leasing them. In this thesis, we propose three optimization models to help CPs reduce the costs incurred in the resource allocation process when serving users' demands. Implementing the proposed models will not only increase the CP's revenue but will also enhance the quality of the services offered, benefiting all parties. We focus on Infrastructure as a Service (IaaS) resources, which constitute the physical infrastructure of the cloud and are contained in datacenters (DCs). Following existing research in DC design and cloud computing applications, we propose deploying smaller DCs (Edge DCs) close to end users as an alternative to large centralized DCs. We use the column generation optimization technique to handle large-scale optimization models efficiently. The proposed formulation jointly optimizes the communications and information technology resources in a single phase to serve IaaS requests. Based on this formulation, we also propose a second model that adds QoS guarantees under the same IaaS resource allocation perspective, providing solutions to different aspects of the problem, such as cost and delay reduction, while offering several levels of service. Additionally, we consider the multimedia cloud computing scenario; applying the Edge DC architecture to it yields the Multimedia Edge Cloud (MEC) architecture. In this context we propose a resource allocation approach that helps place these DCs so as to reduce communication-related problems such as jitter and latency. We also propose the use of optical fiber network technologies to enhance communication between DCs.
Several studies have proposed new methods to improve data transmission and performance. Here, we implement Wavelength Division Multiplexing (WDM) to strengthen link usage and light-paths, grouping different signals on the same wavelength. Using a realistic simulation environment, we evaluate the efficiency of the proposed approaches in a scenario designed specifically for the DCs, comparing them against different benchmarks and simulating the effect of the optical formulation on network performance. The numerical results show that, by using the proposed models, a CP can efficiently reduce allocation costs while maintaining satisfactory request acceptance and QoS ratios.
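Column generation, the technique named in this abstract, is easiest to illustrate on the classic one-dimensional cutting-stock problem. The sketch below (a made-up instance, using SciPy's LP solver) alternates between a restricted master LP and a knapsack pricing subproblem; the same master/pricing loop underlies large-scale resource allocation formulations:

```python
from scipy.optimize import linprog

def price(duals, sizes, roll):
    """Pricing subproblem: most valuable cutting pattern for one roll
    (an integer knapsack solved by dynamic programming)."""
    best = [0.0] * (roll + 1)
    take = [None] * (roll + 1)
    for cap in range(1, roll + 1):
        best[cap], take[cap] = best[cap - 1], take[cap - 1]
        for i, s in enumerate(sizes):
            if s <= cap and best[cap - s] + duals[i] > best[cap]:
                best[cap], take[cap] = best[cap - s] + duals[i], (i, cap - s)
    pattern, cap = [0] * len(sizes), roll
    while take[cap] is not None:
        i, cap = take[cap]
        pattern[i] += 1
    return pattern, best[roll]

def column_generation(sizes, demand, roll, max_iter=50):
    # initial restricted master: one single-size pattern per item type
    cols = [[roll // sizes[j] if i == j else 0 for i in range(len(sizes))]
            for j in range(len(sizes))]
    res = None
    for _ in range(max_iter):
        # master LP: min sum x_j  s.t.  sum_j cols[j][i] * x_j >= demand[i]
        res = linprog([1.0] * len(cols),
                      A_ub=[[-c[i] for c in cols] for i in range(len(sizes))],
                      b_ub=[-d for d in demand], method="highs")
        duals = [-m for m in res.ineqlin.marginals]
        pattern, value = price(duals, sizes, roll)
        if value <= 1 + 1e-9:        # no pattern with negative reduced cost
            break
        cols.append(pattern)
    return res.fun

# hypothetical instance: rolls of length 10; demand for
# four items of length 3 and three items of length 4
obj = column_generation(sizes=[3, 4], demand=[4, 3], roll=10)
print(obj)   # LP relaxation optimum: 2.5 rolls
```

The point of the decomposition is that the master never enumerates all patterns (columns); the pricing problem generates only those that can improve the current solution.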
6

Ali, Syed Hussain. "Cross layer scheduling and resource allocation algorithms for cellular wireless networks." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2722.

Abstract:
This thesis considers the problem of cross-layer scheduling and radio resource allocation for multiple users in the downlink of time-slotted and frequency-slotted cellular data networks. For these networks, opportunistic scheduling algorithms improve system performance by exploiting time variations of the radio channel. Within the broader framework of opportunistic scheduling, this thesis solves three distinct problems and proposes efficient and scalable solutions for them. First, we present novel optimal and approximate opportunistic scheduling algorithms that combine channel fluctuation and user mobility information in their decision rules. The algorithms use dynamic fairness constraints that adapt to user mobility. The optimal algorithm is an offline algorithm that precomputes constraint values according to a known mobility model. The approximate algorithm is an online algorithm that relies on predicting users' future locations over time. We show that the use of mobility information increases channel capacity, and we provide analytical bounds on the performance of the approximate algorithm. Second, this thesis presents a new opportunistic scheduling solution that maximizes aggregate user performance subject to minimum and maximum performance constraints. By bounding the performance experienced by individual users sharing a common radio downlink, the system operator gains better control of radio resource allocation and of service differentiation among classes of users. The proposed solution outperforms existing solutions under practical channel conditions. Finally, we present a dynamic subcarrier allocation solution for fractional frequency reuse in multicell orthogonal frequency division multiple access systems.
We formulate the subcarrier allocation as an equivalent set partitioning problem and then propose an efficient hierarchical solution which first partitions subcarriers into groups and then schedules subcarriers opportunistically to users. Simulation results for the three solutions illustrate the usefulness of the proposed schemes.
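The opportunistic scheduling idea described above is commonly realized by the proportional-fair rule; a minimal sketch with a synthetic two-user rate trace (the trace and smoothing factor are illustrative assumptions, not the thesis's algorithms):

```python
import random

def proportional_fair(rate_trace, alpha=0.1):
    """Proportional-fair scheduler: each slot, serve the user with the
    largest instantaneous-rate to average-throughput ratio, then update
    every user's exponentially weighted average throughput."""
    n = len(rate_trace[0])
    avg = [1e-6] * n                       # smoothed throughput per user
    served = [0] * n
    for rates in rate_trace:               # one row of instantaneous rates per slot
        k = max(range(n), key=lambda i: rates[i] / avg[i])
        served[k] += 1
        for i in range(n):
            avg[i] = (1 - alpha) * avg[i] + alpha * (rates[i] if i == k else 0.0)
    return served

rng = random.Random(7)
# user 0 has a strong channel, user 1 a weak one; PF still serves both,
# preferring each user in the slots where its own channel peaks
trace = [[rng.uniform(0.5, 2.0), rng.uniform(0.1, 0.6)] for _ in range(500)]
slots = proportional_fair(trace)
print(slots)
```

Serving the user whose current rate is high relative to its own average is what "exploiting time variations of the radio channel" amounts to in practice.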
7

Shashika, Manosha Kapuruhamy Badalge. "Convex optimization based resource allocation in multi-antenna systems." Doctoral thesis, Oulun yliopisto, 2017. http://urn.fi/urn:isbn:9789526217499.

Abstract:
The use of multiple antennas is a fundamental requirement in future wireless networks as it helps to increase the reliability and spectral efficiency of mobile radio links. In this thesis, we study convex optimization based radio resource allocation methods for the downlink of multi-antenna systems. First, the problem of admission control in the downlink of a multicell multiple-input single-output (MISO) system has been considered. The objective is to maximize the number of admitted users subject to a signal-to-interference-plus-noise ratio (SINR) constraint at each admitted user and a transmit power constraint at each base station (BS). We have cast the admission control problem as an ℓ0 minimization problem, which is known to be combinatorial and NP-hard. Centralized and distributed algorithms to solve this problem have been proposed. The centralized algorithm is based on sequential convex programming (SCP); the distributed algorithm is derived by using the consensus-based alternating direction method of multipliers in conjunction with SCP. We have shown numerically that the proposed admission control algorithms achieve near-optimal performance. Next, we have extended the admission control problem to guarantee long-term fairness among users. We have focused on proportional and max-min fairness, and proposed dynamic control algorithms via Lyapunov optimization; results show that these algorithms guarantee fairness. Then, the problem of admission control for the downlink of a MISO heterogeneous network (hetnet) has been considered, and the proposed centralized and distributed algorithms have been adapted to find a solution. Numerically, we have illustrated that both the centralized and the distributed algorithm achieve near-optimal performance.
Finally, an algorithm to obtain the set of all achievable power-rate tuples for a multiple-input multiple-output hetnet has been provided. The setup consists of a single macrocell and a set of femtocells. The interference power to the macro users from the femto BSs has been kept below a threshold. To find the set of all achievable power-rate tuples, a two-dimensional vector optimization problem is formulated, where we maximize the sum-rate while minimizing the sum-power, subject to maximum power and interference threshold constraints. This problem is known to be NP-hard. A solution method is provided by using the relationship between the weighted sum-rate maximization and weighted-sum-mean-squared-error minimization problems. The proposed algorithm was used to evaluate the impact of imposing interference threshold constraints and of co-channel deployments in a hetnet.
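The admission control problem in this abstract can be illustrated, in a much-simplified scalar-channel setting, by a greedy deflation heuristic built on Foschini-Miljanic power control. This is a stand-in for, not a reproduction of, the thesis's SCP-based algorithms; the gains, SINR targets and power budget below are made up:

```python
def power_iteration(G, gamma, noise=0.1, iters=200):
    """Foschini-Miljanic target-SINR power updates; the powers diverge
    (here, hit the cap) exactly when the targets are jointly infeasible."""
    n = len(G)
    p = [1.0] * n
    for _ in range(iters):
        p = [min(1e6, gamma[i] * (noise + sum(G[i][j] * p[j]
                 for j in range(n) if j != i)) / G[i][i]) for i in range(n)]
    return p

def admit(G, gamma, p_max=10.0):
    """Greedy admission control: while the power budget is violated,
    drop the user that needs the most power (deflation heuristic)."""
    users = list(range(len(G)))
    while users:
        subG = [[G[i][j] for j in users] for i in users]
        p = power_iteration(subG, [gamma[i] for i in users])
        if all(x <= p_max for x in p):
            return users, p
        users.pop(max(range(len(p)), key=p.__getitem__))
    return [], []

# symmetric toy cell: unit direct gains, cross gain 0.6, unit SINR targets;
# all three users together are infeasible, any two are feasible
G = [[1.0, 0.6, 0.6], [0.6, 1.0, 0.6], [0.6, 0.6, 1.0]]
users, powers = admit(G, [1.0, 1.0, 1.0])
print(users, powers)   # two users admitted
```

Maximizing the number of admitted users exactly is the combinatorial ℓ0 problem the thesis attacks; the greedy drop rule here only approximates it.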
8

Mharsi, Niezi. "Cloud-Radio Access Networks : design, optimization and algorithms." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT043/document.

Abstract:
Cloud Radio Access Network (C-RAN) has been proposed as a promising architecture to meet the exponential growth in data traffic demands and to overcome the challenges of next-generation mobile networks (5G). The main concept of C-RAN is to decouple the BaseBand Units (BBU) from the Remote Radio Heads (RRH) and place the BBUs in common edge data centers (BBU pools) for centralized processing. This brings a number of benefits in terms of cost savings, network capacity improvement and resource utilization gains. However, network operators need scalable and cost-efficient algorithms for the resource allocation problems that arise when deploying the C-RAN architecture. Most of these problems are very complex and thus very hard to solve; hence, we use combinatorial optimization, which provides powerful tools to address them efficiently. One of the key issues in the deployment of C-RAN is finding the optimal assignment of RRHs (or antennas) to edge data centers (BBUs) while jointly optimizing fronthaul latency and resource consumption. We model this problem with a mathematical formulation based on Integer Linear Programming (ILP) to provide optimal strategies for the RRH-BBU assignment problem, and we also propose low-complexity heuristic algorithms to rapidly reach good solutions for large problem instances. The optimal RRH-BBU assignment reduces the expected latency and offers resource utilization gains. Such gains can only be achieved when the inter-cell interference caused by the dense deployment of cell sites is reduced. We therefore propose an exact mathematical formulation based on Branch-and-Cut methods that consolidates and re-optimizes the antenna radii so as to jointly minimize inter-cell interference and guarantee full network coverage in C-RAN.
In addition to increasing inter-cell interference, the high density of cells in C-RAN increases the amount of baseband processing as well as the data traffic between antennas and centralized data centers, while strict latency requirements on the fronthaul network must be met. Therefore, in the third part of this thesis we discuss how to determine the optimal placement of BBU functions under the 3GPP split option, seeking the best tradeoff between the benefits of centralization in C-RAN and the transport requirements. We propose exact and heuristic algorithms based on combinatorial optimization techniques to rapidly provide optimal or near-optimal solutions, even for large network sizes.
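On toy sizes, the RRH-BBU assignment problem described above can be solved exactly by enumeration, which makes the ILP's objective and capacity constraints concrete; the latencies, capacities and demands below are illustrative assumptions, not data from the thesis:

```python
from itertools import product

def assign_rrhs(latency, capacity, demand):
    """Exhaustive search for the latency-minimal RRH -> BBU-pool
    assignment under pool capacity limits (a stand-in for the ILP,
    tractable only at toy sizes)."""
    n_rrh, n_bbu = len(latency), len(latency[0])
    best, best_cost = None, float("inf")
    for assign in product(range(n_bbu), repeat=n_rrh):
        load = [0] * n_bbu
        for r, b in enumerate(assign):
            load[b] += demand[r]
        if any(load[b] > capacity[b] for b in range(n_bbu)):
            continue                       # violates a pool's capacity
        cost = sum(latency[r][b] for r, b in enumerate(assign))
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost

# 4 RRHs, 2 BBU pools; latency[r][b] in ms, unit demand per RRH
latency = [[1, 4], [2, 3], [5, 1], [4, 2]]
best, cost = assign_rrhs(latency, capacity=[2, 2], demand=[1, 1, 1, 1])
print(best, cost)   # → (0, 0, 1, 1) 6
```

The heuristics the abstract mentions exist precisely because this search space grows as n_bbu**n_rrh and enumeration stops being an option almost immediately.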
9

Lenharth, Andrew D. "Algorithms for stable allocations in distributed real-time resource management systems." Ohio : Ohio University, 2004. http://www.ohiolink.edu/etd/view.cgi?ohiou1102697777.

10

Morimoto, Naoyuki. "Design and Analysis of Algorithms for Graph Exploration and Resource Allocation Problems and Their Application to Energy Management." 京都大学 (Kyoto University), 2014. http://hdl.handle.net/2433/189687.

11

Hasan, Cengis. "Optimization of resource allocation in small cells networks : A green networking approach." Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-01015735.

Abstract:
The term "green networking" refers to energy-efficient networking technologies and products that minimize resource usage as far as possible. This thesis targets the problem of resource allocation in small cell networks in a green networking context. We develop algorithms for several paradigms, exploiting the framework of coalitional game theory, stochastic geometric tools, and the crowding game model. We first study the mobile assignment problem in broadcast transmission, where minimal total power consumption is sought, following a green-aware approach in our algorithms. We examine the coalitional game aspects of the mobile assignment problem; this game has an incentive to form the grand coalition, in which all players take part. Using the Bondareva-Shapley theorem, we prove that this coalitional game has a non-empty core, which means the grand coalition is stable. We then examine cost allocation policies for different methods. In a second part, we analyze a significant problem in green networking, switching off base stations when service providers cooperate, by means of stochastic geometric and coalitional game tools. The coalitional game considered here is played by service providers who cooperate in switching off base stations. We study Nash stability, a concept from hedonic coalition formation games, and ask the following question: is there any utility allocation method that results in a Nash-stable partition? We address this issue by proposing the Nash-stable core, the set of all utility allocation methods that yield partitions stable in the sense of Nash stability. We finally consider games related to the association of mobiles to access points, where each mobile is a player that must decide which access point to connect to.
We consider the choice between two or more access points, where the association decisions may depend on the number of mobiles connected to each access point, and obtain new results using elementary tools from congestion and crowding games. Last but not least, we extend our work to cooperative transmissions, formulating the partner selection problem in cooperative relaying with a matching-theoretic approach. Partner selection is described as a special stable roommates problem in which each player ranks its potential partners by some criterion. We adapt Irving's algorithm to determine the partner of each player and introduce a decentralized version of the algorithm.
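The mobile-to-access-point association game mentioned above can be sketched with best-response dynamics, which are known to converge to a pure Nash equilibrium in such singleton congestion games; the access-point rates and player count below are illustrative assumptions:

```python
def best_response_equilibrium(n_mobiles, rates, max_rounds=100):
    """Best-response dynamics for mobile-to-AP association seen as a
    crowding game: a mobile at AP a gets throughput rates[a] divided by
    the number of mobiles there; players switch while it is profitable."""
    choice = [0] * n_mobiles                 # everyone starts at AP 0
    for _ in range(max_rounds):
        moved = False
        for m in range(n_mobiles):
            load = [choice.count(a) for a in range(len(rates))]
            def payoff(a):
                joins = 0 if choice[m] == a else 1   # switching adds me to a
                return rates[a] / (load[a] + joins)
            best = max(range(len(rates)), key=payoff)
            if payoff(best) > payoff(choice[m]) + 1e-12:
                choice[m] = best
                moved = True
        if not moved:
            break                            # pure Nash equilibrium reached
    return choice

# AP 0 is twice as fast as AP 1; at equilibrium it carries twice the load
eq = best_response_equilibrium(n_mobiles=6, rates=[12.0, 6.0])
print(eq)   # → [1, 1, 0, 0, 0, 0]
```

At the equilibrium, no single mobile can improve its own throughput by switching, which is exactly the stability notion these crowding-game results use.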
12

Alinia, Bahram. "Optimal resource allocation strategies for electric vehicles in smart grids." Thesis, Evry, Institut national des télécommunications, 2018. http://www.theses.fr/2018TELE0012/document.

Abstract:
With increased environmental concerns related to carbon emissions, and the rapid drop in battery prices (e.g., a 35% drop in 2017), the market share of Electric Vehicles (EVs) is rapidly growing. The growing number of EVs, along with unprecedented advances in battery capacity and technology, results in a drastic increase in the total energy demand of EVs. This large charging demand makes the EV charging scheduling problem challenging. The critical challenge is the need for online solution design, since in practical scenarios the scheduler has no information about future arrivals of EVs in a time-coupled underlying problem. This thesis studies the online EV scheduling problem and provides three main contributions. First, we demonstrate that the classical problem of online scheduling of deadline-sensitive jobs with partial values is similar to the EV scheduling problem, and study the extension to EV charging scheduling by taking into account the processing rate limit of jobs as an additional constraint to the original problem. The problem lies in the category of time-coupled online scheduling problems without availability of future information. Using the competitive ratio as a well-established performance metric, two online algorithms, both shown to be (2 − 1/U)-competitive, are proposed, where U is the maximum scarcity level, a parameter that indicates the demand-to-supply ratio. Second, we formulate a social welfare maximization problem for EV charging scheduling with a charging capacity constraint. 
We devise charging scheduling algorithms that not only work in the online scenario but also address the following two key challenges: (i) providing on-arrival commitment; respecting the capacity constraint may prevent fulfilling the charging requirement of deadline-constrained EVs entirely, so committing to a guaranteed charging amount upon the arrival of each EV is highly desirable; (ii) guaranteeing (group-)strategy-proofness as a salient feature, so that EVs are encouraged to reveal their true type and not to collude with other EVs. Third, we tackle online scheduling of EVs in an adaptive charging network (ACN) with local and global peak constraints. Two alternatives in resource-limited scenarios are to maximize social welfare either by partially charging the EVs (fractional model) or by selecting a subset of EVs and fully charging them (integral model). For the fractional model, both offline and online algorithms are devised. We prove that the offline algorithm is optimal and that the online algorithm achieves a competitive ratio of 2. The integral model, however, is more challenging since the underlying problem is NP-hard due to the 0/1 selection of EVs; hence, efficient solution design is challenging even in the offline setting. We devise a low-complexity primal-dual scheduling algorithm that achieves a bounded approximation ratio. Built upon the offline approximation algorithm, we propose an online algorithm and analyze its competitive ratio in special cases.
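As a toy illustration of the kind of online, deadline-aware charging dispatch this abstract describes (not the thesis's (2 − 1/U)-competitive algorithms), the sketch below runs a plain earliest-deadline-first rule under a station capacity; the EV fields, slot granularity, and capacity model are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class EV:
    arrival: int       # first time slot the EV is plugged in
    deadline: int      # slot by which charging must finish (exclusive)
    demand: float      # total energy requested
    rate: float        # per-slot charging rate limit
    delivered: float = 0.0

def edf_schedule(evs, capacity, horizon):
    """Earliest-deadline-first online dispatch under a station capacity."""
    for t in range(horizon):
        # EVs present at slot t with unmet demand, most urgent first
        active = sorted(
            (ev for ev in evs
             if ev.arrival <= t < ev.deadline and ev.delivered < ev.demand),
            key=lambda ev: ev.deadline)
        budget = capacity
        for ev in active:
            give = min(ev.rate, ev.demand - ev.delivered, budget)
            ev.delivered += give
            budget -= give
            if budget <= 0:
                break
    return [ev.delivered for ev in evs]
```

With capacity 2 and two EVs each demanding 3 units, the more urgent EV is served first in each slot, and both finish before their deadlines. Unlike the thesis's algorithms, this rule offers no on-arrival commitment and no strategy-proofness.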
APA, Harvard, Vancouver, ISO, and other styles
13

Lunday, Brian Joseph. "Resource Allocation on Networks: Nested Event Tree Optimization, Network Interdiction, and Game Theoretic Methods." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/77323.

Full text
Abstract:
This dissertation addresses five fundamental resource allocation problems on networks, all of which have applications to support Homeland Security or industry challenges. In the first application, we model and solve the strategic problem of minimizing the expected loss inflicted by a hostile terrorist organization. An appropriate allocation of certain capability-related, intent-related, vulnerability-related, and consequence-related resources is used to reduce the probabilities of success in the respective attack-related actions, and to ameliorate losses in case of a successful attack. Given the disparate nature of prioritizing capital and material investments by federal, state, local, and private agencies to combat terrorism, our model and accompanying solution procedure represent an innovative, comprehensive, and quantitative approach to coordinate resource allocations from various agencies across the breadth of domains that deal with preventing attacks and mitigating their consequences. Adopting a nested event tree optimization framework, we present a novel formulation for the problem as a specially structured nonconvex factorable program, and develop two branch-and-bound schemes based respectively on utilizing a convex nonlinear relaxation and a linear outer-approximation, both of which are proven to converge to a global optimal solution. We also investigate a fundamental special-case variant for each of these schemes, and design an alternative direct mixed-integer programming model representation for this scenario. Several range reduction, partitioning, and branching strategies are proposed, and extensive computational results are presented to study the efficacy of different compositions of these algorithmic ingredients, including comparisons with the commercial software BARON. 
The developed set of algorithmic implementation strategies and enhancements is shown to outperform BARON over a set of simulated test instances, where the best proposed methodology produces an average optimality gap of 0.35% (compared to 4.29% for BARON) and reduces the required computational effort by a factor of 33. A sensitivity analysis is also conducted to explore the effect of certain key model parameters, whereupon we demonstrate that the prescribed algorithm can attain significantly tighter optimality gaps with only a near-linear corresponding increase in computational effort. In addition to enabling effective comprehensive resource allocations, this research permits coordinating agencies to conduct quantitative what-if studies on the impact of alternative resourcing priorities. The second application is motivated by the author's experience with the U.S. Army during a tour in Iraq, during which combined operations involving U.S. Army, Iraqi Army, and Iraqi Police forces sought to interdict the transport of selected materials used for the manufacture of specialized types of Improvised Explosive Devices, as well as to interdict the distribution of assembled devices to operatives in the field. In this application, we model and solve the problem of minimizing the maximum flow through a network from a given source node to a terminus node, integrating different forms of superadditive synergy with respect to the effect of resources applied to the arcs in the network. Herein, the superadditive synergy reflects the additional effectiveness of forces conducting combined operations, vis-à-vis unilateral efforts. We examine linear, concave, and general nonconcave superadditive synergistic relationships between resources, and accordingly develop and test effective solution procedures for the underlying nonlinear programs. 
For the linear case, we formulate an alternative model representation via Fourier-Motzkin elimination that reduces average computational effort by over 40% on a set of randomly generated test instances. This test is followed by extensive analyses of instance parameters to determine their effect on the levels of synergy attained using different specified metrics. For the case of concave synergy relationships, which yields a convex program, we design an inner-linearization procedure that attains solutions on average within 3% of optimality with a reduction in computational effort by a factor of 18 in comparison with the commercial codes SBB and BARON for small- and medium-sized problems; and outperforms these solvers on large-sized problems, where both failed to attain an optimal solution (and often failed to detect a feasible solution) within 1800 CPU seconds. Examining a general nonlinear synergy relationship, we develop solution methods based on outer-linearizations, inner-linearizations, and mixed-integer approximations, and compare these against the commercial software BARON. Considering increased granularities for the outer-linearization and mixed-integer approximations, as well as different implementation variants for both these approaches, we conduct extensive computational experiments to reveal that, whereas both these techniques perform comparably with respect to BARON on small-sized problems, they significantly improve upon the performance for medium- and large-sized problems. Our superlative procedure reduces the computational effort by a factor of 461 for the subset of test problems for which the commercial global optimization software BARON could identify a feasible solution, while also achieving solutions of objective value 0.20% better than BARON. 
The third application is likewise motivated by the author's military experience in Iraq, both from several instances involving coalition forces attempting to interdict the transport of a kidnapping victim by a sectarian militia as well as, from the opposite perspective, instances involving coalition forces transporting detainees between internment facilities. For this application, we examine the network interdiction problem of minimizing the maximum probability of evasion by an entity traversing a network from a given source to a designated terminus, while incorporating novel forms of superadditive synergy between resources applied to arcs in the network. Our formulations examine either linear or concave (nonlinear) synergy relationships. Conformant with military strategies that frequently involve a combination of overt and covert operations to achieve an operational objective, we also propose an alternative model for sequential overt and covert deployment of subsets of interdiction resources, and conduct theoretical as well as empirical comparative analyses between models for purely overt (with or without synergy) and composite overt-covert strategies to provide insights into absolute and relative threshold criteria for recommended resource utilization. In contrast to existing static models, in a fourth application, we present a novel dynamic network interdiction model that improves realism by accounting for interactions between an interdictor deploying resources on arcs in a digraph and an evader traversing the network from a designated source to a known terminus, wherein the agents may modify strategies in selected subsequent periods according to respective decision and implementation cycles. 
We further enhance the realism of our model by considering a multi-component objective function, wherein the interdictor seeks to minimize the maximum value of a regret function that consists of the evader's net flow from the source to the terminus; the interdictor's procurement, deployment, and redeployment costs; and penalties incurred by the evader for misperceptions as to the interdicted state of the network. For the resulting minimax model, we use duality to develop a reformulation that facilitates a direct solution procedure using the commercial software BARON, and examine certain related stability and convergence issues. We demonstrate cases for convergence to a stable equilibrium of strategies for problem structures having a unique solution to minimize the maximum evader flow, as well as convergence to a region of bounded oscillation for structures yielding alternative interdictor strategies that minimize the maximum evader flow. We also provide insights into the computational performance of BARON for these two problem structures, yielding useful guidelines for other research involving similar non-convex optimization problems. For the fifth application, we examine the problem of apportioning railcars to car manufacturers and railroads participating in a pooling agreement for shipping automobiles, given a dynamically determined total fleet size. This study is motivated by the existence of such a consortium of automobile manufacturers and railroads, for which the collaborative fleet sizing and efforts to equitably allocate railcars amongst the participants are currently orchestrated by the TTX Company in Chicago, Illinois. In our study, we first demonstrate potential inequities in the industry standard resulting either from failing to address disconnected transportation network components separately, or from utilizing the current manufacturer allocation technique that is based on average nodal empty transit time estimates. 
We next propose and illustrate four alternative schemes to apportion railcars to manufacturers, respectively based on total transit time that accounts for queuing; two marginal cost-induced methods; and a Shapley value approach. We also provide a game-theoretic insight into the existing procedure for apportioning railcars to railroads, and develop an alternative railroad allocation scheme based on capital plus operating costs. Extensive computational results are presented for the ten combinations of current and proposed allocation techniques for automobile manufacturers and railroads, using realistic instances derived from representative data of the current business environment. We conclude with recommendations for adopting an appropriate apportionment methodology for implementation by the industry.
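The interdiction applications above revolve around minimizing a maximum source-to-terminus flow by applying resources to arcs. As a self-contained toy (ignoring the superadditive synergy models and the branch-and-bound machinery of the dissertation), the sketch below computes max flow with Edmonds-Karp and greedily deletes the arc whose removal reduces the flow most; the greedy rule is a hypothetical stand-in with no optimality guarantee.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a directed graph; cap maps (u, v) -> capacity."""
    res = dict(cap)
    adj = {}
    for (u, v) in cap:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
        res.setdefault((v, u), 0)   # reverse residual arcs
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:            # BFS for an augmenting path
            u = q.popleft()
            for v in adj.get(u, []):
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                          # recover the path and its bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[e] for e in path)
        for (u, v) in path:
            res[(u, v)] -= aug
            res[(v, u)] += aug
        flow += aug

def greedy_interdict(cap, s, t, budget):
    """Remove up to `budget` arcs, each time the one whose deletion cuts flow most."""
    cap = dict(cap)
    for _ in range(budget):
        best, best_flow = None, max_flow(cap, s, t)
        for e in list(cap):
            f = max_flow({k: c for k, c in cap.items() if k != e}, s, t)
            if f < best_flow:
                best, best_flow = e, f
        if best is None:
            break
        del cap[best]
    return max_flow(cap, s, t)
```

On a small diamond network the greedy interdictor halves the max flow with a single arc removal; the dissertation's models instead allocate continuous resources across arcs and capture synergy between them.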
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
14

Houidi, Omar. "Algorithms for Virtual Network Functions chaining." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAS005.

Full text
Abstract:
Network Function Virtualization (NFV) is an innovative emerging concept that decouples network functions (such as firewalls, DNS, NATs, load balancers, etc.) from dedicated hardware devices (the traditional expensive middleboxes). This decoupling enables the hosting of network services, known as Virtualized Network Functions (VNFs), on commodity hardware (such as switches or servers), and thus facilitates and accelerates service deployment and management by providers, improves flexibility, leads to efficient and scalable resource usage, and reduces costs. This paradigm is a major turning point in the evolution of networking, as it introduces high expectations for enhanced, economical network services, as well as major technical challenges. One of the main technical challenges in this domain is the optimal placement of the VNFs within the hosting infrastructures. This placement has a critical impact on the performance of the network, as well as on its reliability and operation cost. The VNF Placement and Chaining Problem is NP-hard, and there is a need for placement approaches that can scale with problem size and find good solutions in acceptable times. The overarching goal of this thesis is to enable dynamic virtual network resource provisioning to deal with demand fluctuation during the virtual network lifetime, and to enhance substrate resource usage. Reserving a fixed amount of resources is inefficient for satisfying VNF resource requirements. To cope with these problems, we propose dynamic resource management strategies. In this thesis, both exact and heuristic algorithms are designed and evaluated in terms of optimality, complexity, and ability to scale, and are compared with the state of the art. Elastic mechanisms and scaling algorithms are first presented to improve the adaptation and deployment of virtualized network functions in NFV infrastructures, supporting increasing demand while increasing the provider's revenue. 
Since network providers not only need to control, classify, and steer user and application traffic flows in their dedicated slices, but also want to extend their already acquired and operational virtual networks or slices with additional service graphs, the thesis proposes extension algorithms for already hosted network function graphs that do not disrupt initially deployed and active service instances. The proposed algorithms extend already deployed network services and function graphs to respond to new demands while minimizing the impact on the original service graphs. The extension algorithms are particularly useful for situations where already deployed graphs need to be enhanced with new features and properties (adding new functions), or modified to react to degradation and attacks, for example by removing a fraction of the graph and replacing it with new, more capable, and uncompromised composed functions. The thesis also addresses the VNF placement and chaining problem in online and batch modes to improve performance in terms of long-term reward. An enhanced Reinforcement Learning-based approach is also proposed to improve the long-term reward beyond what the previous methods can achieve. This is analyzed and realized for a load-balancing objective but can be adjusted for other criteria.
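To give a feel for the VNF placement and chaining problem, here is a deliberately simple first-fit heuristic: each VNF in an ordered chain is mapped to the first server with enough residual CPU, and the request is rejected if any VNF cannot be hosted. The server names, CPU-only demands, and rejection policy are illustrative assumptions; the thesis's ILP, batch, and learning-based algorithms also handle link resources and chain extension.

```python
def place_chain(chain, servers):
    """First-fit placement of an ordered VNF chain onto servers.

    chain:   list of CPU demands, one per VNF in the service chain.
    servers: dict server_name -> residual CPU capacity (mutated on success).
    Returns the server assigned to each VNF, or None if placement fails.
    """
    placement = []
    for demand in chain:
        host = next((s for s, cap in servers.items() if cap >= demand), None)
        if host is None:
            return None   # reject: no server can host this VNF
        servers[host] -= demand
        placement.append(host)
    return placement
```

Running it on two 4-CPU servers with a chain of demands [3, 2, 2] splits the chain across both servers and updates their residual capacities, which is exactly the state an elastic scaling step would then inspect.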
APA, Harvard, Vancouver, ISO, and other styles
15

Syed, Muhammad Fahad. "Various resource allocation and optimization strategies for high bit rate communications on power lines." Phd thesis, INSA de Rennes, 2010. http://tel.archives-ouvertes.fr/tel-00480261.

Full text
Abstract:
In recent years, the development of indoor and outdoor communication networks and the growing number of applications have led to an ever-increasing need for high-bit-rate data transmission. Among the many competing technologies, power line communications (PLC) have their place because the infrastructure is already available. The main motivation of this thesis is to increase the bit rate and the robustness of multicarrier PLC systems so that they can be used effectively in home networks and home automation. The theme of this research work is to explore different modulation and channel-coding approaches in connection with several resource allocation and optimization schemes. The objective is to improve the capabilities of PLC, to compete with other high-bit-rate communication solutions, and to cope effectively with the inherent drawbacks of the power network. A number of resource allocation and optimization strategies are studied to improve the overall performance of PLC systems. The performance of a communication system is generally measured in terms of the bit rate, noise margin, and bit error rate (BER) of the link. Rate maximization (RM) is studied for OFDM (orthogonal frequency division multiplexing) and LP-OFDM (linear precoded OFDM) systems under a power spectral density (PSD) constraint. Two different error-rate constraints are applied to the RM problem. The first is a peak BER constraint, under which every subcarrier or precoding sequence must respect the target BER. Under the second, a mean BER constraint, different subcarriers or precoding sequences experience different BER values, and a mean BER constraint is imposed on the complete OFDM or LP-OFDM symbol. 
Allocation algorithms are also proposed that take channel-coding gains into account in the resource allocation process. In addition, a new mean-BER minimization scheme is introduced that minimizes the mean BER of the systems for a given bit rate and an imposed PSD mask. For resource allocation in a multicarrier system, the channel state information (CSI) is generally assumed to be perfectly known at the transmitter. In reality, the CSI available at the transmitter is imperfect, so we also study resource allocation schemes for OFDM and LP-OFDM systems that efficiently account for the impact of noisy estimates. Several communication chains are also developed for the OFDM and LP-OFDM systems.
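The rate-maximization problem under a PSD mask is classically attacked with greedy bit loading: repeatedly grant one more bit to the subcarrier where it costs the least extra power, stopping when every carrier would exceed its mask. The sketch below is a generic Hughes-Hartogs-style loop under an uncoded-QAM power model with SNR gap `gamma`; it is an illustration of the technique, not the thesis's specific OFDM/LP-OFDM algorithms.

```python
def bit_loading(gains, psd_mask, gamma=1.0, max_bits=10):
    """Greedy Hughes-Hartogs-style bit loading under a per-carrier power mask.

    gains:    per-subcarrier channel-gain-to-noise ratio.
    psd_mask: maximum transmit power allowed on each subcarrier.
    Supporting b bits on carrier i costs gamma * (2**b - 1) / gains[i].
    """
    bits = [0] * len(gains)

    def power(i, b):
        return gamma * (2 ** b - 1) / gains[i]

    while True:
        best, best_incr = None, float('inf')
        for i in range(len(gains)):
            if bits[i] >= max_bits or power(i, bits[i] + 1) > psd_mask[i]:
                continue   # adding a bit here would violate the PSD mask
            incr = power(i, bits[i] + 1) - power(i, bits[i])
            if incr < best_incr:
                best, best_incr = i, incr
        if best is None:   # no carrier can accept another bit
            return bits
        bits[best] += 1
```

With two carriers of gains 4.0 and 1.0 under a unit mask, the stronger carrier absorbs two bits and the weaker one bit, mirroring how RM concentrates rate where the channel is good. A mean-BER variant would relax the per-carrier target in favor of a symbol-wide constraint.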
APA, Harvard, Vancouver, ISO, and other styles
16

Indrakanti, Saratchandra. "Computational Methods for Vulnerability Analysis and Resource Allocation in Public Health Emergencies." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc804902/.

Full text
Abstract:
POD (Point of Dispensing)-based emergency response plans involving mass prophylaxis may seem feasible when considering the choice of dispensing points within a region, overall population density, and estimated traffic demands. However, a plan may fail to serve particular vulnerable sub-populations, resulting in access disparities during emergency response. Federal authorities emphasize the need to identify sub-populations that cannot access regular services during an emergency due to their special needs, in order to ensure an effective response. Vulnerable individuals require the targeted allocation of appropriate resources to serve their special needs, so devising schemes to address the needs of vulnerable sub-populations is essential to the effectiveness of response plans. This research focuses on data-driven computational methods to quantify and address vulnerabilities in response plans that require the allocation of targeted resources. Data-driven methods to identify and quantify vulnerabilities in response plans are developed as part of this research. Addressing vulnerabilities requires the targeted allocation of appropriate resources to PODs. The problem of resource allocation to PODs during public health emergencies is introduced, and variants of the problem, such as the spatial allocation, spatio-temporal allocation, and optimal resource subset variants, are formulated. Generating optimal resource allocation and scheduling solutions can be computationally hard, so the application of metaheuristic techniques to find near-optimal solutions to the resource allocation problem in response plans is investigated. Finally, a vulnerability analysis and resource allocation framework is described that facilitates the demographic analysis of population data in the context of response plans and the optimal allocation of resources with respect to that analysis.
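At its core, allocating targeted resources to PODs means deciding which dispensing point each scarce unit should go to. The few lines below use a greedy rule (each unit goes to the POD with the largest unmet need) purely as a simple stand-in for the metaheuristics the thesis investigates; the need vector and unit-by-unit model are illustrative assumptions.

```python
def allocate(pods_need, units):
    """Greedily assign identical resource units to PODs.

    pods_need: unmet need per POD (e.g., estimated vulnerable individuals
    a POD cannot yet serve); each of `units` goes to the largest remaining need.
    """
    need = list(pods_need)          # work on a copy; caller's data unchanged
    alloc = [0] * len(need)
    for _ in range(units):
        i = max(range(len(need)), key=lambda j: need[j])
        if need[i] <= 0:
            break                   # all needs met; stop early
        alloc[i] += 1
        need[i] -= 1
    return alloc
```

A metaheuristic (e.g., simulated annealing or a genetic algorithm, as explored in the thesis) would instead search over whole allocation vectors, which matters once resources are heterogeneous and scheduling constraints enter.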
APA, Harvard, Vancouver, ISO, and other styles
17

Liu, Qingyu. "Delay-Aware Multi-Path Routing in a Multi-Hop Network: Algorithms and Applications." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/90405.

Full text
Abstract:
Delay is known to be a critical performance metric for various real-world routing applications, including multimedia communication and freight delivery. Provisioning delay-minimal (or at least delay-bounded) routing services for all traffic of an application is highly important. As a basic paradigm of networking, multi-path routing has been proven to obtain lower delay than single-path routing, since traffic congestion can be avoided. However, to the best of our knowledge, (i) many existing delay-aware multi-path routing studies only consider the aggregate traffic delay; considering that even the solution achieving the optimal aggregate traffic delay may have an unbounded delay for certain individual traffic units, those studies may be insufficient in practice; besides, (ii) most existing studies that optimize or bound the delays of all traffic are best-effort, with no theoretical performance guarantee on the achieved solutions. In this dissertation, we study four delay-aware multi-path routing problems, taking the delay performance of all traffic into account. Three of them are in communication and one is in transportation. Our study differs from all related work in that, to the best of our knowledge, we are the first to study these four fundamental problems. Although we prove that the studied problems are all NP-hard, we design approximation algorithms with theoretical performance guarantees for each of them. Specifically, we claim the following contributions. Minimize maximum delay and average delay. First, we consider a single-unicast setting where, in a multi-hop network, a sender must use multiple paths to stream a flow at a fixed rate to a receiver. Two important delay metrics are the average sender-to-receiver delay and the maximum sender-to-receiver delay. Existing results say that the two delay metrics of a flow cannot both be within bounded-ratio gaps to the optimal. 
In comparison, we design three different flow solutions, each of which can minimize the two delay metrics simultaneously within a (1/ε)-ratio gap to the optimal, at the cost of delivering only a (1 − ε)-fraction of the flow, for any user-defined ε ∈ (0, 1). The gap (1/ε) is proven to be at least near-tight, and we further show that our solutions can be extended to the multiple-unicast setting. Minimize Age-of-Information (AoI). Second, we consider a single-unicast setting where in a multi-hop network a sender must use multiple paths to periodically send a batch of data to a receiver. We study a newly proposed delay-sensitive networking performance metric, AoI, defined as the elapsed time since the generation of the last received data. We consider the problem of minimizing AoI subject to throughput requirements, which we prove is NP-hard. Our AoI problem differs from existing ones in that we are the first to consider the batch generation of data and multi-path communication. We develop both an optimal algorithm with pseudo-polynomial time complexity and an approximation framework with polynomial time complexity. Our framework can build upon any polynomial-time α-approximation algorithm for the maximum delay minimization problem to construct an (α + c)-approximate solution for minimizing AoI, where c is a constant dependent on the throughput requirements. Maximize network utility. Third, we consider a multiple-unicast setting where a multi-hop network serves many users. Each user requires a sender to use multiple paths to stream a flow to a receiver, incurring a utility that is a function of the experienced maximum delay or the achieved throughput. Our objective is to maximize the aggregate utility of all users under throughput requirements and maximum delay constraints. 
We observe that it is NP-complete either to construct an optimal solution under relaxed maximum delay constraints or relaxed throughput requirements, or to find a feasible solution with all constraints satisfied. Hence it is non-trivial even to obtain approximate solutions satisfying relaxed constraints in polynomial time. We develop a polynomial-time approximation algorithm that obtains solutions with constant approximation ratios under realistic conditions, at the cost of violating constraints by up to constant ratios. Minimize fuel consumption for a heavy truck to timely fulfill multiple transportation tasks. Finally, we consider a common truck operation scenario where a truck drives through a national highway network to fulfill multiple transportation tasks in order. We study an NP-hard timely eco-routing problem of minimizing total fuel consumption under task pickup and delivery time-window constraints. Optimizing task execution times is a new and challenging design space for saving fuel in our multi-task setting, and it differentiates our study from existing ones under the single-task setting. We design a fast and efficient heuristic, characterize conditions under which its solution must be optimal, and further prove its optimality gap when the conditions are not met. We simulate a heavy-duty truck driving across the US national highway system and empirically observe that the fuel consumption achieved by our heuristic can be 22% less than that of the fastest-/shortest-path baselines. Furthermore, the fuel saving of our heuristic compared to the baselines is robust to the number of tasks.
Doctor of Philosophy
We consider a network modeled as a directed graph, where it takes time for data to traverse each link in the network. This models many critical applications in both the communication area and the transportation field; for example, both the European education network and the US national highway network can be modeled as directed graphs. We consider a scenario where a source node is required to send multiple (a set of) data packets to a destination node through the network as fast as possible, possibly using multiple source-to-destination paths. In this dissertation we study four problems, all of which try to figure out routing solutions to send the set of data packets, with an objective of minimizing the experienced travel time or subject to travel time constraints. Although all four problems are NP-hard, we design approximation algorithms to solve them and obtain solutions with theoretically bounded gaps as compared to the optimal. The first three problems are in the communication area, and the last problem is in the transportation field. We claim the following specific contributions. Minimize maximum delay and average delay. First, we consider the setting of simultaneously minimizing the average travel time and the worst (largest) travel time of sending the set of data packets from source to destination. Existing results say that the two metrics of travel time cannot be minimized to be both within bounded-ratio gaps to the optimal. As a comparison, we design three different routing solutions, each of which can minimize the two metrics of travel time simultaneously within a constant bounded ratio gap to the optimal, but at a cost of only delivering a portion of the data. Minimize Age-of-Information (AoI). Second, we consider the problem of minimizing a newly proposed travel-time-sensitive performance metric, i.e., AoI, which is the elapsed time since the generation of the last received data. 
Our AoI study differs from existing ones in that we are the first to consider a set of data and multi-path routing. We develop both an optimal algorithm with pseudo-polynomial time complexity and an approximation framework with polynomial time complexity. Maximize network utility. Third, we consider a more general setting with multiple source-destination pairs. Each source obtains a utility that is a function of the experienced travel time or the achieved throughput of sending data to its destination. Our objective is to maximize the aggregate utility under throughput requirements and travel time constraints. We develop a polynomial-time approximation algorithm, at the cost of violating constraints by up to constant ratios. It is non-trivial to design such algorithms, as we prove that it is NP-complete either to construct an optimal solution under relaxed delay constraints or relaxed throughput requirements, or to find a feasible solution with all constraints satisfied. Minimize fuel consumption for a heavy truck to fulfill multiple transportation tasks on time. Finally, we consider a truck and multiple transportation tasks to be fulfilled in order, where each task requires the truck to pick up cargo at a source and deliver it to a destination, each within a time window. The need to coordinate task execution times is a new and challenging design space for saving fuel in our multi-task setting, and it differentiates our study from existing ones under the single-task setting. We design an efficient heuristic, characterize conditions under which its solution must be optimal, and further prove its performance gap relative to the optimum when those conditions are not met.
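As a concrete illustration of the AoI metric described above, here is a minimal sketch (our own toy example, not the dissertation's algorithms) that computes the Age-of-Information at a given time from a log of packet generation and arrival times:

```python
def age_of_information(now, receptions):
    """Age-of-Information at time `now`: elapsed time since the generation
    of the freshest data received so far.
    `receptions` is a list of (generation_time, arrival_time) pairs."""
    delivered = [g for g, a in receptions if a <= now]
    if not delivered:
        return float("inf")      # nothing has been received yet
    return now - max(delivered)  # age of the freshest delivered packet

# Packets generated at t=0 and t=4, arriving at t=3 and t=9.
aoi_at_5 = age_of_information(5.0, [(0.0, 3.0), (4.0, 9.0)])   # only the first arrived
aoi_at_10 = age_of_information(10.0, [(0.0, 3.0), (4.0, 9.0)])  # both arrived
```

At t=5 only the packet generated at t=0 has arrived, so the AoI is 5; at t=10 the packet generated at t=4 has arrived, dropping the AoI to 6. Routing choices change the arrival times and hence the AoI trajectory, which is the quantity the dissertation minimizes.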
APA, Harvard, Vancouver, ISO, and other styles
18

Chen, Xue. "Efficient Device to Device Communication Underlaying Heterogeneous Networks." DigitalCommons@USU, 2016. https://digitalcommons.usu.edu/etd/4673.

Full text
Abstract:
Device-to-Device (D2D) communication has great potential to bring a significant performance boost to conventional heterogeneous networks by reusing cellular resources. In cellular networks, D2D communication is defined as two user equipments in close range communicating directly with each other without going through the base station, thus offloading traffic from the cellular network. In addition to improving network spectral efficiency, D2D communication can also improve energy efficiency and user experience. However, the co-existence of D2D communication on the same spectrum as cellular users can cause severe interference to the primary cellular users, so the performance of cellular users must be assured when supporting underlay D2D users. In this work, we investigate cross-layer optimization, resource allocation, and interference management schemes to improve user experience, system spectral efficiency, and energy efficiency for D2D communication underlaying heterogeneous networks. By exploiting frequency reuse and multi-user diversity, this research aims to design system-level wireless algorithms that utilize spectrum and energy resources efficiently in the next-generation heterogeneous wireless network.
APA, Harvard, Vancouver, ISO, and other styles
19

Oliveira, Rodrigo Ruas. "Toward cost-efficient Dos-resilient virtual networks with ORE : opportunistic resilience embedding." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/71908.

Full text
Abstract:
The current success of the Internet has inhibited the dissemination of new network architectures and protocols. Specifically, any modification to the core of the network requires agreement among several parties. In view of this, Network Virtualization has been proposed as a diversifying attribute for the Internet. This paradigm promotes the development of new architectures and protocols through the creation of multiple virtual networks overlaid on the same physical substrate. Additionally, applications running over the same physical network can be mutually isolated, providing functional independence between them. One of its most promising advantages is the ability to limit the scope of attacks by organizing an infrastructure into multiple virtual networks, isolating their traffic and preventing interference. However, virtual routers and links remain vulnerable to attacks and failures in the underlying physical network. In particular, if a given substrate link is compromised, all virtual links overlaid on it (that is, allocated to it) will be affected. To deal with this problem, the literature proposes two types of strategies: those that reserve additional substrate resources as backups, protecting against disruptions; and those that use live migration to relocate compromised virtual resources. Both strategies entail trade-offs: the use of backup resources tends to become costly for the infrastructure provider, while resource migration requires a convergence period and may leave virtual networks inoperative during it. This dissertation presents ORE (Opportunistic Resilience Embedding), a new embedding approach for protecting virtual links against disruptions in the physical substrate.
ORE is composed of two strategies: a proactive one, in which virtual links are embedded into multiple paths to mitigate the impact of a disruption; and a reactive one, which attempts to recover, partially or fully, the capacity lost on affected virtual links. Both are modeled as optimization problems. Moreover, since virtual network embedding is NP-hard, ORE employs a Simulated Annealing-based meta-heuristic to solve the problem efficiently. Numerical results show that ORE can provide resilience to disruptions at a lower cost.
Recently, the Internet's success has prevented the dissemination of novel networking architectures and protocols. Specifically, any modification to the core of the network requires agreement among many different parties. To address this situation, Network Virtualization has been proposed as a diversifying attribute for the Internet. This paradigm promotes the development of new architectures and protocols by enabling the creation of multiple virtual networks on top of the same physical substrate. In addition, applications running over the same physical network can be isolated from each other, thus allowing them to coexist independently. One of the main advantages of this paradigm is the use of isolation to limit the scope of attacks. This can be achieved by creating different, isolated virtual networks for each task, so traffic from one virtual network does not interfere with the others. However, routers and links are still vulnerable to attacks and failures on the underlying network. In particular, should a physical link be compromised, all embedded virtual links will be affected. Previous work tackled this problem with two main strategies: using backup resources to protect against disruptions, or live migration to relocate a compromised virtual resource. Both strategies have drawbacks: backup resources tend to be expensive for the infrastructure provider, while live migration may leave virtual networks inoperable during the recovery period. This dissertation presents ORE (Opportunistic Resilience Embedding), a novel embedding approach for protecting virtual links against substrate network disruptions. ORE's design is twofold: a proactive strategy embeds virtual links into multiple substrate paths in order to mitigate the initial impact of a disruption, while a reactive one attempts to recover any capacity affected by an underlying disruption. Both strategies are modeled as optimization problems.
Additionally, since the embedding problem is NP-Hard, ORE uses a Simulated Annealing-based meta-heuristic to solve it efficiently. Numerical results show that ORE can provide resilience to disruptions at a lower cost.
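Since ORE's meta-heuristic is based on Simulated Annealing, a generic annealing skeleton may help fix ideas. The temperature schedule, parameters, and the toy cost function below are illustrative assumptions of ours, not ORE's actual embedding model:

```python
import math
import random

def simulated_annealing(initial, cost, neighbor, t0=1.0, cooling=0.95,
                        steps=500, seed=0):
    """Generic simulated-annealing skeleton: always accept an improving
    move, accept a worsening move with probability exp(-delta / temperature),
    and geometrically cool the temperature."""
    rng = random.Random(seed)
    state = best = initial
    t = t0
    for _ in range(steps):
        cand = neighbor(state, rng)
        delta = cost(cand) - cost(state)
        if delta <= 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
            state = cand
            if cost(state) < cost(best):
                best = state
        t *= cooling
    return best

# Toy stand-in for an embedding cost: minimise (x - 3)^2 with +/-1 moves.
best = simulated_annealing(
    10,
    cost=lambda x: (x - 3) ** 2,
    neighbor=lambda x, rng: x + rng.choice((-1, 1)),
)
```

In an embedding setting, `state` would be a mapping of virtual links to substrate paths, `neighbor` a small re-mapping move, and `cost` the embedding objective.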
APA, Harvard, Vancouver, ISO, and other styles
20

Campos, Ciro Guillermo. "Développement de méthodes d'ordonnancement efficaces et appliquées dans un système de production mécanique." Thesis, Troyes, 2015. http://www.theses.fr/2015TROY0035/document.

Full text
Abstract:
The continuous evolution of production environments and the growth of customer needs call for a faster and more efficient production process that controls several parameters at once. We are interested in developing decision-support methods to improve production scheduling. The partner company (Norelem) manufactures precision mechanical parts, so the various resource constraints (human and tooling) present in the production workshop must be taken into account. We studied an open-shop scheduling problem, in which a job can follow multiple production sequences since the manufacturing order is not fixed, and the objective is to minimize total flow time. Constraints on the assignment of (multi-skilled) human resources and on tool availability were taken into account. Linear and non-linear mathematical models were developed to describe the problem. Since exact methods are limited to small instances because of computation time, approximate solution methods were proposed and compared. In addition, we addressed multi-objective optimization with three objectives: minimizing total flow time and balancing the workload of resources (human and machine). The efficiency of the methods is demonstrated through tests on theoretical instances and application to the real case.
The continuous evolution of manufacturing environments and the growth of customer needs lead to a faster and more efficient production process that controls an increasing number of parameters. This thesis focuses on the development of decision-making methods to improve production scheduling. The industrial partner (Norelem) produces standardized mechanical elements, so many different resource constraints (humans and tools) are present in its workshop. We study an open shop scheduling problem in which a job can follow multiple production sequences, because there is no fixed production sequence, and the objective is to minimize total flow time. In addition, multi-skilled personnel assignment and tool availability constraints are involved. Linear and non-linear mathematical formulations have been developed to describe the problem. Given the limitations of exact methods in terms of instance size because of computation time, heuristic methods have been proposed and compared. Beyond that, multi-objective optimization is addressed, dealing with three objectives: total flow time minimization and workload balancing for both humans and machines. The efficiency of these methods is demonstrated by several theoretical instance tests and by application to the real industrial case.
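To make the open-shop setting concrete, here is a simple shortest-processing-time dispatch heuristic for total flow time. It is an illustrative baseline of our own, not one of the thesis's methods, and it ignores the multi-skill and tooling constraints:

```python
def open_shop_spt(p):
    """Greedy dispatch for open-shop scheduling: p[j][m] is the processing
    time of job j on machine m; every job must visit every machine, in any
    order.  Repeatedly schedules the feasible (job, machine) operation that
    can start earliest, breaking ties by shortest processing time.
    Returns each job's completion time (its flow time, with releases at 0)."""
    n, m = len(p), len(p[0])
    remaining = [set(range(m)) for _ in range(n)]  # machines still needed
    job_free = [0.0] * n    # time when job j is next available
    mach_free = [0.0] * m   # time when machine k is next available
    finish = [0.0] * n
    for _ in range(n * m):
        start, dur, j, k = min(
            (max(job_free[j], mach_free[k]), p[j][k], j, k)
            for j in range(n) for k in remaining[j]
        )
        end = start + dur
        job_free[j] = mach_free[k] = finish[j] = end
        remaining[j].remove(k)
    return finish

# Two jobs, two machines: processing times job 0 -> (2, 3), job 1 -> (1, 2).
flows = open_shop_spt([[2, 3], [1, 2]])
```

On this instance both jobs finish at time 5, for a total flow time of 10; the thesis's methods additionally decide which (multi-skilled) operator and tool serve each operation.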
APA, Harvard, Vancouver, ISO, and other styles
21

Valdivia, Nereida Celina Llerena. "Roteamento de tráfego e alocação de recursos em redes ópticas WDM com base em economia de energia." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/18/18155/tde-06012015-161005/.

Full text
Abstract:
The growth of telecommunication service traffic has increased energy consumption and, consequently, CO2 emissions, which have harmful effects on the environment. Energy saving thus becomes a key factor in planning telecommunication networks. To guarantee availability and reliability, networks have redundant architectures and are designed to support peak traffic demand. Networks with protection mechanisms such as dedicated path protection (DPP) provide alternative paths for each connection demand. The network elements supporting these paths are in an active state (consuming energy) even though, most of the time, they carry no actual traffic. One method to reduce energy expenditure is routing adapted to the real traffic load based on sleep mode (a low-power state that can switch to the active state quickly). Traffic is thus routed so as to maximize the number of components that are part of protection paths and can be put into sleep mode. In this work, the networks used for the tests are the European Cost239 network, the US UsNet network, and the Brazilian Ipê network. We address the energy-saving problem in WDM networks with DPP through four routing strategies, each with a different goal: Shortest Path-DPP (SP-DPP) routes over the shortest path; Energy Aware-DPP (EA-DPP) allocates demands over links that are already active; Energy Aware-DPP with Mixing (EA-DPP-MixS) prevents primary paths from being routed over links that are already part of protection paths; and Energy Aware-DPP with Differentiation (EA-DPP-Dif) avoids mixing path types on the same link. In our computational simulations we observe that EA-DPP-Dif saves energy efficiently, but the blocking probability increases. EA-DPP-MixS reduces blocking at the expense of the energy saved.
SP-DPP and EA-DPP, in turn, are less efficient at reducing energy consumption. We therefore propose routing with a broader resource search, using each of the strategies; this proposal is called intensive routing. Intensive EA-DPP-Dif reduces blocking probability and saves energy through sleep mode. In this work, we analyze the performance of the strategies for each network and evaluate the impact of the energy saved on the blocking probability. The intensive routing proposal reduces energy consumption by up to 50% while decreasing blocking probability. However, the results are directly related to the network load and to the particular characteristics of each network's topology.
The growth of data traffic in telecommunication networks has increased energy consumption and hence CO2 emissions, with harmful effects on the environment. Thus, energy saving becomes a key and differentiating factor when planning telecommunication networks. In order to guarantee availability and reliability, core networks have redundant architectures and are designed to support peak-hour traffic demand. Networks with dedicated path protection (DPP) mechanisms provide alternative paths for each connection request. Network elements supporting these paths are in an active state (consuming energy), although most of the time they don't carry traffic. One technique to decrease energy waste is adaptive routing of real traffic using sleep mode (a low-energy-consumption state that can rapidly change to an active state). Thus, traffic is routed in order to maximize the number of network components used by protection paths, which can be set to sleep mode. In this work, the European Cost239, American UsNet and Brazilian Ipê networks were used in computational simulations. We address the energy-saving problem in WDM networks with DPP through four routing strategies, each with different goals. Shortest Path-Dedicated Path Protection (SP-DPP) uses the shortest path for routing; Energy Aware-Dedicated Path Protection (EA-DPP) allocates demands to active links; Energy Aware-Dedicated Path Protection with Mixing (EA-DPP-MixS) prevents primary paths from being formed by links that are already part of protection paths; and Energy Aware-Dedicated Path Protection with Differentiation (EA-DPP-Dif) prevents primary and protection paths from mixing on the same link. We observe that EA-DPP-Dif saves energy efficiently, but its blocking probability is higher. EA-DPP-MixS reduces blocking at the expense of energy savings. Finally, SP-DPP and EA-DPP are less efficient in reducing energy consumption.
Hence, we propose a wider resource-search routing, the in-depth routing, using each of these strategies. The in-depth variant of EA-DPP-Dif decreases blocking probability while maintaining energy savings through sleep mode. In this work, we analyze the strategies' performance for each network and evaluate the impact of the energy saved on the blocking probability. Our in-depth routing strategy reduces energy consumption by up to 50% while decreasing blocking probability. However, the results are directly related to the network load and the specific properties of each network topology.
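The energy-aware routing idea of steering demands onto links that are already powered on can be sketched with a weighted Dijkstra search. The weighting below (cost 1 per active link, a `wake_cost` surcharge for waking a sleeping link) is our own illustrative assumption, not one of the four strategies above:

```python
import heapq

def energy_aware_path(adj, active, src, dst, wake_cost=10.0):
    """Dijkstra variant favouring already-active links: traversing an
    active link costs 1, using a sleeping link costs 1 + wake_cost.
    `adj` maps node -> iterable of neighbours; `active` is a set of
    undirected links given as (u, v) pairs.  Returns the path cost."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            w = 1.0 if (u, v) in active or (v, u) in active else 1.0 + wake_cost
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# Triangle a-b-c: the two-hop detour over active links beats the sleeping shortcut.
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
cost = energy_aware_path(adj, active={("a", "b"), ("b", "c")}, src="a", dst="c")
```

With links a-b and b-c active, the route a-b-c costs 2 while the direct sleeping link a-c costs 11, so the demand is kept on powered-on equipment and the a-c link can stay in sleep mode.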
APA, Harvard, Vancouver, ISO, and other styles
22

Allybokus, Zaïd. "Algorithmes distribués dédiés au calcul de l’allocation alpha-équitable en temps réel dans les réseaux SDN." Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4038.

Full text
Abstract:
In this thesis, we study the design of algorithms for computing the alpha-fair resource allocation in real time in distributed Software-Defined Networks (SDN). First, we define three major requirements that capture the challenges of real-time algorithms implementable in distributed SDN controllers. These challenges are the availability of feasible solutions at all times, acceptable transient quality in terms of optimality gap with convergence within a reasonable number of communication rounds between the controllers, and the ability of the algorithms to be massively parallelized, independently of the network's SDN architecture. We use the tools of the Alternating Directions Method of Multipliers to define a class of algorithms that, for the first time, meet these challenges simultaneously. In light of the structural properties of the alpha-fair allocation model, we compute a lower bound on the optimal solution and use it to tune the penalty parameter of the problem's augmented Lagrangian so as to optimize the algorithms' performance. We show that the algorithm can operate in real time when traffic requirements vary more or less abruptly. The variation of traffic requirements is modeled by the real-time variation of certain coefficients of the optimization model, which is solved on the fly. In practice, these coefficients represent various prioritization policies within the traffic (payment, traffic type, number of connections within a path, etc.). We then describe how to extend the algorithm to real-world scenarios with minimal modifications, in order to handle multi-path load balancing of flows and real-time bandwidth adjustment.
Furthermore, we address the alpha-fair resource-sharing problem when the environment admits uncertainty about the amount of resources available on each link, known only through general density functions. The priority is then safety rather than feasibility. We design a heuristic that polishes an outer approximation of the problem, based on the sensitivity analysis of the static problem. In full generality, we manage to provide a safe and acceptably efficient solution by solving a few static problems.
In this dissertation, we deal with the design of algorithms to tackle the α-fair resource allocation problem in real-time and distributed Software-Defined Networks (SDN). First, we define three major requirements that picture the challenges of real-time algorithms implementable in modern distributed SDN controllers. Those challenges are the ability to provide feasible resource allocations at all times, good transient solutions in terms of optimality gap that converge in an acceptable number of inter-controller communication rounds, and their ability of being massively parallelized independently of the network architecture. We use the Alternating Directions Method of Multipliers to design an algorithm that simultaneously, and unprecedentedly, tackles the three challenges. Motivated by a first study of the structural properties of the α-fair model, where we derive a lower bound on the optimal solution, we tune the penalty parameter of the augmented Lagrangian of the problem in order to optimize the algorithm’s performance. We show that the algorithm can function in real-time when the traffic requirements can vary more or less abruptly. The variation of the traffic requirements are modeled by real-time varying coefficients of the optimization model that is solved on-the-fly and may represent various prioritization policies of the traffic (payment, traffic type, number of connections within a tunnel, etc). Then, we describe how to extend the algorithm to real world use cases with limited modifications to cope with multi-path load balancing and online adjustments. Furthermore, we address the problem of α-fairness when the environment is uncertain and the available amount of resources over the network links is known only through general density functions. The main focus there is, instead of feasibility, the notion of safety. We design a heuristic that polishes an outer relaxation of the problem, based on the sensitivity analysis of the static problem. 
In general, we are able to provide a safe and acceptably efficient solution by solving several static problems.
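The α-fair objective at the heart of this work admits a closed form in the simplest single-link case, which makes the role of α visible. This textbook special case is only illustrative and is not the distributed ADMM algorithm of the thesis:

```python
def alpha_fair_single_link(weights, capacity, alpha):
    """Weighted alpha-fair shares of one link: maximising
    sum_i w_i * U_alpha(x_i) subject to sum_i x_i = capacity, where
    U_alpha(x) = log(x) for alpha = 1 and x**(1-alpha)/(1-alpha) otherwise.
    The KKT conditions w_i * x_i**(-alpha) = lambda give
    x_i proportional to w_i ** (1 / alpha)."""
    if alpha <= 0:
        raise ValueError("alpha must be positive")
    s = [w ** (1.0 / alpha) for w in weights]
    total = sum(s)
    return [capacity * si / total for si in s]

# alpha = 1 (proportional fairness): shares proportional to weights.
shares = alpha_fair_single_link([1.0, 3.0], capacity=8.0, alpha=1.0)
# Large alpha approaches max-min fairness: shares become nearly equal.
near_maxmin = alpha_fair_single_link([1.0, 3.0], capacity=8.0, alpha=100.0)
```

As α grows the allocation moves from throughput-weighted (proportional) toward max-min fairness; the thesis solves the general multi-link, multi-path version of this problem distributively.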
APA, Harvard, Vancouver, ISO, and other styles
23

Liao, Kewen. "Approximation algorithms for resource allocation optimization." Thesis, 2014. http://hdl.handle.net/2440/85975.

Full text
Abstract:
Nowadays, data storage, server replicas/mirrors, virtual machines, and various kinds of services can all be regarded as different types of resources. These resources play an important role in today’s computer world because of the continuing advances in information technology. It is usual that similar resources are grouped together at the same site, and can then be allocated to geographically distributed clients. This is the resource allocation paradigm considered in this thesis. Optimizing solutions to a variety of problems arising from this paradigm remains a key challenge, since these problems are NP-hard. For all the resource allocation problems studied in this thesis, we are given a set of sites containing facilities as resources, a set of clients to access these facilities, an opening cost for each facility, and a connection cost for each allocation of a facility to a client. The general goal is to decide the number of facilities to open at each site and allocate the open facilities to clients so that the total cost incurred is minimized. This class of the problems extends the classical NP-hard facility location problems with additional abilities to capture various practical resource allocation scenarios. To cope with the NP-hardness of the resource allocation problems, the thesis focuses on the design and analysis of approximation algorithms. The main techniques we adopt are linear programming based, such as primal-dual schema, linear program rounding, and reductions via linear programs. Our developed solutions have great potential for optimizing the performances of many contemporary distributed systems such as cloud computing, content delivery networks, Web caching, and Web services provisioning.
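For readers unfamiliar with the underlying facility location model, the classical greedy heuristic below illustrates the cost structure (opening costs plus connection costs). It is a standard textbook baseline, not one of the primal-dual or LP-rounding algorithms developed in the thesis:

```python
def greedy_facility_location(open_cost, conn):
    """Greedy heuristic for uncapacitated facility location.
    open_cost[i]: cost of opening facility i; conn[i][j]: cost of serving
    client j from facility i.  Repeatedly commits to the facility/client-set
    pair with minimum cost per newly served client."""
    n_f = len(open_cost)
    unserved = set(range(len(conn[0])))
    opened, assign = set(), {}
    while unserved:
        best = None  # (cost per newly served client, facility, clients)
        for i in range(n_f):
            fee = 0.0 if i in opened else open_cost[i]
            cands = sorted(unserved, key=lambda j: conn[i][j])
            tot = fee
            for t, j in enumerate(cands, start=1):
                tot += conn[i][j]
                if best is None or tot / t < best[0]:
                    best = (tot / t, i, cands[:t])
        _, i, served = best
        opened.add(i)
        for j in served:
            assign[j] = i
        unserved -= set(served)
    total = sum(open_cost[i] for i in opened) + sum(
        conn[i][j] for j, i in assign.items())
    return total, assign

# Two facilities, three clients: opening cheap facility 0 alone wins here.
total, assign = greedy_facility_location(
    [3.0, 10.0],
    [[1.0, 1.0, 1.0],    # facility 0's connection costs
     [0.5, 0.5, 0.5]],   # facility 1's connection costs
)
```

The resource allocation problems in the thesis generalize this model by letting each site open multiple facilities, which is where the primal-dual and rounding machinery comes in.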
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2014
APA, Harvard, Vancouver, ISO, and other styles
24

Wang, Chuan-Ren, and 王傳仁. "Application of Multi-objective Genetic Algorithms for Virtual Resource Allocation Scheduling Optimization." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/39187541474918647888.

Full text
Abstract:
Master's thesis
National Kaohsiung First University of Science and Technology
Institute of Systems Information and Control
100
To satisfy users' service level agreements (SLAs), expressed as end-to-end quality-of-service (QoS) requirements, in cloud computing environments, the demand for Infrastructure-as-a-Service (IaaS) servers among cloud services has risen rapidly, leading to enormous provisioning costs. Since the performance of virtualization technology has improved, virtualization can be used to address this situation efficiently. To meet the SLA constraints while considering the cost and the execution time simultaneously, we present a multi-objective genetic algorithm to solve this problem.
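The core selection mechanism of a multi-objective genetic algorithm is Pareto dominance over (cost, execution time) pairs. Here is a minimal sketch of that mechanism (illustrative only, not the thesis's full algorithm):

```python
def dominates(a, b):
    """Pareto dominance for minimisation: a is no worse than b in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated filter: keep the points no other point dominates.
    This is the selection core of multi-objective GAs."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (cost, execution time) pairs for candidate virtual-resource schedules.
front = pareto_front([(4, 1), (3, 3), (2, 2), (5, 5), (2, 4)])
```

The GA evolves a population of candidate allocations and repeatedly applies this kind of filter (plus crossover and mutation), returning a front of trade-off solutions rather than a single optimum.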
APA, Harvard, Vancouver, ISO, and other styles
25

"Distributed stochastic algorithms for communication networks." Thesis, 2010. http://library.cuhk.edu.hk/record=b6075262.

Full text
Abstract:
Designing distributed algorithms for optimizing the system-wide performance of large-scale communication networks is a challenging task. The key part of this design involves many combinatorial network optimization problems, which are computationally intractable in general and hard to approximate even in a centralized manner. Inspired by the seminal work of Jiang and Walrand, the Markov approximation framework was proposed for synthesizing distributed algorithms for general combinatorial network optimization problems. To provide performance guarantees, the convergence properties of these distributed algorithms are of significance.
In this thesis, we first review the Markov approximation framework and further develop it by studying the convergence properties of distributed algorithms. These system-wide algorithms consist of the designed Markov chain and resource allocation algorithms. We concentrate on two general scenarios: the designed Markov chain over resource allocation algorithms, and resource allocation algorithms over the designed Markov chain. With imprecise measurements of network parameters and without the time-scale separation assumption, we prove convergence to near-optimal solutions for both scenarios under mild conditions. Then we apply the Markov approximation framework and the associated convergence results to various combinatorial network optimization problems.
First, we consider instances of the designed Markov chain over resource allocation algorithms, focusing on convergence issues. We find several examples to which the related convergence results can be applied directly, including optimal path (or tree) selection for wireline networks, optimal neighbor selection for peer-to-peer networks, and optimal channel (or power) assignment for wireless local area networks.
Second, we consider instances of resource allocation algorithms over the designed Markov chain, focusing on system-wide performance. Two instances are investigated: cross-layer optimization for wireless networks with a deterministic channel model, and wireless networks with network coding. For both instances, guided by the Markov approximation framework, we design distributed schemes to achieve maximum utility. These schemes include primal-dual flow control algorithms, Markov chain based scheduling algorithms, and routing (or network coding) algorithms. Under time-dependent step sizes and update intervals, we show that these distributed schemes converge to the optimal solutions with probability one. Further, under constant step sizes and constant update intervals, we prove that these distributed schemes converge to a bounded neighborhood of the optimal solutions with probability one. These analytical results are validated by numerical results as well.
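The Markov approximation framework designs chains whose stationary distribution is a Gibbs (softmax) distribution, putting probability on a configuration proportional to the exponential of its utility; as the inverse temperature grows, the chain concentrates near optimal configurations. A minimal sampling sketch with an assumed parameter `beta` (the toy utilities are ours):

```python
import math
import random

def gibbs_sample_config(utilities, beta, rng):
    """Sample configuration index i with probability proportional to
    exp(beta * utilities[i]); large beta concentrates the distribution
    on the maximum-utility configuration."""
    weights = [math.exp(beta * u) for u in utilities]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(utilities) - 1

rng = random.Random(0)
utilities = [1.0, 2.0, 5.0]  # utility of each candidate path/configuration
picks = [gibbs_sample_config(utilities, beta=3.0, rng=rng) for _ in range(2000)]
share_best = picks.count(2) / len(picks)
```

With `beta=3` the best configuration's stationary weight is already close to 1; a distributed implementation realizes this distribution by local, timer-driven transitions rather than by direct sampling.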
Shao, Ziyu.
Adviser: Shou Yen Robert Li.
Source: Dissertation Abstracts International, Volume: 73-03, Section: B, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (leaves 134-140).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
APA, Harvard, Vancouver, ISO, and other styles
26

"Optimal Resource Allocation in Social and Critical Infrastructure Networks." Doctoral diss., 2016. http://hdl.handle.net/2286/R.I.40712.

Full text
Abstract:
We live in a networked world with a multitude of networks, such as communication networks, the electric power grid, transportation networks and water distribution networks, all around us. In addition to such physical (infrastructure) networks, recent years have seen tremendous proliferation of social networks, such as Facebook, Twitter, LinkedIn, Instagram, Google+ and others. These powerful social networks are not only used for harnessing revenue from the infrastructure networks, but are also increasingly being used as "non-conventional sensors" for monitoring the infrastructure networks. Accordingly, nowadays, analyses of social and infrastructure networks go hand-in-hand. This dissertation studies resource allocation problems encountered in this set of diverse, heterogeneous, and interdependent networks. Three of the problems studied are encountered in the physical network domain, while the other three are encountered in the social network domain. The first problem from the infrastructure network domain relates to a distributed file storage scheme whose goal is to enhance the robustness of data storage by making it tolerant of large-scale geographically-correlated failures. The second problem relates to the placement of relay nodes in a deployment area with multiple sensor nodes, with the goal of augmenting the connectivity of the resulting network while staying within the budget specifying the maximum number of relay nodes that can be deployed. The third problem relates to the complex interdependencies that exist between infrastructure networks, such as the power grid and the communication network. The progressive recovery problem in an interdependent network is studied, whose goal is to maximize system utility over time as the recovery of failed entities takes place in a sequential manner.
The three problems studied from the social network domain relate to influence propagation in an adversarial environment and political sentiment assessment in the various states of a country, with the goal of creating a "political heat map" of the country. In the first problem of the influence propagation domain, the goal of the second player is to restrict the influence of the first player, while in the second problem the goal of the second player is to gain a larger market share with the least amount of initial investment.
Dissertation/Thesis
Doctoral Dissertation Computer Science 2016
APA, Harvard, Vancouver, ISO, and other styles
27

Liao, Ying-Tzong, and 廖英宗. "A Study of Global Optimization Algorithms for Back Propagation - A Case Study of Freeway Incident Resource Allocation." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/48289535504478894579.

Full text
Abstract:
Master's thesis
Tamkang University
Department of Transportation Management
88
Owing to its significant learning ability, the Neural Network algorithm has recently been applied in many domains. However, because of the existence of local optima, using global search algorithms to improve the performance of the Back-Propagation Network (BPN) has become a common research topic. Prior investigations indicate that Threshold Accepting (TA), the Great Deluge Algorithm (GDA) and Simulated Annealing (SA) perform well on problems with local optima. This study therefore employs the Neural Network architecture as the kernel and combines it with global optimization search algorithms to improve the performance of the traditional BPN algorithm. Regarding the allocation of rescue resources, the quality of current assignment processes depends on the dispatcher's work experience. Assignments are usually completed by hand and responses must be made immediately, so mismatches in resource allocation easily occur, in turn requiring extra effort for re-allocation. Although BPN has no application instances in Taiwan yet, it can readily establish the mapping between incident information and the workload for resource allocation. From this study, we draw the following significant results:
1. With fewer nodes, TA, GDA and SA converge better on the learning problem than the traditional Neural Network algorithm.
2. For single events, the modified algorithm achieves an average accuracy of more than 80%.
3. By coping with local optima and improving inferior solutions, the global optimal search algorithms improve the quality of convergent solutions and save computation time.
4. Given the ineffectiveness of current incident management procedures, developing an automatic incident identification and surveillance system using the self-learning BPN algorithm is an urgent task.
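Threshold Accepting, one of the global search schemes combined with BPN in this thesis, accepts a worse neighbor whenever the deterioration stays below a shrinking threshold (a deterministic cousin of simulated annealing). A generic sketch on a toy objective; the schedule and cost function are our own illustrative choices, not the thesis's training setup:

```python
import random

def threshold_accepting(initial, cost, neighbor, thresholds, seed=0):
    """Threshold Accepting: move to a neighbour whenever its deterioration
    is below the current threshold; thresholds shrink toward zero so the
    search gradually becomes pure descent."""
    rng = random.Random(seed)
    state = best = initial
    for t in thresholds:
        cand = neighbor(state, rng)
        if cost(cand) - cost(state) < t:
            state = cand
            if cost(state) < cost(best):
                best = state
    return best

# Toy search: minimise (x - 3)^2 with +/-1 moves and a geometric schedule.
schedule = [5.0 * 0.95 ** k for k in range(400)]
best = threshold_accepting(
    10,
    cost=lambda x: (x - 3) ** 2,
    neighbor=lambda x, rng: x + rng.choice((-1, 1)),
    thresholds=schedule,
)
```

In the thesis's setting, `state` would be the BPN weight vector and `cost` the training error, with TA helping the search escape local optima that trap plain gradient-based back-propagation.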
APA, Harvard, Vancouver, ISO, and other styles
28

Skowron, Piotr. "Resource allocation in selfish and cooperative distributed systems." Doctoral thesis, 2014.

Find full text
Abstract:
In this dissertation we take an algorithmic view on resource allocation problems in distributed systems. We present a comprehensive perspective by studying a variety of distributed systems---from abstract models of generic distributed systems, through more specific and detailed models, to real distributed computer systems. These systems differ with respect to the nature of the resource allocation problems and with respect to the methodologies required to effectively solve them. Effective resource allocation in distributed systems is a fundamental problem. Computer systems require good resource management mechanisms to ensure expected functionality and expected quality of service. Even in our everyday life we often participate in resource allocation mechanisms; the examples range from cutting a cake at a birthday party to parliamentary elections and referendums (where the participating candidates can be viewed as resources). We start our discussion by considering a general computational model that describes the problem of selecting a collective set of items for shared use by a group of agents. This model is very general; it does not specify what an agent is or what an item is, and can thus be applied to many different scenarios. Indeed, we show that our model captures many real-life resource allocation problems. For instance, our algorithms for this general model can be applied to recommendation systems, e.g., to select a collection of movies for a plane, or to allocate students to sport classes based on their preferences. Our algorithms are also applicable to the problem of finding a proportional representation of a society in some collective decision-making body, i.e., to finding winners in some modern parliamentary election systems. We analyze multiple variants of our general problem of selecting a collective set of items. Next, we move to more specific models of computer systems.
In these models we introduce several new elements such as jobs, processors, and the network. Each of these elements can be further described by a set of parameters. For instance, jobs have their release times, resource requirements, and durations; further, their durations might depend on the processors on which they are run, and the processors might be identical or heterogeneous. The network connections can be described, e.g., by bandwidth and communication latencies. Consequently, in these models we focus on several variants of a more general problem. We ask how to schedule jobs to minimize their aggregated completion time, with specific variants of this question depending on the aggregation method and on the characteristics of the elements of the model. We establish the computational complexity of variants of this scheduling problem and, in particular, we show effective algorithms optimizing jobs' schedules. We also analyze other properties of our algorithms, such as their fault tolerance and their game-theoretic stability. In the last part of this dissertation we consider resource allocation problems in real implementations of complex distributed systems. We consider two storage-based systems: HYDRAstor, which is a commercial, distributed, scalable, high-performance, secondary storage system targeted at the enterprise market, and our prototype implementation of a P2P backup system. We explain how the design of resource allocation mechanisms for such complex systems differs from our previous approaches. In this part we present and discuss relatively more complex resource allocation mechanisms; these mechanisms consist of multiple elements and even of whole resource allocation subsystems. Further, they aim at achieving multiple (sometimes contradicting) goals.
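The "collective set of items" model above can be made concrete with a hedged sketch (this is an illustration, not one of Skowron's actual algorithms): if each agent's satisfaction is the utility of their favourite selected item, the objective is monotone submodular, so the standard greedy rule yields a (1 - 1/e)-approximation.

```python
def greedy_committee(utils, k):
    """Greedily select k items; utils[a][i] is agent a's utility for item i.
    An agent's satisfaction is the utility of their best selected item."""
    n_items = len(utils[0])

    def score(selection):
        # Total satisfaction: each agent counts only their favourite chosen item.
        return sum(max((u[i] for i in selection), default=0) for u in utils)

    chosen = []
    for _ in range(k):
        remaining = set(range(n_items)) - set(chosen)
        best = max(remaining, key=lambda i: score(chosen + [i]))  # greedy step
        chosen.append(best)
    return chosen

# Three agents rate four items (e.g. movies to load onto a plane).
utils = [
    [5, 1, 0, 0],
    [0, 1, 5, 0],
    [0, 1, 0, 5],
]
committee = greedy_committee(utils, 2)
```

With two slots, the greedy rule picks two of the three "specialist" items, satisfying two agents fully; item 1, which pleases everyone a little, is never worth a slot here.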
APA, Harvard, Vancouver, ISO, and other styles
29

Xia, Qiufen. "Cost-Effective Resource Allocation and Throughput Maximization in Mobile Cloudlets and Distributed Clouds." Phd thesis, 2017. http://hdl.handle.net/1885/117056.

Full text
Abstract:
With advances in communication networks and the explosive growth of mobile devices, distributed clouds consisting of many small and medium datacenters at different geographical locations, together with cloudlets defined as "mini" datacenters, are envisioned as the next-generation cloud computing platform. In particular, distributed clouds enable disaster-resilient and scalable services by scaling the services across multiple datacenters, while cloudlets allow pervasive and continuous services with low access delay by further enabling mobile users to access the services within their proximity. To realize the promises of distributed clouds and mobile cloudlets, it is urgent to optimize various aspects of their system performance, such as system throughput and operational cost, by developing efficient solutions. In this thesis, we aim to devise novel solutions that maximize the system throughput of mobile cloudlets and minimize the operational costs of distributed clouds, while meeting resource capacity constraints and users' resource demands. This, however, poses great challenges: (1) how to maximize the system throughput of a mobile cloudlet, considering that a mobile cloudlet has limited resources to serve energy-constrained mobile devices; (2) how to efficiently and effectively manage and evaluate big data in distributed clouds; and (3) how to efficiently allocate the resources of a distributed cloud to meet the resource demands of various users. Existing studies mainly focused on implementing systems and lacked systematic methods for optimizing the performance of distributed clouds and mobile cloudlets. Novel techniques and approaches for such performance optimization are desperately needed. To address these challenges, this thesis makes the following contributions.
We first study online request admission in a cloudlet with the aim of maximizing the system throughput, assuming that future user requests are not known in advance. We propose a novel admission cost model to accurately capture dynamic resource consumption, and devise efficient algorithms for online request admission. We then study a novel collaboration- and fairness-aware big data management problem in a distributed cloud, maximizing the system throughput while minimizing the operational cost of service providers, subject to resource capacities and users' fairness constraints; for this problem, we propose a novel optimization framework and devise a fast yet scalable algorithm with a provable approximation ratio. We next investigate online query evaluation for big data analysis in a distributed cloud, maximizing the query acceptance ratio while minimizing the query evaluation cost. For this problem, we propose a novel metric to model the costs of different resource consumptions in datacenters, and devise efficient online algorithms under both unsplittable and splittable source data assumptions. We then address the problem of community-aware placement of online social network data into a distributed cloud, with the aim of minimizing the operational cost of the cloud service provider, and devise a fast yet scalable algorithm for the problem by leveraging a close-community concept that considers both user read rates and update rates; we also handle social network evolution by developing a dynamic evaluation algorithm for the problem. Finally, we evaluate all proposed algorithms through experimental simulations using real and/or synthetic datasets. Simulation results show that the proposed algorithms significantly outperform existing algorithms.
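The admission-cost idea can be illustrated with a toy sketch (the thesis's actual cost model is not reproduced here; the load-dependent pricing rule and all parameters below are invented for illustration): each arriving request is charged a cost that grows as the cloudlet fills up, and is admitted only if its value covers that cost.

```python
def online_admission(requests, capacity, price=2.0):
    """Process (id, demand, value) requests online. A request is admitted
    only if capacity remains AND its value covers a load-dependent cost,
    so low-value requests are rejected once the system is busy."""
    used = 0
    accepted = []
    for rid, demand, value in requests:
        if used + demand > capacity:
            continue                          # hard capacity constraint
        # Illustrative admission cost: grows with the current load level.
        cost = demand * price * (used / capacity + 0.1)
        if value >= cost:
            accepted.append(rid)
            used += demand
    return accepted
```

For example, with capacity 10, a request of demand 4 and value 1.0 is cheap to admit on an empty system, while a later demand-5 request is rejected once it no longer fits.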
APA, Harvard, Vancouver, ISO, and other styles
30

Φραίμης, Ιωάννης. "Τεχνικές βελτιστοποίησης της ποιότητας των παρεχομένων υπηρεσιών (QoS) με έλεγχο κρίσιμων ηλεκτρικών και ηλεκτρομαγνητικών παραμέτρων στα σύγχρονα ασύρματα τηλεπικοινωνιακά συστήματα." Thesis, 2012. http://hdl.handle.net/10889/5546.

Full text
Abstract:
This doctoral thesis proposes techniques for optimizing the quality of service (QoS) in modern wireless telecommunication systems that use orthogonal frequency division multiple access and code division multiple access. The proposed techniques are iterative resource allocation algorithms, mainly suited to the downlink of wireless networks. The minimum required data rate, the bit error rate, and the minimum number of resources per user are considered as quality-of-service parameters. The proposed techniques are validated via performance metrics: the quality-of-service violation probability, the system fairness index, the cell-edge data rate, and the cell capacity. The required statistical data are collected through extensive simulations.
APA, Harvard, Vancouver, ISO, and other styles