Dissertations / Theses on the topic 'Congestion control algorithms'


Consult the top 50 dissertations / theses for your research on the topic 'Congestion control algorithms.'


1

Edwan, Talal A. "Improved algorithms for TCP congestion control." Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/7141.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Reliable and efficient data transfer on the Internet is an important issue. Since the late 1970s, the protocol responsible for this has been the de facto standard TCP, which has proven successful throughout the years; its self-managed congestion control algorithms have retained the stability of the Internet for decades. However, a variety of new technologies, such as high-speed networks (e.g. fibre optics), high-speed long-delay set-ups (e.g. cross-Atlantic links), and wireless technologies, have posed many challenges to TCP congestion control algorithms. The congestion control research community has proposed solutions to most of these challenges. This dissertation adds to the existing work as follows. First, to tackle the high-speed long-delay problem of TCP, we propose enhancements to one of the existing TCP variants (part of the Linux kernel stack); we then propose our own variant, TCP-Gentle. Second, to tackle the challenge of differentiating wireless loss from congestive loss in a passive way, we propose a novel loss differentiation algorithm which quantifies the noise in packet inter-arrival times and uses this information, together with the span (the ratio of maximum to minimum packet inter-arrival times), to adapt the multiplicative decrease factor according to a predefined logical formula. Finally, we extend the well-known drift model of TCP to account for wireless loss and some hypothetical cases (e.g. a variable multiplicative decrease), and undertake a stability analysis of the new version of the model.
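The loss differentiation idea in this abstract (quantify the noise in packet inter-arrival times, combine it with the span, and adapt the multiplicative decrease factor) can be sketched as follows. The thresholds, the coefficient-of-variation noise measure, and the decrease factors are illustrative assumptions, not the dissertation's actual formula.

```python
import statistics

def classify_loss(inter_arrival_times, noise_threshold=0.3, span_threshold=5.0):
    """Classify a loss event as 'wireless' or 'congestive' from packet
    inter-arrival times (thresholds are hypothetical, for illustration)."""
    mean = statistics.mean(inter_arrival_times)
    # Coefficient of variation as a simple proxy for "noise".
    noise = statistics.stdev(inter_arrival_times) / mean
    # Span: ratio of maximum to minimum inter-arrival time.
    span = max(inter_arrival_times) / min(inter_arrival_times)
    if noise > noise_threshold and span < span_threshold:
        return "wireless"
    return "congestive"

def adapted_beta(loss_type, beta_congestion=0.5, beta_wireless=0.875):
    # A gentler multiplicative-decrease factor for losses judged non-congestive.
    return beta_wireless if loss_type == "wireless" else beta_congestion
```

A sender would feed recent inter-arrival samples to `classify_loss` on each loss event and shrink its window by the factor returned by `adapted_beta`.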
2

Bhandarkar, Sumitha. "Congestion control algorithms of TCP in emerging networks." College Station, Tex.: Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1757.

3

Voice, Thomas David. "Stability of congestion control algorithms with multi-path routing and linear stochastic modelling of congestion control." Thesis, University of Cambridge, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.614022.

4

Lai, Chengdi, and 賴成迪. "Systematic design of internet congestion control : theory and algorithms." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/206356.

Abstract:
The Internet is dynamically shared by numerous flows of data traffic. Network congestion occurs when the aggregate flow rate persistently exceeds the network capacity, leading to excessive delivery delay and loss of user data. To control network congestion, a flow needs to adapt the sending rate to its inferred level of congestion, and a packet switch needs to report its local level of congestion. In this framework of Internet congestion control, it is important for flows to react promptly against congestion, and robustly against interfering network events resembling congestion. This is challenging due to the highly dynamic interactions of various network components over a global scale. Prior approaches rely predominantly on empirical observations in experiments for constructing and validating designs. However, without a careful, systematic examination of all viable options, more efficient designs may be overlooked. Moreover, experimental results have limited applicability to scenarios beyond the specific experimental settings. In this thesis, I employ a novel, systematic design approach. I formalize the design process of Internet congestion control from a minimal set of empirical observations. I prove the robustness and optimality of the attained design in general settings, and validate these properties in practical experimental settings. First, I develop a systematic method for enhancing the robustness of flows against interfering events resembling congestion. The class of additive-increase-multiplicative-decrease (AIMD) algorithms in Transmission Control Protocol (TCP) is the set of dominant algorithms governing the flow rate adaptation process. Over the present Internet, packet reordering and non-congestive loss occur frequently and are misinterpreted by TCP AIMD as packet loss due to congestion. This leads to underutilization of network resources. 
With a complete, formal characterization of the design space of TCP AIMD, I formulate the design of wireless TCP AIMD as an optimal control problem over this space. The derived optimal algorithm attains a significant performance improvement over existing enhancements in packet-level simulation. Second, I propose a novel design principle, known as pricing-link-by-time (PLT), which specifies how to set the measure of congestion, or “link price”, at a router so as to provide prompt feedback to flows. Existing feedback mechanisms require sophisticated parameter tuning, and experience drastic performance degradation when parameters are improperly tuned. PLT makes parameter tuning a simple, optional process. It increases the link price while the backlog stays above a threshold value, and resets the price once the backlog goes below the threshold. I prove that such a system exhibits cyclic behavior that is robust against changes in the network environment and protocol parameters. Moreover, changing the threshold value can control delay without undermining system performance. I validate these analytical results using packet-level simulation. The incremental deployment of various enhancements has made Internet congestion control highly heterogeneous. The final part of the thesis studies this issue by analyzing the competition among flows with heterogeneous robustness against interfering network events. While rigorous theories have been a major vehicle for understanding system designs, this thesis involves them directly in the design process. This systematic design approach fully exploits structural characteristics and leads to generally applicable, effective solutions.
Doctoral dissertation (published/final version), Electrical and Electronic Engineering, Doctor of Philosophy.
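The PLT rule described in the abstract above (grow the link price while the backlog stays above a threshold, reset it once the backlog drops below) admits a very small sketch. The step size, threshold, and toy backlog trace are illustrative, not values from the thesis.

```python
def plt_price_update(price, backlog, threshold, step=1.0):
    """One pricing-link-by-time (PLT) style update (illustrative sketch):
    grow the link price while the backlog stays above the threshold,
    reset it once the backlog falls below."""
    if backlog > threshold:
        return price + step   # congestion persists: raise the price
    return 0.0                # backlog drained below threshold: reset

# Toy trace: backlog rises above a threshold of 10 packets, then drains.
price = 0.0
for backlog in [5, 12, 15, 14, 9, 4]:
    price = plt_price_update(price, backlog, threshold=10)
```

Flows would read this price as congestion feedback; the reset is what produces the robust cyclic behavior the abstract analyzes.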
5

Chen, Hanliu. "Performance evaluation and analysis of datagram congestion control algorithms in IP networks." Thesis, University of Ottawa (Canada), 2008. http://hdl.handle.net/10393/27964.

Abstract:
Providing a TCP-friendly congestion control protocol to support multimedia applications has been a great challenge for many researchers. DCCP (Datagram Congestion Control Protocol) is a promising protocol that was proposed recently. This thesis proposes to use VCP (Variable-structure congestion Control Protocol) to overcome the shortcomings of RED (Random Early Detection). We carry out a mathematical analysis and run OPNET simulations to validate the DCCP/VCP congestion control strategy. We have also conducted performance analysis and compared it with other related congestion control strategies in both wired and WiMAX environments. The simulation results show that DCCP/VCP maintains a shorter buffer queue than the other congestion control strategies. DCCP/VCP also achieves a zero packet drop rate, which solves the random packet loss problem of RED routers.
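VCP, which this thesis adopts in place of RED, has routers report a coarse load factor rather than dropping packets at random. A rough sketch of that feedback step follows; the interface is simplified (the real protocol encodes the region in the ECN bits over a measurement interval), and the region boundaries here are assumptions in the spirit of VCP.

```python
def vcp_load_region(arrivals_bytes, capacity_bytes, target_util=0.98):
    """Sketch of VCP-style load-factor feedback (illustrative): a router
    compares traffic arrivals over an interval against its target capacity
    and reports one of three congestion regions to the endpoints."""
    load_factor = arrivals_bytes / (target_util * capacity_bytes)
    if load_factor < 0.8:
        return "low"        # senders may increase aggressively
    if load_factor <= 1.0:
        return "high"       # senders increase additively
    return "overload"       # senders decrease multiplicatively
```

Because feedback arrives before the queue overflows, no packet needs to be dropped probabilistically, which is the behavior the abstract contrasts with RED.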
6

Jia, Guihua. "Performance evaluation of congestion control protocols and loss differentiation algorithms over wireless networks." Thesis, University of Ottawa (Canada), 2007. http://hdl.handle.net/10393/27857.

Abstract:
Congestion control protocols for wireless networks must be efficient, TCP-friendly, and robust to random wireless loss. Based on these criteria, we have evaluated several congestion control protocols, including TCP Westwood, TFRC, MULTFRC, RAP, and IFTP. In wireless network environments, most of these protocols do not work well, since wireless losses are counted as congestion losses. Therefore, it is necessary to extend these congestion control protocols with end-to-end Loss Differentiation Algorithms (LDAs). We evaluate several existing LDA schemes, including Biaz, mBiaz, Spike, ZigZag, ZBS, PLC, SPLD, and TD, with simulation results showing different drawbacks for each scheme. We thus propose a new LDA scheme, mSpike, which classifies the loss type according to the mean and deviation of the relative one-way trip time. The simulation results show that mSpike has better performance and fewer problems in most of the situations evaluated. We also test the combination of MULTFRC and mSpike in different wireless lossy environments. The simulation results show that the combination achieves high utilization of the available bandwidth, making it a good choice for applications with high bandwidth requirements.
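The mSpike rule as summarized here (classify a loss from the mean and deviation of the relative one-way trip time) could be sketched as below; the one-sided k-sigma test and its parameter are illustrative guesses, not the thesis's exact classifier.

```python
import statistics

def mspike_classify(rott_samples, rott_at_loss, k=1.0):
    """Spike-style loss classifier in the spirit of mSpike (illustrative):
    a loss observed while the relative one-way trip time (ROTT) is more
    than k standard deviations above its mean suggests a filling queue and
    is taken as congestive; otherwise it is attributed to the wireless link."""
    mean = statistics.mean(rott_samples)
    dev = statistics.stdev(rott_samples)
    return "congestion" if rott_at_loss > mean + k * dev else "wireless"
```

The intuition is that congestive loss coincides with a ROTT "spike" caused by queue build-up, while random wireless loss does not.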
7

Özbay, Kaan. "A framework for dynamic traffic diversion during non-recurrent congestion: models and algorithms." Diss., Virginia Tech, 1996. http://hdl.handle.net/10919/39210.

Abstract:
Real-time control of traffic diversion during non-recurrent congestion continues to be a challenging topic. In particular, with the advent of Intelligent Transportation Systems (ITS), the need for models and algorithms that control diversion in real time in response to current traffic conditions has become evident. Several researchers have tried to solve this on-line control problem by adopting different approaches such as expert systems, feedback control, and mathematical programming. To ensure the effectiveness of real-time traffic diversion, an implementation framework is required that is capable of predicting the impact of the incident on traffic flow, generating feasible alternate routes in real time, and controlling traffic to achieve a pre-set goal based on a system-optimal or user-equilibrium concept. In this dissertation, a framework that satisfies these requirements is adopted, consisting of a "diversion initiation module", a "diversion strategy planning module", and a "control and routing module" which determines the route guidance commands in real time. The incident duration data collected by the Northern Virginia incident management agencies are analyzed to determine the major factors that affect incident clearance duration. Next, prediction/decision trees are developed for different types of incidents. Validation of these trees, using data not employed in their development, shows that they perform well for the majority of incidents. A simple deterministic queuing approach is used to predict the delays caused by an incident whose clearance duration has been predicted with the prediction/decision trees.
The diversion strategy planning module, the Network Generator, is developed as a knowledge-based expert system that uses simple expert rules in conjunction with historical and real-time data to determine the incident impact zone and to eliminate links that are not suitable for diversion. Finally, it generates alternate routes for diversion using this modified network. The Network Generator is tested using simulation on a small portion of the Fairfax network. Finally, feedback control models for dynamic traffic routing, in both distributed and lumped parameter settings, are developed. Methods for developing controllers for these models are also discussed. Two heuristic and analytic feedback controllers for the space-discretized lumped parameter models are developed, and their effectiveness for real-time traffic control is shown by simulating several scenarios on a simple network. An analytic feedback controller is also designed using a feedback linearization technique for the space-discretized model. This controller also performed very well in simulations of various scenarios and proved to be an effective solution to this feedback control problem.
Ph.D.
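A deterministic queuing delay prediction of the kind the abstract mentions can be illustrated with the classic queueing-triangle calculation; the function and its example rates are a generic textbook sketch, not the dissertation's model.

```python
def incident_queue_profile(arrival_rate, normal_capacity, incident_capacity,
                           clearance_time):
    """Deterministic queuing sketch of incident delay (illustrative).
    The queue builds at (arrival_rate - incident_capacity) while the
    incident blocks the road, then drains at (normal_capacity -
    arrival_rate) after clearance. Rates in veh/h, time in hours."""
    build_rate = arrival_rate - incident_capacity      # veh/h while blocked
    max_queue = build_rate * clearance_time            # vehicles at clearance
    drain_rate = normal_capacity - arrival_rate        # veh/h after clearance
    time_to_clear = max_queue / drain_rate             # hours to dissipate
    # Total delay = area of the queueing triangle (vehicle-hours).
    total_delay = 0.5 * max_queue * (clearance_time + time_to_clear)
    return max_queue, total_delay
```

Fed with a clearance time predicted by the decision trees, such a model yields the delay estimates used to decide whether diversion is warranted.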
8

Braga, André Ribeiro. "Controle de congestionamento para voz sobre IP em HSDPA." Universidade Federal do Ceará, 2006. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=2073.

Abstract:
The growth in the number of Voice over IP (VoIP) users makes it the service of highest interest for cellular operators to provide. On the other hand, it demands very strict Quality of Service (QoS) control, which becomes even more complicated in wireless networks, because packets can be lost due to radio link transmission errors as well as network congestion. Within this paradigm, congestion control strategies appear as a good solution for meeting QoS guarantees under high load, where resources are exhausted and service quality is threatened. This work evaluates congestion control algorithms aiming to improve system capacity and QoS guarantees for speech users. The algorithms evaluated are packet scheduling and admission control. An analysis of mixed-service scenarios composed of VoIP and Web users is also provided. The main focus is on controlling packet delay, since this is a crucial requirement for real-time services such as VoIP. The results show that a suitable congestion control framework is able to improve system performance and mitigate the effects of overload conditions. In the mixed-service scenario, the algorithms are capable of performing resource reservation depending on the priority defined for each service, leading to an increase in the quality perceived by the more sensitive service through a slight degradation of the more robust service.
9

TRAVERSO, STEFANO. "Design of Algorithms and Protocols for Peer-To-Peer Streaming Systems." Doctoral thesis, Politecnico di Torino, 2012. http://hdl.handle.net/11583/2497192.

Abstract:
Peer-to-Peer Streaming (P2P-TV) systems have been studied in the literature for some time and are becoming popular among users as well. P2P-TV systems target the real-time delivery of a video stream, therefore posing different challenges compared to more traditional peer-to-peer applications such as file sharing (BitTorrent) or VoIP (Skype). This document focuses on mesh-based P2P-TV systems in which the peers form a generic overlay topology at the application level over which they exchange small "chunks" of video. In particular, we study two problems related to this kind of system: i) how to induce peers to share their available resources, such as their available upload bandwidth, in a totally automatic and distributed way; ii) how to localize P2P-TV traffic in order to lower the load on the underlying transport network without impairing the quality of experience (QoE) perceived by users. Goal i) can be achieved by acting on two key aspects of P2P-TV systems: the design of the trading phase needed to exchange chunks among neighbors, and the strategy adopted by peers to choose the neighbors to connect with, i.e., the policy employed to build and maintain the overlay topology at the application level. The former task is accomplished with the development of algorithms that adapt the rate at which peers offer chunks to their neighbors to both the peer's available upload bandwidth and the system demand. The results presented in this document show that automatically adjusting the transmission rate to the available upload capacity reduces chunk delivery delays, thus improving the user experience. Focusing on the latter problem, we prove that the topological properties of the overlay have a deep effect on both users' QoE and network impact. We developed a smart, flexible and fully distributed algorithm for neighbor selection and implemented it in a real P2P-TV client. This allowed us to compare several different strategies for overlay construction in a large campaign of test-bed experiments. Results show that we can actually achieve the goal of leading peers to efficiently share their available resources (goal i) while keeping a good degree of traffic localization and hence lowering the load on the underlying network (goal ii). Furthermore, our experimental results show that a proper selection of the neighborhood leads to a win-win situation where the performance of the application and the QoE are both improved, while the network stress is nicely reduced.
10

Roverso, Roberto. "A System, Tools and Algorithms for Adaptive HTTP-live Streaming on Peer-to-peer Overlays." Doctoral thesis, KTH, Programvaruteknik och Datorsystem, SCS, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134351.

Abstract:
In recent years, adaptive HTTP streaming protocols have become the de facto standard in the industry for the distribution of live and video-on-demand content over the Internet. In this thesis, we solve the problem of distributing adaptive HTTP live video streams to a large number of viewers using peer-to-peer (P2P) overlays. We do so under the assumption that our solution must deliver a quality of user experience equal to that of a CDN while trying to minimize the load on the content provider's infrastructure. In the design of our solution we also take into consideration the realities of HTTP streaming protocols, such as the pull-based approach and adaptive bitrate switching. The result of this work is a system, which we call SmoothCache, that provides CDN-quality adaptive HTTP live streaming using P2P algorithms. Our experiments on a real network of thousands of consumer machines show that, besides meeting the CDN-quality constraints, SmoothCache consistently delivers up to 96% savings towards the source of the stream in a single-bitrate scenario and 94% in a multi-bitrate scenario. In addition, we have conducted a number of pilot deployments in large enterprises with the same system, albeit tailored to private networks. Results with thousands of real viewers show that our platform offloads bottlenecks in the private network by 91.5% on average. These achievements were made possible by advancements in multiple research areas that are also presented in this thesis. Each of the contributions is novel with respect to the state of the art and can be applied outside the context of our application; in our system they serve the purposes described below. We built a component-based, event-driven framework to facilitate the development of our live streaming application. The framework allows the same code to run both in simulation and in real deployments. To obtain scalability and accuracy in simulation, we designed a novel flow-based bandwidth emulation model. To deploy our application on real networks, we developed a network library with the novel feature of on-the-fly prioritization of transfers. The library is layered over the UDP protocol and supports NAT traversal techniques. As part of this thesis, we have also improved on the state of the art of NAT traversal, resulting in a higher probability of direct connectivity between peers on the Internet. Because of the presence of NATs on the Internet, discovering new peers and collecting statistics on the overlay through peer sampling is problematic. We therefore created a NAT-aware peer sampling service that provides samples one order of magnitude fresher than existing peer sampling protocols. Finally, we designed SmoothCache as a peer-assisted live streaming system based on a distributed caching abstraction. In SmoothCache, peers retrieve video fragments from the P2P overlay as quickly as possible, or fall back to the source of the stream to preserve the timeliness of delivery. To produce savings, the caching system strives to fill up the local cache of each peer ahead of playback by prefetching content. Fragments are efficiently distributed by a self-organizing overlay network that takes into account many factors, such as upload bandwidth capacity, connectivity constraints, performance history, and the bitrate currently being watched.


11

Figueiredo, Ricardo Nogueira de. "Avaliação de algoritmos de controle de congestionamento como controle de admissão em um modelo de servidores web com diferenciação de serviços." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-18052011-112317/.

Abstract:
This MSc dissertation presents the implementation of a prototype distributed web server based on SWDS, a model for a web server with service differentiation, together with the implementation and evaluation of selection algorithms that adopt the concept of congestion control for HTTP requests. Besides implementing a test platform, this work also evaluates the behavior of two congestion control algorithms, Drop Tail and RED (Random Early Detection), which are widely discussed in the scientific literature and applied in computer networks. The results show that, despite the particularities of each algorithm, there is a strong relation between response time and the number of requests accepted by the server.
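The two admission policies this dissertation compares can be caricatured in a few lines. The thresholds are illustrative, and `red_admit` takes an already-averaged queue length as input, omitting RED's EWMA step for brevity.

```python
import random

def drop_tail_admit(queue_len, capacity=100):
    """Drop Tail admission (sketch): accept a request while the queue
    has room, refuse everything once it is full."""
    return queue_len < capacity

def red_admit(avg_queue, min_th=40, max_th=80, max_p=0.2):
    """RED-style admission (sketch): probabilistically refuse requests
    before the queue is full, more aggressively as the average grows."""
    if avg_queue < min_th:
        return True
    if avg_queue >= max_th:
        return False
    # Rejection probability grows linearly between the two thresholds.
    p_reject = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() >= p_reject
```

The trade-off the abstract reports (response time versus requests accepted) follows directly: Drop Tail admits more requests but lets the queue, and hence response time, grow longer.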
12

Tlaiss, Ziad. "Automated network packet traces analysis methods for fault recognition and TCP flavor identification." Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2023. http://www.theses.fr/2023IMTA0384.

Abstract:
In recent years, the field of network troubleshooting has garnered significant attention from researchers due to the complexity and importance of the task. The work presented in this thesis focuses on automating network troubleshooting using performance metrics extracted from packet captures. The first contribution lies in extracting features that identify the root cause of an anomaly by analyzing TCP packet traces exhibiting poor performance. We categorize four frequently observed causes of degradation: transmission problems, congestion problems, jitter problems, and application-limited connections. The second contribution is an automated method for detecting the moment a connection exits the Slow-Start state. Its significance lies in saving valuable time when analyzing network degradations, as the Slow-Start state serves as a key indicator for fault diagnosis. The third contribution is the identification of the BBR congestion control algorithm. The primary goal of our approach is to detect whether packet pacing is employed in a TCP connection. The method relies on modeling the distribution of inter-packet durations during the Slow-Start state, distinguishing the unimodal inter-packet distributions produced by BBR from the mixed two-component distributions produced by CUBIC.
13

Soua, Ahmed. "Vehicular ad hoc networks : dissemination, data collection and routing : models and algorithms." Phd thesis, Institut National des Télécommunications, 2013. http://tel.archives-ouvertes.fr/tel-00919774.

Abstract:
Each day, humanity loses thousands of people on the roads as they travel to work, to study, or even for leisure. The financial cost of these injuries is also terrifying: some statistics put the financial cost of vehicle accidents at 160 billion euros per year in Europe. These alarming figures have driven researchers, automotive companies and public governments to improve the safety of our transportation systems and communication technologies, aiming at offering safer roads and smooth driving to human beings. In this context, Vehicular Ad hoc Networks (VANETs), where vehicles are able to communicate with each other and with existing road-side units, emerge as a promising wireless technology able to enhance the vision of drivers and offer a larger telematic horizon. Promising VANET applications are not restricted to road safety but span from vehicle traffic optimization, such as flow congestion control, to commercial applications like file sharing and Internet access. Safety applications require that their alert information be propagated to the concerned vehicles (located in the hazardous zone) with little delay and high reliability. For these reasons, this category of applications is considered delay-sensitive and broadcast-oriented in nature. While classical blind flooding is rapid, its major drawback is its huge bandwidth utilization. In this thesis, we are interested in enhancing vehicular communications under different scenarios and optimizations. First, we focus on deriving a new solution (EBDR) to disseminate alert messages among moving vehicles while keeping dissemination efficient and rapid. Our proposal is based on directional antennas to broadcast messages and a route guidance algorithm to choose the best path for the packets. The findings confirm the efficiency of our approach in terms of probability of success and end-to-end delays. Moreover, in spite of the broadcast nature of the proposed technique, all transmissions stop very soon after the arrival of a packet at its destination, a strong feature in the conception of EBDR. Second, we propose a novel mathematical framework to evaluate the performance of EBDR analytically. Although most of the techniques proposed in the literature use experimental or simulation tools to defend their performance, we rely here on mathematical models to confirm our results. The proposed framework allows us to derive meaningful performance metrics, including the probability of transmission success and the required number of hops to reach the final destination. Third, we refine our broadcast-based routing scheme EBDR to provide more efficient broadcasting by adjusting the transmission range of each vehicle based on its distance to the destination and the local node density. This mechanism allows better minimization of interference and saves bandwidth. Furthermore, an analytical model is derived to calculate the transmission area in the case of a simplified node distribution. Finally, we are interested in data collection mechanisms, as they make inter-vehicle communications more efficient and reliable and minimize bandwidth utilization. Our technique uses Q-learning to collect data among moving vehicles in VANETs. The aim of using a learning technique is to make the collection operation more reactive to node mobility and topology changes. In the simulations, we compare it to a non-learning version to study the effect of the learning technique. The findings show that our technique far outperforms other proposals and achieves a good trade-off between delay and collection ratio. In conclusion, we believe that the different contributions presented in this thesis will improve the efficiency of inter-vehicle communications in both the dissemination and data collection directions. In addition, our mathematical contributions will enrich the literature with suitable models for evaluating broadcasting techniques in urban zones.
14

Soua, Ahmed. "Vehicular ad hoc networks : dissemination, data collection and routing : models and algorithms." Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2013. http://www.theses.fr/2013TELE0028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Chaque jour, l'humanité perd des milliers de personnes sur les routes pendant qu'ils se rendaient à travailler, à étudier ou même à se distraire. Ce nombre alarmant s'accumule avec le coût financier terrifiant de ces décès: Certaines statistiques évaluent le coût à 160 milliards d'euros par an en Europe. Dans ce contexte, les réseaux véhiculaires (VANETs) émergent comme une technologie sans fil prometteuse capable d'améliorer la vision des conducteurs et ainsi offrir un horizon télématique plus vaste. Les applications de sécurité routière exigent que le message d'alerte soit propagé de proche en proche par les véhicules jusqu'à arriver à la zone concernée par l'alerte tout en respectant les délais minimaux exigés par ce type d'applications et la grande fiabilité des transmissions. Dans cette thèse, nous nous intéressons à l'amélioration de l'efficacité des communications inter-véhiculaires sous différents scénarios: tout d'abord, nous nous concentrons sur le développement d'une nouvelle solution, appelée EBDR, pour disséminer les informations d'alertes dans un réseau VANET tout en assurant des courts délais de bout en bout et une efficacité pour les transmissions. Notre proposition est basée sur des transmissions dirigées effectuées à l'aide des antennes directionnelles pour la diffusion des messages et un algorithme de guidage d'itinéraire afin de choisir le meilleur chemin pour le paquet. En dépit de son fonctionnement en diffusion, les transmissions de notre technique s'arrêtent très rapidement après l'arrivée du paquet à la destination finale ce qui représente une caractéristique fondamentale dans la conception d’EBDR. Deuxièmement, nous proposons un framework mathématique ayant pour objectif l'évaluation des performances d’EBDR analytiquement. Nos modèles analytiques permettent de dériver des métriques de performances significatives à savoir la probabilité de succès et le nombre de sauts requis pour atteindre la destination finale. 
En outre, nous proposons une amélioration de notre protocole EBDR dans le but de fournir une diffusion plus efficace. Pour cela, nous nous basons sur l'ajustement de la puissance de transmission de chaque véhicule en fonction de la distance qui le sépare de la destination et la densité locale des nœuds. Ce mécanisme de contrôle de congestion permet de mieux minimiser les interférences et économiser de la bande passante. En plus, un modèle mathématique a été élaboré pour calculer la surface de la zone de transmission dans le cas d'une distribution uniforme des nœuds. Finalement, nous nous sommes intéressés aux mécanismes de collecte de données dans les réseaux véhiculaires. Notre approche est basée sur l'utilisation du principe du Q-learning pour la collecte des données des véhicules en mouvement. L'objectif de l'utilisation de ce mécanisme d'apprentissage est de rendre l'opération de collecte mieux adaptée à la mobilité des nœuds et le changement rapide de la topologie du réseau. Notre technique a été comparée à des méthodes n'utilisant pas du "learning", afin d'étudier l'effet du mécanisme d'apprentissage. Les résultats ont montré que notre approche dépasse largement les autres propositions en terme de performances et réalise un bon compromis entre le taux de collecte et les délais de bout en bout. Pour conclure, nous pensons que nos différentes contributions présentées tout le long de cette thèse permettront d'améliorer l'efficacité des communications sans fil inter-véhiculaires dans les deux directions de recherches ciblées par cette thèse à savoir : la dissémination des messages et la collecte des données. En outre, nos contributions de modélisation mathématique enrichiront la littérature en termes de modèles analytiques capables d'évaluer les techniques de transmission des données dans un réseau véhiculaire
Every day, humanity loses thousands of people on the roads as they travel to work, to study, or simply for leisure. The financial cost of these accidents is also staggering: some estimates put the cost of vehicle accidents in Europe at 160 billion euros each year. These alarming figures have driven researchers, automotive companies and public authorities to improve the safety of our transportation systems and communication technologies, aiming to offer safer roads and smoother driving. In this context, Vehicular Ad hoc Networks, in which vehicles communicate with each other and with existing roadside units, emerge as a promising wireless technology able to enhance drivers' vision and offer a larger telematic horizon. Promising VANET applications are not restricted to road safety: they span from vehicle traffic optimization, such as flow congestion control, to commercial applications such as file sharing and Internet access. Safety applications require that their alert information be propagated to the concerned vehicles (those located in the hazardous zone) with little delay and high reliability; this category of applications is therefore considered delay-sensitive and broadcast-oriented in nature. While classical blind flooding is rapid, its major drawback is its huge bandwidth utilization. In this thesis, we are interested in enhancing vehicular communications under different scenarios and optimizations. First, we derive a new solution (EBDR) to disseminate alert messages among moving vehicles while keeping dissemination efficient and rapid. Our proposal is based on directional antennas to broadcast messages and a route guidance algorithm to choose the best path for the packets. Findings confirm the efficiency of our approach in terms of probability of success and end-to-end delays.
Moreover, in spite of the broadcast nature of the proposed technique, all transmissions stop very soon after a packet reaches its destination, a strong feature of the design of EBDR. Second, we propose a novel mathematical framework to evaluate the performance of EBDR analytically. While most techniques in the literature rely on experiments or simulation to defend their performance, we rely here on mathematical models to confirm our results. Our framework allows us to derive meaningful performance metrics, including the probability of transmission success and the number of hops required to reach the final destination. Third, we refine our broadcast-based routing protocol EBDR to provide more efficient broadcasting by adjusting the transmission range of each vehicle based on its distance to the destination and the local node density. This mechanism reduces interference and saves bandwidth. Furthermore, an analytical model is derived to calculate the transmission area in the case of a simplified node distribution. Finally, we are interested in data collection mechanisms, as they make inter-vehicle communications more efficient and reliable and minimize bandwidth utilization. Our technique uses Q-learning to collect data among moving vehicles in VANETs; the aim of using a learning technique is to make the collection operation more reactive to node mobility and topology changes. In the simulations, we compare it to a non-learning version to study the effect of the learning technique. Findings show that our technique far outperforms other proposals and achieves a good trade-off between delay and collection ratio. In conclusion, we believe that the contributions presented in this thesis will improve the efficiency of inter-vehicle communications in both the dissemination and data collection directions.
In addition, our mathematical contributions will enrich the literature in terms of constructing suitable models to evaluate broadcasting techniques in urban zones.
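The abstract does not detail the Q-learning formulation used for data collection; as a hedged sketch of the mechanism it builds on, a tabular Q-learning agent with epsilon-greedy action selection might look like the following (the state and action encodings here are hypothetical stand-ins for the thesis's neighbour-selection problem, not its actual design):

```python
import random
from collections import defaultdict

class QLearningCollector:
    """Minimal tabular Q-learning agent. In a VANET data-collection setting,
    a 'state' could encode the current holder of the collected data and an
    'action' the choice of next-hop neighbour; both are left abstract here."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)              # (state, action) -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state, actions):
        # epsilon-greedy: explore occasionally, otherwise pick the best action
        if random.random() < self.epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state, next_actions):
        # Standard Q-learning backup:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max((self.q[(next_state, a)] for a in next_actions),
                        default=0.0)
        self.q[(state, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(state, action)])
```

The reward signal (e.g. successful delivery, low delay) is what would make the collection reactive to mobility, since stale neighbour choices stop being reinforced.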
15

Robert, Remi. "Comparative study of the performance of TCP congestion control algorithms in an LTE network : A simulation approach to evaluate the performance of different TCP implementations in an LTE network." Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-187983.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
With the emergence of smartphones, data traffic became the dominant application type in current mobile networks. This thesis deals with the comparison of the performance of different TCP congestion control algorithms when running in an LTE network. We developed an environment allowing for the simulation of the Linux implementation of different TCP variants and compared their performance in different scenarios. The results show that the loss-based variants manage to reach full link utilisation but create huge amounts of delay, whereas the delay-based mechanisms keep the delay under control but are not always able to fill the link.
På grund av utvecklingen av smartphones blev data den största källan av traffik i aktuella mobilnätverk. Den här uppsatsen handlar om jämförelsen av utförandet av olika TCP Congestion Control Algorithms när de används i LTE nätverk. Vi utveklade ett system som kan användas för att simulera Linux implementeringar av olika TCP versioner och jämföra deras utförande i olika situationer.
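The loss-based versus delay-based distinction the abstract draws can be illustrated with two simplified per-RTT window-update rules, one Reno-style (loss-based) and one Vegas-style (delay-based). These are textbook sketches with illustrative constants, not the Linux implementations the thesis actually simulates:

```python
def reno_update(cwnd, loss, mss=1.0):
    """Loss-based AIMD (Reno-style): grow until a loss is seen, then halve.
    Loss-based probing fills the link but also fills the bottleneck buffer,
    which is what creates the large delays observed in LTE."""
    if loss:
        return max(cwnd / 2.0, mss)      # multiplicative decrease
    return cwnd + mss                    # additive increase per RTT

def vegas_update(cwnd, rtt, base_rtt, alpha=2.0, beta=4.0, mss=1.0):
    """Delay-based (Vegas-style): estimate the number of packets queued in
    the network from RTT inflation and keep it between alpha and beta, so
    queues (and hence delay) stay small -- at the risk of under-filling
    the link."""
    diff = cwnd * (1.0 - base_rtt / rtt)   # estimated packets in queue
    if diff < alpha:
        return cwnd + mss
    if diff > beta:
        return cwnd - mss
    return cwnd
```

The trade-off reported in the thesis follows directly from these rules: Reno backs off only on loss (full buffers), while Vegas backs off as soon as delay builds up (empty buffers, possibly idle link).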
16

Corbel, Romuald. "Évolution des protocoles de transport du point de vue de l'équité." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2019. http://www.theses.fr/2019IMTA0160.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Cette thèse s’inscrit dans le cadre de la mesure de la congestion sur le réseau et sur l’évolution des protocoles de transport. Des changements sont apportés continuellement afin de répondre aux besoins des utilisateurs et des nouveaux services. La congestion est l’un des problèmes les plus critiques car elle a un impact sur la performance des réseaux Internet, d’où la nécessité pour les algorithmes de contrôle de congestion de la prévenir ou de la supprimer. Aujourd’hui, aucun algorithme ne répond parfaitement aux exigences attendues, et de nombreux travaux de recherches sont en cours. Néanmoins, ces nouveaux algorithmes peuvent affecter l’équité du réseau étant donné que le comportement du protocole de transport peut changer radicalement en fonction de l’algorithme de contrôle de congestion utilisé dans les points finaux. De plus, durant ces dernières années, les protocoles de transport ont subi des évolutions majeures. Un exemple significatif récent est celui de Quick UDP Internet Connections (QUIC), un protocole introduit par Google, qui vise à remplacer deux protocoles de transport et de sécurité largement utilisés, à savoir Transmission Control Protocol (TCP) et Transport Layer Security (TLS).QUIC est implémenté dans les applications utilisateurs (plutôt que dans le noyau du système d’exploitation). Il se veut résistant à l’ossification et donc de ce fait il est plus versatile. Ceci rend alors les fournisseurs de contenus, comme Google, hégémoniques sur le débit de ses utilisateurs. En raison du développement progressif des algorithmes de contrôle de congestion et de la nature évolutive des protocoles de transport, de nouveaux défis se posent en matière de gestion de l’équité. C’est pourquoi, dans cette thèse nous nous sommes orientés sur le développement d’une plateforme de tests pour mesurer l’équité réseau à partir du débit des différents flux. 
De plus, afin de caractériser l’équité telle que la perçoit un utilisateur, nous nous sommes concentrés sur la détermination d’une procédure impartiale d’évaluation de l’équité durant toute une session d’un flux de transport (nommée Session Fairness Assessment (SFA) et Weighted Session Fairness Assessment (WSFA)). A partir de ces éléments, nous avons analysé spécifiquement l’équité des protocoles lorsque les flux TCP et QUIC coexistent sur un réseau fixe et mobile. Lors de nos évaluations de l’équité, nous avons identifié l’impact des aspects de la mise en œuvre de QUIC tels que : l’émulation de connexions TCP multiples, la limitation de la taille des fenêtres de congestion et l’utilisation de l’option hybrid start (hystart). Les résultats montrent que ces mécanismes ont une forte influence sur l’équité que ce soit sur réseau fixe ou réseau mobile. En effet, un mauvais réglage des paramètres par défaut de ces mécanismes ou l’activation de l’option hystart peut affecter la performance des protocoles de transport et par conséquent l’équité. En ce qui concerne l’évaluation des algorithmes de contrôle de congestion, les résultats montrent que l’équité entre deux algorithmes différents dépend de la configuration du réseau. Cette conclusion démontre qu’une procédure de mesures, telle que celle qui a été présentée dans cette thèse, est pertinente pour réaliser l’évaluation de l’équité. Dans cette thèse nous pouvons conclure que le manque de standardisation, par exemple de l’émulation de connexions TCP multiples dans QUIC nous amène à nous interroger plus largement sur la manière dont la philosophie de conception de QUIC tient compte de l’équité. De plus, les résultats obtenus sur l’évaluation de l’équité des algorithmes de contrôle de congestion, nous permet de remettre en cause l’évaluation de l’équité de plusieurs contributions lorsqu’elle n’est pas testée dans suffisamment de configurations réseau
This thesis is set in the context of measuring congestion on the network and the evolution of transport protocols. Changes are continually being made to meet the needs of users and new services. Congestion is one of the most critical issues because it has an impact on the performance of Internet networks, hence the need for congestion control algorithms to prevent or remove it. Today, no algorithm perfectly meets the expected requirements, and a lot of research is underway. Nevertheless, these new algorithms can affect network fairness, since the behaviour of the transport protocol can change radically depending on the congestion control algorithm used in the endpoints. In addition, in recent years, transport protocols have undergone major changes. A recent significant example is Quick UDP Internet Connections (QUIC), a protocol introduced by Google, which aims to replace two widely used transport and security protocols, Transmission Control Protocol (TCP) and Transport Layer Security (TLS). QUIC is implemented in user applications (rather than in the operating system kernel). It is designed to be resistant to ossification and therefore more versatile. This makes content providers, such as Google, hegemonic over the data rate of their users. Due to the progressive development of congestion control algorithms and the evolving nature of transport protocols, new challenges arise in fairness management. This is why, in this thesis, we focused on the development of a test platform to measure network fairness based on the throughput of the different flows. In addition, in order to characterize fairness as perceived by a user, we focused on determining an impartial procedure for assessing fairness during an entire session of a transport flow (called Session Fairness Assessment (SFA) and Weighted Session Fairness Assessment (WSFA)). Based on these elements, we specifically analyzed the fairness of the protocols when TCP and QUIC flows coexist on fixed and mobile networks.
In our fairness assessments, we identified the impact of aspects of the QUIC implementation such as emulating multiple TCP connections, limiting the size of congestion windows, and using the hybrid start (hystart) option. The results show that these mechanisms have a strong influence on fairness on both fixed and mobile networks. Indeed, a wrong setting of the default parameters of these mechanisms, or the activation of the hystart option, can affect the performance of transport protocols and therefore fairness. With regard to the evaluation of congestion control algorithms, the results show that the fairness between two different algorithms depends on the network configuration. This conclusion demonstrates that a measurement procedure, such as the one presented in this thesis, is relevant to conducting a fairness assessment. In this thesis we can conclude that the lack of standardization, for example of the emulation of multiple TCP connections in QUIC, leads us to question more broadly how QUIC's design philosophy takes fairness into account. In addition, the results obtained on the evaluation of the fairness of congestion control algorithms allow us to question the fairness evaluation of several contributions when it is not tested in enough network configurations.
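SFA and WSFA are the thesis's own procedures and their exact definitions are not given in the abstract; the standard per-interval building block for this kind of throughput-based fairness measurement is Jain's fairness index, sketched here purely for illustration:

```python
def jain_index(throughputs):
    """Jain's fairness index over per-flow throughputs:
    J = (sum x_i)^2 / (n * sum x_i^2).
    J = 1.0 means a perfectly fair share; J = 1/n means one flow takes
    everything. Session-level procedures such as the thesis's SFA/WSFA
    would aggregate a per-interval index like this over a whole session."""
    n = len(throughputs)
    total = sum(throughputs)
    if n == 0 or total == 0:
        return 0.0
    return total ** 2 / (n * sum(x * x for x in throughputs))
```

For example, two flows at 5 Mbit/s each give an index of 1.0, while 10 Mbit/s against 0 gives 0.5, the worst case for two flows.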
17

Ramachandran, Shyamal. "Link Adaptation Algorithm and Metric for IEEE Standard 802.16." Thesis, Virginia Tech, 2004. http://hdl.handle.net/10919/31364.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Broadband wireless access (BWA) is a promising emerging technology. In the past, most BWA systems were based on proprietary implementations. The Institute of Electrical and Electronics Engineers (IEEE) 802.16 task group recently standardized the physical (PHY) and medium-access control (MAC) layers for BWA systems. To operate in a wide range of physical channel conditions, the standard defines a robust and flexible PHY. A wide range of modulation and coding schemes are defined. While the standard provides a framework for implementing link adaptation, it does not define how exactly adaptation algorithms should be developed. This thesis develops a link adaptation algorithm for the IEEE 802.16 standard's WirelessMAN air interface. This algorithm attempts to minimize the end-to-end delay in the system by selecting the optimal PHY burst profile on the air interface. The IEEE 802.16 standard recommends measuring C/(N+I) at the receiver to initiate a change in the burst profile, based on a comparison of the instantaneous C/(N+I) with preset C/(N+I) thresholds. This research determines the C/(N+I) thresholds for the standard-specified channel Type 1. To determine the precise C/(N+I) thresholds, the end-to-end (ETE) delay performance of IEEE 802.16 is studied for different PHY burst profiles at varying signal-to-noise ratio values. Based on these performance results, we demonstrate that link layer ETE delay does not reflect the physical channel condition and is therefore not suitable for use as the criterion for the determination of the C/(N+I) thresholds. The IEEE 802.16 standard specifies that ARQ should not be implemented at the MAC layer. Our results demonstrate that this design decision renders the link layer metrics incapable of use in the link adaptation algorithm. Transmission Control Protocol (TCP) delay is identified as a suitable metric to serve as the link quality indicator.
Our results show that buffering and retransmissions at the transport layer cause ETE TCP delay to rise exponentially below certain SNR values. We use TCP delay as the criterion to determine the SNR entry and exit thresholds for each of the PHY burst profiles. We present a simple link adaptation algorithm that attempts to minimize the end-to-end TCP delay based on the measured signal-to-noise ratio (SNR). The effects of Internet latency, TCP's performance enhancement features and network traffic on the adaptation algorithm are also studied. Our results show that delay in the Internet can considerably affect the C/(N+I) thresholds used in the LA algorithm. We also show that the load on the network also impacts the C/(N+I) thresholds significantly. We demonstrate that it is essential to characterize Internet delays and network load correctly, while developing the LA algorithm. We also demonstrate that TCP's performance enhancement features do not have a significant impact on TCP delays over lossy wireless links.
Master of Science
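The entry/exit thresholds per burst profile described in this abstract lend themselves to a simple hysteresis rule: step up to a faster profile when the measured SNR exceeds its entry threshold, fall back when SNR drops below the current profile's exit threshold. The profile names and threshold values below are illustrative placeholders, not the values determined in the thesis:

```python
# Illustrative (entry, exit) SNR thresholds in dB per burst profile,
# ordered from most robust/slowest to least robust/fastest.
PROFILES = [
    ("BPSK-1/2",   3.0,  1.0),
    ("QPSK-1/2",   6.0,  4.0),
    ("QPSK-3/4",   8.5,  6.5),
    ("16QAM-1/2", 11.5,  9.5),
    ("16QAM-3/4", 15.0, 13.0),
]

def adapt(current_idx, snr):
    """Threshold adaptation with hysteresis: because each profile's entry
    threshold is above its exit threshold, small SNR fluctuations around a
    single value cannot cause oscillation between adjacent profiles."""
    # Step up while the next (faster) profile's entry threshold is met.
    while current_idx + 1 < len(PROFILES) and snr >= PROFILES[current_idx + 1][1]:
        current_idx += 1
    # Fall back while the current profile's exit threshold is violated.
    while current_idx > 0 and snr < PROFILES[current_idx][2]:
        current_idx -= 1
    return current_idx
```

In the thesis's scheme the thresholds would be derived from end-to-end TCP delay measurements rather than chosen by hand.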
18

Afifi, Mohammed Ahmed Melegy Mohammed. "TCP FTAT (Fast Transmit Adaptive Transmission): A New End-To- End Congestion Control Algorithm." Cleveland State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=csu1414689425.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Fares, Rasha H. A. "Performance modelling and analysis of congestion control mechanisms for communication networks with quality of service constraints. An investigation into new methods of controlling congestion and mean delay in communication networks with both short range dependent and long range dependent traffic." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/5435.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Active Queue Management (AQM) schemes are used for ensuring the Quality of Service (QoS) in telecommunication networks. However, they are sensitive to parameter settings and have weaknesses in detecting and controlling congestion under dynamically changing network situations. Another drawback of the AQM algorithms is that they have been applied only to Markovian models, which are considered Short Range Dependent (SRD) traffic models. However, traffic measurements from communication networks have shown that network traffic can exhibit self-similar as well as Long Range Dependent (LRD) properties. Therefore, it is important to design new algorithms not only to control congestion but also to have the ability to predict the onset of congestion within a network. An aim of this research is to devise some new congestion control methods for communication networks that make use of various traffic characteristics, such as LRD, which have not previously been employed in the congestion control methods currently used in the Internet. A queueing model with a number of ON/OFF sources has been used and this incorporates a novel congestion prediction algorithm for AQM. The simulation results have shown that applying the algorithm can provide better performance than an equivalent system without the prediction. Modifying the algorithm by the inclusion of a sliding window mechanism has been shown to further improve the performance in terms of controlling the total number of packets within the system and improving the throughput. Also considered is the important problem of maintaining QoS constraints, such as mean delay, which is crucially important in providing satisfactory transmission of real-time services over multi-service networks like the Internet, which were not originally designed for this purpose. An algorithm has been developed to provide a control strategy that operates on a buffer which incorporates a moveable threshold.
The algorithm has been developed to control the mean delay by dynamically adjusting the threshold, which, in turn, controls the effective arrival rate by randomly dropping packets. This work has been carried out using a mixture of computer simulation and analytical modelling. The performance of the new methods that have
Ministry of Higher Education in Egypt and the Egyptian Cultural Centre and Educational Bureau in London
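The moveable-threshold control strategy can be sketched as a queue that randomly drops packets once occupancy exceeds a threshold, and nudges that threshold to track a mean-delay target. The drop law and step sizes below are illustrative assumptions for exposition, not the thesis's exact algorithm:

```python
import random

class AdaptiveThresholdQueue:
    """Sketch of a buffer with a moveable drop threshold. Random dropping
    beyond the threshold reduces the effective arrival rate; moving the
    threshold steers the resulting mean delay toward a target."""

    def __init__(self, capacity=100, threshold=50, target_delay=0.05, step=1):
        self.capacity, self.threshold = capacity, threshold
        self.target_delay, self.step = target_delay, step
        self.queue_len = 0

    def arrival(self):
        """Return True if the arriving packet is admitted."""
        if self.queue_len >= self.capacity:
            return False                      # hard tail drop at capacity
        if self.queue_len >= self.threshold:
            # Drop probability grows with the excursion past the threshold.
            p = (self.queue_len - self.threshold) / (self.capacity - self.threshold)
            if random.random() < p:
                return False
        self.queue_len += 1
        return True

    def adjust(self, measured_mean_delay):
        """Move the threshold: shrink it when delay is too high (drop
        earlier, drain the queue), grow it when delay budget remains."""
        if measured_mean_delay > self.target_delay:
            self.threshold = max(1, self.threshold - self.step)
        else:
            self.threshold = min(self.capacity, self.threshold + self.step)
```

By Little's law, holding the mean queue length down via the threshold is what holds the mean delay down for a given service rate.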
20

Fares, Rasha Hamed Abdel Moaty. "Performance modelling and analysis of congestion control mechanisms for communication networks with quality of service constraints : an investigation into new methods of controlling congestion and mean delay in communication networks with both short range dependent and long range dependent traffic." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/5435.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Active Queue Management (AQM) schemes are used for ensuring the Quality of Service (QoS) in telecommunication networks. However, they are sensitive to parameter settings and have weaknesses in detecting and controlling congestion under dynamically changing network situations. Another drawback of the AQM algorithms is that they have been applied only to Markovian models, which are considered Short Range Dependent (SRD) traffic models. However, traffic measurements from communication networks have shown that network traffic can exhibit self-similar as well as Long Range Dependent (LRD) properties. Therefore, it is important to design new algorithms not only to control congestion but also to have the ability to predict the onset of congestion within a network. An aim of this research is to devise some new congestion control methods for communication networks that make use of various traffic characteristics, such as LRD, which have not previously been employed in the congestion control methods currently used in the Internet. A queueing model with a number of ON/OFF sources has been used and this incorporates a novel congestion prediction algorithm for AQM. The simulation results have shown that applying the algorithm can provide better performance than an equivalent system without the prediction. Modifying the algorithm by the inclusion of a sliding window mechanism has been shown to further improve the performance in terms of controlling the total number of packets within the system and improving the throughput. Also considered is the important problem of maintaining QoS constraints, such as mean delay, which is crucially important in providing satisfactory transmission of real-time services over multi-service networks like the Internet, which were not originally designed for this purpose. An algorithm has been developed to provide a control strategy that operates on a buffer which incorporates a moveable threshold.
The algorithm has been developed to control the mean delay by dynamically adjusting the threshold, which, in turn, controls the effective arrival rate by randomly dropping packets. This work has been carried out using a mixture of computer simulation and analytical modelling. The performance of the new methods that have.
21

Gliksberg, John. "New routing algorithms for heterogeneous exaflopic supercomputers." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
La construction de supercalculateurs performants nécessite d'optimiser les communications, et leur échelle exaflopique amène un risque inévitable de pannes relativement fréquentes.Pour un cluster avec un réseau et des équipements donnés, on améliore les performances en s'assurant que l'on sélectionne une bonne route pour chaque message tout en minimisant les conflits d'accès aux resources entre messages.Cette thèse se concentre sur la famille des réseaux fat-trees, pour laquelle nous donnons quelques grandes caractéristiques afin de mieux prendre en compte une classe réaliste de cette topologie, tout en conservant un avantage par rapport aux méthodes agnostiques.De plus, une approche d'évaluation statique partiellement nouvelle du risque de congestion est utilisée pour comparer les algorithmes.Une optimisation générique est présentée pour certaines applications sur des clusters avec des équipements hétérogènes.Les algorithmes proposés forment le résultat de plusieurs approches distinctes pour apporter des contributions dans le domaine du routage statique centralisé, en combinant rapidité de calcul, résilience aux pannes, et minimisation du risque de congestion
Building efficient supercomputers requires optimising communications, and their exaflopic scale causes an unavoidable risk of relatively frequent failures. For a cluster with given networking capabilities and applications, performance is achieved by providing a good route for every message while minimising resource access conflicts between messages. This thesis focuses on the fat-tree family of networks, for which we define several overarching properties so as to efficiently take into account a realistic superset of this topology, while keeping a significant edge over agnostic methods. Additionally, a partially novel static congestion risk evaluation method is used to compare algorithms. A generic optimisation is presented for some applications on clusters with heterogeneous equipment. The proposed algorithms use distinct approaches to improve centralised static routing by combining computation speed, fault-resilience, and minimal congestion risk.
22

Jourjon, Guillaume Electrical Engineering &amp Telecommunications Faculty of Engineering UNSW. "Towards a versatile transport protocol." Awarded by:University of New South Wales. Electrical Engineering & Telecommunications, 2008. http://handle.unsw.edu.au/1959.4/41480.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis presents three main contributions that aim to improve the transport layer of the current networking architecture. The transport layer is nowadays dominated by the use of TCP and its congestion control. Recently, new congestion control mechanisms have been proposed. Among them, TCP Friendly Rate Control (TFRC) appears to be one of the most complete. Nevertheless this congestion control mechanism, as with TCP, takes into account neither the evolution of the network in terms of Quality of Service and mobility nor the evolution of the applications. The first contribution of this thesis is a specialisation of TFRC congestion control to provide a QoS-aware transport protocol specifically designed to operate over QoS-enabled networks with bandwidth guarantee mechanisms. This protocol combines a QoS-aware congestion control, which takes into account network-level bandwidth reservations, with a fully ordered reliability mechanism to provide a transport service similar to TCP. As a result, we obtain guaranteed throughput at the application level where TCP fails. This protocol is the first transport protocol compliant with bandwidth-guaranteed networks. At the same time as the set of network services expands, new technologies have been proposed and deployed at the physical layer. These new technologies are mainly characterised by communication free of the constraints of wires and by the mobility of the end systems. Furthermore, these technologies are usually deployed on entities where CPU power and memory storage are limited. The second contribution of this thesis is therefore an adaptation of TFRC to these entities, accomplished through the proposition of a new sender-based version of TFRC. This version has been implemented and evaluated, and its numerous contributions and advantages compared to the usual TFRC version have been demonstrated. Finally, we propose an optimisation of current implementations of TFRC.
This optimisation first consists of an algorithm based on a numerical analysis of the equation used in TFRC and on Newton's method. We furthermore take a first step, with the introduction of a new framework for TFRC, towards better understanding TFRC behaviour and optimising the computation of the packet loss rate according to loss probability distributions.
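The equation in question is the TCP throughput equation that TFRC uses to compute its sending rate X as a function of the loss event rate p; inverting it numerically (for instance to initialise the loss history from an observed rate) is where a root-finding iteration such as Newton's method comes in. The sketch below uses bisection instead of Newton for robustness, since X(p) is strictly decreasing in p, but the idea of numerically inverting the equation is the same; parameter defaults are illustrative:

```python
from math import sqrt

def tfrc_rate(p, s=1460.0, rtt=0.1, b=1, t_rto=0.4):
    """TCP throughput equation as used by TFRC (RFC 5348 form):
    X(p) = s / (R*sqrt(2bp/3) + t_RTO * 3*sqrt(3bp/8) * p * (1 + 32 p^2)),
    with packet size s, round-trip time R, b packets acked per ACK."""
    return s / (rtt * sqrt(2 * b * p / 3)
                + t_rto * 3 * sqrt(3 * b * p / 8) * p * (1 + 32 * p ** 2))

def invert_rate(x_target, lo=1e-8, hi=1.0, tol=1e-12):
    """Solve X(p) = x_target for p by bisection, exploiting that X is
    strictly decreasing in p on (0, 1]."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if tfrc_rate(mid) > x_target:
            lo = mid          # rate too high -> loss rate must be larger
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2
```

A Newton iteration converges faster per step but needs the derivative of X and safeguarding; bisection trades speed for unconditional convergence on a bracketing interval.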
23

Prabhu, Balakrishna J. "Chaînes de Markov et processus de décision markoviens pour le contrôle de congestion et de puissance." Phd thesis, Université de Nice Sophia-Antipolis, 2005. http://tel.archives-ouvertes.fr/tel-00328111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis contains some applications of Markov chains and Markov decision processes to congestion and power control. First, we study the behaviour of the window size of a source using the MIMD algorithm. We show that the logarithm of the window size follows an additive stochastic recurrence and is a Markov chain. We also show that the throughput obtained by a source is proportional to the inverse of the packet loss probability. Next, we analyse the window-size process of a congestion control algorithm in continuous time, and provide conditions under which two algorithms exhibit the same behaviour. We then study the ratio process of two sources that use the MIMD algorithm and share the capacity of a bottleneck link. For heterogeneous sources, we show that the intensity of the packet loss process must exceed a constant, depending on the parameters of the algorithms, for the fairness index to improve. Next, we present a stochastic model for obtaining the joint distribution of the instantaneous number of packets and its moving average. We then study a discrete-time optimal control problem in which a mobile device wants to transmit packets while conserving its energy, and we show that the optimal policy is a threshold control. Finally, through simulations, we study the delay of TCP flows on the UMTS downlink under two different channel-switching policies.
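The MIMD behaviour studied here (window multiplied by a factor a on each success and by a factor b < 1 on each loss) means that log w receives an additive increment, log a or log b, at every step, which is exactly the additive stochastic recurrence the thesis analyses. A minimal simulation, with illustrative parameters, makes this visible:

```python
import math
import random

def mimd_trace(steps, a=1.01, b=0.7, p_loss=0.02, w0=1.0, seed=1):
    """Simulate an MIMD congestion window: w *= a on success, w *= b on
    loss. Returns the trajectory of log(w), which evolves by the additive
    increments log(a) and log(b) -- a random walk / additive recurrence."""
    rng = random.Random(seed)
    w, log_w = w0, [math.log(w0)]
    for _ in range(steps):
        w *= b if rng.random() < p_loss else a
        log_w.append(math.log(w))
    return log_w
```

Every consecutive difference in the returned trace is (up to floating-point noise) either log(1.01) or log(0.7), which is the Markov-chain-on-log-window view the thesis exploits.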
24

Almejalli, Khaled A. "Intelligent Real-Time Decision Support Systems for Road Traffic Management. Multi-agent based Fuzzy Neural Networks with a GA learning approach in managing control actions of road traffic centres." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4264.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The selection of the most appropriate traffic control actions to solve non-recurrent traffic congestion is a complex task which requires significant expert knowledge and experience. In this thesis we develop and investigate the application of an intelligent traffic control decision support system for road traffic management to assist the human operator in identifying the most suitable control actions to deal with non-recurrent and non-predictable traffic congestion in a real-time situation. Our intelligent system employs a Fuzzy Neural Networks (FNN) Tool that combines the capabilities of fuzzy reasoning in measuring imprecise and dynamic factors with the capabilities of neural networks in terms of learning processes. In this work we present an effective learning approach for the FNN-Tool, which consists of three stages: initializing the membership functions of both input and output variables by determining their centres and widths using self-organizing algorithms; employing an evolutionary Genetic Algorithm (GA) based learning method to identify the fuzzy rules; and tuning the derived structure and parameters using the back-propagation learning algorithm. We evaluate experimentally the performance and the prediction capability of this three-stage learning approach using well-known benchmark examples. Experimental results demonstrate the ability of the learning approach to identify all relevant fuzzy rules from the training data. A comparative analysis shows that the proposed learning approach has a higher degree of predictive capability than existing models. We also address the scalability issue of our intelligent traffic control decision support system by using a multi-agent based approach: the large network is divided into sub-networks, each of which has its own associated agent. Finally, our intelligent traffic control decision support system is applied to a number of road traffic case studies using the traffic network in Riyadh, Saudi Arabia.
The results obtained are promising and show that our intelligent traffic control decision support system can provide an effective support for real-time traffic control.
25

Girardet, Brunilde. "Trafic aérien : détermination optimale et globale des trajectoires d'avion en présence de vent." Thesis, Toulouse, INSA, 2014. http://www.theses.fr/2014ISAT0027/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In the context of the future Air Traffic Management system (ATM), one objective is to reduce the environmental impact of air traffic. With respect to this criterion, the "free-route" concept, introduced in the mid-1990s, is well suited to improve on today's airway-based ATM: aircraft would no longer be restricted to fly along airways and could follow fuel-optimal routes. The objective of this thesis is to introduce a novel pre-tactical trajectory planning methodology which aims at minimizing airspace congestion while taking weather conditions into account so as to also minimize fuel consumption. The development of the method was divided into two steps. The first step is dedicated to computing a time-optimal route for one aircraft, taking into account wind conditions and no-fly zone constraints. This optimization is based on an adaptation of the Ordered Upwind Method on the sphere. The second step introduces a hybrid algorithm, based on simulated annealing and on the deterministic algorithm developed in the first step, in order to minimize a trade-off between congestion and fuel consumption. The algorithm thus combines the ability to reach a globally-optimal solution with a local-search procedure that speeds up convergence. Numerical simulations with wind forecasts on European traffic give encouraging results, showing that the global method is both viable and beneficial in terms of total flight time and overall congestion, and hence conflict reduction.
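The hybrid optimisation described in the second step rests on a standard simulated-annealing skeleton: accept worse solutions with probability exp(-delta/T) under a cooling schedule, with a local-search neighbour move. A generic, hedged Python sketch follows; the cost function, cooling rate and neighbour move are placeholders, not the thesis's actual congestion/fuel trade-off.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.95,
                        iters=2000, seed=0):
    """Generic simulated annealing: always accept improving moves,
    accept worsening moves with probability exp(-delta/T), and cool
    the temperature geometrically (T <- alpha * T)."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best_x, best_c = x, c
    t = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy < c or rng.random() < math.exp(-(cy - c) / t):
            x, c = y, cy
            if c < best_c:
                best_x, best_c = x, c
        t *= alpha
    return best_x, best_c
```

At high temperature the walk explores globally; as T shrinks it behaves like the deterministic local search, which is the convergence-acceleration idea the abstract mentions.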
26

Sun, Bin, and Wipawat Uppatumwichian. "A Study of Factors Which Influence QoD of HTTP Video Streaming Based on Adobe Flash Technology." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2488.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Recently, there has been a significant rise in Hyper-Text Transfer Protocol (HTTP) video streaming usage worldwide. However, knowledge of the performance of HTTP video streaming is still limited, especially regarding the factors which affect video quality, because HTTP video streaming has different characteristics from other video streaming systems. In this thesis, we show how the delivered quality of a Flash video playback is affected by different factors from diverse layers of the video delivery system, including the congestion control algorithm, delay variation, playout buffer length, video bitrate and so on. We introduce Quality of Delivery Degradation (QoDD), then use it to measure how much the Quality of Delivery (QoD) is degraded in terms of QoDD. The study is conducted in a dedicated controlled environment, where we could alter the influential factors and then measure what happens. After that, we use statistical methods to analyze the data and find the relationships between influential factors and quality of video delivery, which are expressed by mathematical models. The results show that the status and choices of factors have a significant impact on the QoD. By proper control of the factors, the quality of delivery could be improved. The improvements are approximately 24% by TCP memory size, 63% by congestion control algorithm, 30% by delay variation, 97% by delay when considering delay variation, 5% by loss and 92% by video bitrate.
27

Jourjon, Guillaume. "Toward a versatile transport protocol." Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2008. http://tel.archives-ouvertes.fr/tel-00309959.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The work presented in this thesis aims to improve the transport layer of the OSI network architecture. The transport layer is nowadays dominated by the use of TCP and its congestion control. Recently, new congestion control mechanisms have been proposed; among them, TCP Friendly Rate Control (TFRC) appears to be the most mature. However, like TCP, this mechanism takes into account neither the evolution of networks nor the new needs of applications. The first contribution of this thesis is a specialisation of TFRC in order to obtain a Quality of Service (QoS)-aware transport protocol specifically designed for QoS networks offering a bandwidth guarantee. This protocol combines a QoS-oriented congestion control mechanism, which takes into account the bandwidth reservation at the network level, with a full reliability service in order to offer a service similar to TCP. The result of this composition is the first transport protocol suited to networks with bandwidth guarantees. Alongside this expansion of services at the network level, new technologies have been proposed and deployed at the physical level. These new technologies are characterised by their freedom from wired media and by the mobility of end systems; moreover, they are generally deployed on devices whose computing power and available memory are lower than those of personal computers. The second contribution of this thesis is an adaptation of TFRC to these devices through a lightweight version of the receiver. This version has been implemented and evaluated quantitatively, and its numerous advantages and contributions over TFRC have been demonstrated. Finally, we propose an optimisation of current TFRC implementations. 
This optimisation first proposes a new algorithm for receiver initialisation, based on the use of Newton's algorithm. We also introduce a tool that allows us to study in more detail how the loss rate is computed on the receiver side.
28

Richard, Olivier. "Régulation court terme du trafic aérien et optimisation combinatoire : application de la méthode de génération de colonnes." Phd thesis, Grenoble INPG, 2007. http://tel.archives-ouvertes.fr/tel-00580414.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This work addresses a combinatorial problem arising in short-term (or dynamic) air traffic flow management. For each controllable flight, we seek a feasible 4-dimensional trajectory that respects sector capacity constraints while minimising the total cost of the chosen trajectories. The problem is modelled as a mixed-integer linear program. An ad hoc representation of the airspace system supports the fine-grained modelling of trajectories. A global solution process based on column generation coupled with branch-and-bound is detailed. Since the columns of the problem represent trajectories, column generation through the pricing subproblem amounts to finding three-dimensional paths in a continuous, dynamic network. A specific algorithm based on label-setting shortest-path algorithms and dynamic programming is developed and tested. The whole method is evaluated on real instances representing the airspace managed by the CFMU, the European air traffic flow management organisation. The results, obtained within computation times compatible with the operational context, validate the method.
29

Yao, Chang-Li, and 姚長利. "Performance Comparison of TCP Congestion Control Algorithms." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/32340586056874661133.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
National Cheng Kung University
Institute of Computer Science and Information Engineering
88
In this thesis, we present the evolution of TCP congestion control algorithms and discuss the key ideas of each algorithm, as well as the motivations behind these modifications. According to the strategy for handling packet loss, TCP algorithms are classified into loss-recovery and loss-avoidance methods. Reno, which is widely used nowadays, and Vegas are simulated first, and an unfairness problem when Reno and Vegas coexist is revealed. This problem prevents users from adopting Vegas, which has better performance than Reno. RED routers are discussed as a way to relieve this problem. Further simulations are conducted to compare the performance of the TCP algorithms proposed after Reno, and we rank the algorithms according to the simulation results. Pseudo-Rate, one of the loss-avoidance algorithms, achieves the best performance among all the TCP algorithms.
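The loss-recovery versus loss-avoidance distinction drawn in this abstract can be illustrated with simplified per-RTT window updates for Reno and Vegas. The alpha/beta thresholds and unit-step adjustments below are textbook simplifications, not the exact algorithms simulated in the thesis.

```python
def reno_update(cwnd, loss):
    """One congestion-avoidance RTT of TCP Reno (loss-recovery style):
    add one segment per RTT, halve the window on a loss signal."""
    if loss:
        return max(cwnd / 2.0, 1.0)
    return cwnd + 1.0

def vegas_update(cwnd, base_rtt, rtt, alpha=1.0, beta=3.0):
    """One RTT of TCP Vegas (loss-avoidance style): estimate the number
    of packets queued as diff = cwnd * (1 - base_rtt/rtt) and steer it
    into the [alpha, beta] band before any loss occurs."""
    diff = cwnd * (1.0 - base_rtt / rtt)
    if diff < alpha:
        return cwnd + 1.0  # too few packets queued: increase
    if diff > beta:
        return cwnd - 1.0  # queue building up: back off
    return cwnd
```

Reno only reacts after a loss, while Vegas reacts to rising RTT before loss, which is the root of the unfairness when the two share a drop-tail bottleneck.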
30

Abrantes, Filipe Lameiro. "Explicit congestion control algorithms for time-varying capacity media." Doctoral thesis, 2008. http://hdl.handle.net/10216/58447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Araújo, João Taveira. "Implementation and performance evaluation of explicit congestion control algorithms." Master's thesis, 2008. http://hdl.handle.net/10216/58622.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Internship carried out at INESC-Porto, supervised by Eng. Filipe Lameiro Abrantes
Integrated master's thesis. Electrical and Computer Engineering. Faculty of Engineering, University of Porto. 2008
32

Araújo, João Taveira. "Implementation and performance evaluation of explicit congestion control algorithms." Dissertação, 2008. http://hdl.handle.net/10216/58622.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Internship carried out at INESC-Porto, supervised by Eng. Filipe Lameiro Abrantes
Integrated master's thesis. Electrical and Computer Engineering. Faculty of Engineering, University of Porto. 2008
33

Abrantes, Filipe Lameiro. "Explicit congestion control algorithms for time-varying capacity media." Tese, 2008. http://hdl.handle.net/10216/58447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Kayali, Mahmoud. "Interoperability among rate-based congestion control algorithms for ABR service in ATM networks." Thesis, 1997. http://hdl.handle.net/2429/6406.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The ATM Forum has recently adopted rate-based schemes as the standard congestion control mechanisms for the Available Bit Rate (ABR) service. Two types of rate-based schemes are supported by the ATM Forum specification, namely binary feedback and explicit rate. Explicit rate feedback schemes can themselves be divided into exact and approximate. Since the implementation of one of these schemes in ATM switches is left to switch manufacturers, these types are expected to operate on the same network. The compatibility of switches each running a different scheme was considered in the ATM Forum specification. However, since each type of switch has its own merits and demerits, the interoperability of these switch types and its impact on the performance of the network deserves extensive study. This thesis investigates two types of interoperability in multi-vendor (heterogeneous) ATM networks: the interoperability between binary feedback and explicit rate schemes, and the interoperability among different explicit rate schemes. For the first type, we look at the steady-state performance of the ABR service in networks consisting of both binary feedback and explicit rate switches. In order to find the best locations for each switch type, several cases of switch type placement are considered. Moreover, the impact of ABR source parameter settings on the performance of heterogeneous networks is studied. For the second type (i.e. the interoperability among explicit rate schemes), we investigate potential unfairness problems resulting from situations whereby different ABR sources receive network feedback from different subsets of switches along a given network path. Three types of unfairness problems that arise in such networks are identified. One type of unfairness appears while the sources are increasing their rates, and the second type appears while they are decreasing their rates. 
The third type is a new cause of unfairness, generated by the presence of highly bursty Variable Bit Rate (VBR) traffic, which can arise not only in steady-state periods but also in transient-state periods on a network link. In addition to identifying the causes of unfairness, our results provide a quantitative evaluation of the level of unfairness as a function of various network and source parameters.
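The explicit-rate mechanism discussed above can be sketched abstractly: each switch along the path stamps the resource-management (RM) cell's explicit-rate field down to its own fair share, so the value returned to the source is the path bottleneck rate. A simplified Python illustration follows; the equal-share fairness rule is an assumption for illustration (real approximate schemes such as ERICA compute the share adaptively).

```python
def fair_share(link_capacity, n_active_vcs, utilization_target=0.95):
    """Crude per-VC fair share: an equal split of the target capacity
    among the virtual circuits currently active on the link."""
    return utilization_target * link_capacity / max(n_active_vcs, 1)

def er_along_path(source_rate, switch_shares):
    """Reduce the RM cell's ER field to min(ER, local fair share) at
    each switch; the source receives the path bottleneck rate."""
    er = source_rate
    for share in switch_shares:
        er = min(er, share)
    return er
```

When some switches on the path are binary-feedback only, they never lower the ER field, which is one way the feedback asymmetries studied in this thesis arise.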
35

Jagannathan, S. Ravi. "Black box modelling of congestion control protocols for computer networks." Thesis, 2009. http://handle.uws.edu.au:8081/1959.7/531210.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A number of putative concepts, terminology and techniques, as pertinent to numerous schools of thought, are presented, investigated and critiqued. Going forward, we narrow our focus of consideration to some basic issues and trends in the management of Internet congestion control, as well as many (now) traditional attempts to address these problems. Key formulations are laid out, which set up the problem at hand, and we raise many more fundamental questions. Key trends observable in the literature are discussed. This provides a relatively smooth introduction to the subject. Reference is then made to Sierra, a novel “Black Box” congestion control algorithm/protocol, which itself is the subject of serious ongoing refinement, having already been baselined in five research papers on the subject. The “Black Box” terminology was in essence conceived many years ago by van Jacobson, and is revived in this thesis. A framework for the comparative, stochastic (theoretical) analysis of various congestion control algorithms/protocols is taken up and investigated. From a theoretical, quantitative perspective, it is shown that Sierra offers relatively superior throughput related performance levels. Finally, we take up the matter of comparative simulation of Sierra, vis-à-vis its “competitors”. For this project, the popular network simulator (tool) OPNET is taken up and deployed. We present (here) and analyze the results from numerous simulation experiments. The outcome is that Sierra enhances throughput, as against other more traditional Black Box algorithms (Vegas, Reno, New Reno, etc.).
36

Sahasrabudhe, Nachiket S. "Joint Congestion Control, Routing And Distributed Link Scheduling In Power Constrained Wireless Mesh Networks." Thesis, 2008. https://etd.iisc.ac.in/handle/2005/798.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We study the problem of joint congestion control, routing and MAC layer scheduling in multi-hop wireless mesh networks, where the nodes in the network are subject to energy expenditure rate constraints. As the wireless medium does not allow all links to be active all the time, only a subset of the given links can be active simultaneously. We model the inter-link interference using the link contention graph. All the nodes in the network are power-constrained, and we model this constraint using an energy expenditure rate matrix. We then formulate the problem as a network utility maximization (NUM) problem and notice that it is a convex optimization problem with affine constraints. We apply duality theory and decompose the problem into two sub-problems, namely a network layer congestion control and routing problem, and a MAC layer scheduling problem. The source adjusts its rate based on the cost of the least-cost path to the destination, where the cost of the path includes not only the prices of the links in it but also the prices associated with the nodes on the path. The MAC layer scheduling of the links is carried out based on the prices of the links: the optimal scheduler selects the set of non-interfering links for which the sum of link prices is maximum. We study the effects of the nodes' energy expenditure rate constraints on the maximum possible network utility. It turns out that the dominant of the two constraints, namely the link capacity constraint and the node energy expenditure rate constraint, affects the network utility most. We also note that the energy expenditure rate constraints do not affect the nature of the optimal link scheduling problem. Following this fact, we study the problem of distributed link scheduling. Optimal scheduling requires selecting an independent set of maximum aggregate price, but this problem is known to be NP-hard. 
We first show that as long as the scheduling policy selects a set of non-interfering links, it cannot deviate unboundedly from the optimal solution of the network utility maximization problem. We then evaluate a simple greedy scheduling algorithm. Analytical bounds on performance are provided, and simulations indicate that the greedy heuristic performs well in practice.
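The greedy heuristic evaluated above can be sketched as a maximal-weight independent-set selection on the link contention graph: repeatedly schedule the highest-price unblocked link and block its interfering neighbours. The dictionary-based interface below is an illustrative assumption about how prices and conflicts are represented, not the thesis's implementation (the exact maximum-aggregate-price version is NP-hard).

```python
def greedy_schedule(prices, conflicts):
    """Greedy price-weighted link scheduling: scan links in decreasing
    price order and activate each link whose neighbours in the
    contention graph have not already been activated."""
    scheduled = set()
    blocked = set()
    for link in sorted(prices, key=prices.get, reverse=True):
        if link in blocked:
            continue
        scheduled.add(link)
        blocked.add(link)
        blocked.update(conflicts.get(link, ()))  # neighbours interfere
    return scheduled
```

The result is always an independent set in the contention graph, which is exactly the property the performance bound above requires.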
37

Sahasrabudhe, Nachiket S. "Joint Congestion Control, Routing And Distributed Link Scheduling In Power Constrained Wireless Mesh Networks." Thesis, 2008. http://hdl.handle.net/2005/798.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We study the problem of joint congestion control, routing and MAC layer scheduling in multi-hop wireless mesh networks, where the nodes in the network are subject to energy expenditure rate constraints. As the wireless medium does not allow all links to be active all the time, only a subset of the given links can be active simultaneously. We model the inter-link interference using the link contention graph. All the nodes in the network are power-constrained, and we model this constraint using an energy expenditure rate matrix. We then formulate the problem as a network utility maximization (NUM) problem and notice that it is a convex optimization problem with affine constraints. We apply duality theory and decompose the problem into two sub-problems, namely a network layer congestion control and routing problem, and a MAC layer scheduling problem. The source adjusts its rate based on the cost of the least-cost path to the destination, where the cost of the path includes not only the prices of the links in it but also the prices associated with the nodes on the path. The MAC layer scheduling of the links is carried out based on the prices of the links: the optimal scheduler selects the set of non-interfering links for which the sum of link prices is maximum. We study the effects of the nodes' energy expenditure rate constraints on the maximum possible network utility. It turns out that the dominant of the two constraints, namely the link capacity constraint and the node energy expenditure rate constraint, affects the network utility most. We also note that the energy expenditure rate constraints do not affect the nature of the optimal link scheduling problem. Following this fact, we study the problem of distributed link scheduling. Optimal scheduling requires selecting an independent set of maximum aggregate price, but this problem is known to be NP-hard. 
We first show that as long as the scheduling policy selects a set of non-interfering links, it cannot deviate unboundedly from the optimal solution of the network utility maximization problem. We then evaluate a simple greedy scheduling algorithm. Analytical bounds on performance are provided, and simulations indicate that the greedy heuristic performs well in practice.
38

Yesuratnam, G. "Development Of Algorithms For Security Oriented Power System Operation." Thesis, 2007. https://etd.iisc.ac.in/handle/2005/573.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The objective of an Energy Control Center (ECC) is to ensure secure and economic operation of the power system. The challenge of optimizing power system operation, while maintaining system security and quality of power supply to customers, is increasing. Growing demand without matching expansion of generation and transmission facilities, and more tightly interconnected power systems, contribute to the increased complexity of system operation. Rising costs due to inflation and increased environmental concerns have caused transmission as well as generation systems to be operated closer to design limits, with smaller safety margins and hence greater exposure to unsatisfactory operating conditions following a disturbance. Investigations of recent blackouts indicate that the root cause of most of these major power system disturbances is voltage collapse. Information gathered and preliminary analysis from the blackout incident in North America on 14th August 2003 point to voltage instability due to some unexpected contingency. In this incident, reports indicate that approximately 50 million people were affected by interruption of supply for more than 15 hours. Most such incidents are related to heavily stressed systems where large amounts of real and reactive power are transported over long transmission lines while appropriate real and reactive power resources are not available to maintain normal system conditions. Hence, the problem of voltage stability and voltage collapse has become a major concern in power system planning and operation. Reliable operation of large-scale electric power networks requires that system voltages and currents stay within design limits; operation beyond those limits can lead to equipment failures and blackouts. In the last few decades, the problem of reactive power control for improving the economy and security of power system operation has received much attention. 
Generally, the load bus voltages can be maintained within their permissible limits by reallocating reactive power generation in the system. This can be achieved by adjusting transformer taps, generator voltages, and switchable VAR sources. In addition, system losses can be minimized via redistribution of reactive power in the system. Therefore, reactive power dispatch can be optimized to improve the voltage profile and minimize system losses as well. Instability in power systems could be relieved, or at least minimized, with the help of recently developed devices called Flexible AC Transmission System (FACTS) controllers. The use of FACTS controllers in power transmission systems has led to many applications of these controllers, not only to improve the stability of existing power network resources but also to provide operating flexibility to the power system. In the past, transmission systems were owned by regulated, vertically integrated utility companies; they were designed and operated so that conditions in close proximity to security boundaries were not frequently encountered. However, in the new open-access environment, operating conditions tend to be much closer to security boundaries, as transmission use is increasing in sudden and unpredictable directions. Transmission unbundling, coupled with other regulatory requirements, has made new transmission facility construction more difficult. In fact, there are numerous technical challenges emerging from the new market structure, and an acute need for research, especially in the areas of voltage security, reactive power support and congestion management. In the last few decades more attention has been paid to optimal reactive power dispatch. Since the problem of reactive power optimization is non-linear in nature, nonlinear programming methods have been used to solve it. 
These methods work quite well for small power systems but may develop convergence problems as system size increases. Linear programming techniques with iterative schemes are certainly the most promising tools for solving these types of problems. The thesis presents efficient algorithms with different objectives for reactive power optimization. The approach adopted is an iterative scheme with successive power-flow analysis using the decoupled technique, and formulation and solution of the linear-programming problem with only upper-bound limits on the state variables. Further, the thesis presents a critical analysis of the following three objectives, viz.:
• Minimization of the sum of the squares of the voltage deviations (Vdesired)
• Minimization of the sum of the squares of the voltage stability L indices (Vstability)
• Minimization of real power losses (Ploss)
Voltage stability problems normally occur in heavily stressed systems. While the disturbance leading to voltage collapse may be initiated by a variety of causes, the underlying problem is an inherent weakness in the power system. The factors contributing to voltage collapse are the generator reactive power/voltage control limits, load characteristics, characteristics of reactive compensation devices, and the action of voltage control devices such as transformer On Load Tap Changers (OLTCs). The power system experiences abnormal operating conditions following a disturbance, and subsequently a reduction in the EHV-level voltages at load centers will be reflected on the distribution system. The OLTCs of distribution transformers would restore distribution voltages. With each tap change operation, the MW and MVAR loading on the EHV lines would increase, thereby causing greater voltage drops at EHV levels and increasing the losses. 
As a result, with each tap changing operation, the reactive output of generators throughout the system would increase gradually and the generators may hit their reactive power capability limits, causing voltage instability problems. Thus, the operation of certain OLTCs has a significant influence on voltage instability under some operating conditions. These transformers can be switched to manual operation to avoid possible voltage instability during heavy load conditions. Tap blocking, based on local measurement of the high-voltage side of load tap changers, is a common practice of power utilities to prevent voltage collapse. The great advantage of this method is that it can be easily implemented, but it does not guarantee voltage stability. So a proper approach for the identification of critical OLTCs based on voltage stability criteria is essential to guide the operator in the ECC; such an approach is proposed in this thesis. The thesis discusses the effect of OLTCs under different objectives of reactive power dispatch and proposes a technique to identify critical OLTCs based on voltage stability criteria. The fast development of power electronics based on new and powerful semiconductor devices has led to innovative technologies, such as High Voltage DC transmission (HVDC) and Flexible AC Transmission System (FACTS), which can be applied in transmission and distribution systems. The technical and economical benefits of these technologies represent an alternative to conventional AC solutions. Deregulation in the power industry and the opening of the market for delivery of cheaper energy to customers are creating additional requirements for the operation of power systems. HVDC and FACTS offer major advantages in meeting these requirements. 
A method for coordinated optimum allocation of reactive power in AC/DC power systems, including the FACTS controller UPFC, with the objective of minimizing the sum of the squares of the voltage deviations of all the load buses, has been proposed in this thesis. The study results show that under contingency conditions, the presence of FACTS controllers has a considerable impact on overall system voltage stability and also on power loss minimization. As power systems grow in size and interconnections, their complexity increases. For secure operation and control of power systems under normal and contingency conditions, it is essential to provide solutions in real time to the operator in the ECC. For real-time control of power systems, the conventional algorithmic software available in the ECC is found to be inadequate, as it is computationally very intensive and not organized to guide the operator during contingency conditions. Artificial Intelligence (AI) techniques such as expert systems, neural networks and fuzzy systems are emerging decision support tools which give fast, though approximate but acceptable, solutions in real time, as they mostly use symbolic processing with a minimum number of numeric computations. The solution thus obtained can be used as a guide by the operator in the ECC for power system control. Optimum real and reactive power dispatch play an important role in the day-to-day operation of power systems. Existing conventional Optimal Power Flow (OPF) methods use all of the controls in solving the optimization problem, but operators cannot move so many control devices within a reasonable time. 
In this context an algorithm using fuzzy-expert approach has been proposed in this thesis to curtail the number of control actions, in order to realize real time objectives in voltage/reactive power control. The technique is formulated using membership functions of linguistic variables such as voltage deviations at all the load buses and the voltage deviation sensitivity to control variables. Voltage deviations and controlling variables are translated into fuzzy set notations to formulate the relation between voltage deviations and controlling ability of controlling devices. Control variables considered are switchable VAR compensators, OLTC transformers and generator excitations. A fuzzy rule based system is formed to select the critical controllers, their movement direction and step size. Results show that the proposed approach is effective for improving voltage security to acceptable levels with fewer numbers of controllers. So, under emergency conditions the operator need not move all the controllers to different settings and the solution obtained is fast with significant speedups. Hence, the proposed method has the potential to be integrated for on-line implementation in energy management systems to achieve the goals of secure power system operation. In a deregulated electricity market, it may not be always possible to dispatch all of the contracted power transactions due to congestion of the transmission corridors. System operators try to manage congestion, which otherwise increases the cost of the electricity and also threatens the system security and stability. An approach for alleviation of network over loads in the day-to-day operation of power systems under deregulated environment is presented in this thesis. The control used for overload alleviation is real power generation rescheduling based on Relative Electrical Distance (RED) concept. The method estimates the relative location of load nodes with respect to the generator nodes. 
The contribution of each generator for a particular over loaded line is first identified , then based on RED concept the desired proportions of generations for the desired overload relieving is obtained, so that the system will have minimum transmission losses and more stability margins with respect to voltage profiles, bus angles and better transmission tariff. The results obtained reveal that the proposed method is not only effective for overload relieving but also reduces the system power loss and improves the voltage stability margin. The presented concepts are better suited for finding the utilization of resources generation/load and network by various players involved in the day-to-day operation of the system under normal and contingency conditions. This will help in finding the contribution by various players involved in the congestion management and the deviations can be used for proper tariff purposes. Suitable computer programs have been developed based on the algorithms presented in various chapters and thoroughly tested. Studies have been carried out on various equivalent systems of practical real life Indian power networks and also on some standard IEEE systems under simulated conditions. Results obtained on a modified IEEE 30 bus system, IEEE 39 bus New England system and four Indian power networks of EHV 24 bus real life equivalent power network, an equivalent of 36 bus EHV Indian western grid, Uttar Pradesh 96 bus AC/DC system and 205 Bus real life interconnected grid system of Indian southern region are presented for illustration purposes.
39

Yesuratnam, G. "Development Of Algorithms For Security Oriented Power System Operation." Thesis, 2007. http://hdl.handle.net/2005/573.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The objective of an Energy Control Center (ECC) is to ensure secure and economic operation of the power system. The challenge of optimizing power system operation, while maintaining system security and quality of power supply to customers, is increasing. Growing demand without matching expansion of generation and transmission facilities, and more tightly interconnected power systems, contribute to the increased complexity of system operation. Rising costs due to inflation and increased environmental concerns have forced transmission as well as generation systems to be operated closer to design limits, with smaller safety margins and hence greater exposure to unsatisfactory operating conditions following a disturbance. Investigations of recent blackouts indicate that the root cause of most of these major power system disturbances is voltage collapse. Information gathered and preliminary analysis from the most recent blackout incident in North America, on 14th August 2003, point to voltage instability due to an unexpected contingency. In this incident, reports indicate that approximately 50 million people were affected by interruption of continuous supply for more than 15 hours. Most of the incidents are related to heavily stressed systems where large amounts of real and reactive power are transported over long transmission lines while appropriate real and reactive power resources are not available to maintain normal system conditions. Hence, the problem of voltage stability and voltage collapse has become a major concern in power system planning and operation. Reliable operation of large-scale electric power networks requires that system voltages and currents stay within design limits. Operation beyond those limits can lead to equipment failures and blackouts. In the last few decades, the problem of reactive power control for improving economy and security of power system operation has received much attention. 
Generally, the load bus voltages can be maintained within their permissible limits by reallocating reactive power generations in the system. This can be achieved by adjusting transformer taps, generator voltages, and switchable VAr sources. In addition, the system losses can be minimized via redistribution of reactive power in the system. Therefore, the reactive power dispatch problem can be optimized to improve the voltage profile and minimize the system losses as well. Instability in a power system could be relieved, or at least minimized, with the help of recently developed devices called Flexible AC Transmission System (FACTS) controllers. The use of FACTS controllers in power transmission systems has led to many applications of these controllers, not only to improve the stability of the existing power network resources but also to provide operating flexibility to the power system. In the past, transmission systems were owned by regulated, vertically integrated utility companies. They have been designed and operated so that conditions in close proximity to security boundaries are not frequently encountered. However, in the new open access environment, operating conditions tend to be much closer to security boundaries, as transmission use is increasing in sudden and unpredictable directions. Transmission unbundling, coupled with other regulatory requirements, has made new transmission facility construction more difficult. In fact, there are numerous technical challenges emerging from the new market structure. There is an acute need for research work in the new market structure, especially in the areas of voltage security, reactive power support and congestion management. In the last few decades more attention has been paid to optimal reactive power dispatch. Since the problem of reactive power optimization is non-linear in nature, nonlinear programming methods have been used to solve it. 
These methods work quite well for small power systems but may develop convergence problems as system size increases. Linear programming techniques with iterative schemes are certainly the most promising tools for solving these types of problems. The thesis presents efficient algorithms with different objectives for reactive power optimization. The approach adopted is an iterative scheme with successive power-flow analysis using a decoupled technique, and formulation and solution of the linear-programming problem with only upper-bound limits on the state variables. Further, the thesis presents a critical analysis of the three following objectives:
• Minimization of the sum of the squares of the voltage deviations (Vdesired)
• Minimization of the sum of the squares of the voltage stability L-indices (Vstability)
• Minimization of real power losses (Ploss)
Voltage stability problems normally occur in heavily stressed systems. While the disturbance leading to voltage collapse may be initiated by a variety of causes, the underlying problem is an inherent weakness in the power system. The factors contributing to voltage collapse are the generator reactive power/voltage control limits, load characteristics, characteristics of reactive compensation devices, and the action of voltage control devices such as transformer On Load Tap Changers (OLTCs). A power system experiences abnormal operating conditions following a disturbance, and subsequently a reduction in the EHV-level voltages at load centers will be reflected on the distribution system. The OLTCs of distribution transformers would restore distribution voltages. With each tap change operation, the MW and MVAr loading on the EHV lines would increase, thereby causing greater voltage drops at EHV levels and increasing the losses. 
As a result, with each tap changing operation, the reactive output of generators throughout the system would increase gradually, and the generators may hit their reactive power capability limits, causing voltage instability problems. Thus, the operation of certain OLTCs has a significant influence on voltage instability under some operating conditions. These transformers can be made manual to avoid possible voltage instability due to their operation during heavy load conditions. Tap blocking, based on local measurement of the high voltage side of load tap changers, is a common practice of power utilities to prevent voltage collapse. The great advantage of this method is that it can be easily implemented, but it does not guarantee voltage stability. So a proper approach for identification of critical OLTCs based on voltage stability criteria is essential to guide the operator in the ECC, and such an approach has been proposed in this thesis. The thesis discusses the effect of OLTCs with different objectives of reactive power dispatch and proposes a technique to identify critical OLTCs based on voltage stability criteria. The fast development of power electronics based on new and powerful semiconductor devices has led to innovative technologies, such as High Voltage DC transmission (HVDC) and Flexible AC Transmission Systems (FACTS), which can be applied in transmission and distribution systems. The technical and economical benefits of these technologies represent an alternative for application in AC systems. Deregulation in the power industry and opening of the market for delivery of cheaper energy to the customers is creating additional requirements for the operation of power systems. HVDC and FACTS offer major advantages in meeting these requirements. 
A method for co-ordinated optimum allocation of reactive power in AC/DC power systems, including the FACTS controller UPFC, with the objective of minimizing the sum of the squares of the voltage deviations of all the load buses, has been proposed in this thesis. The study results show that under contingency conditions, the presence of FACTS controllers has considerable impact on overall system voltage stability and also on power loss minimization. As power systems grow in their size and interconnections, their complexity increases. For secure operation and control of power systems under normal and contingency conditions, it is essential to provide solutions in real time to the operator in the ECC. For real-time control of power systems, the conventional algorithmic software available in the ECC is found to be inadequate, as it is computationally very intensive and not organized to guide the operator during contingency conditions. Artificial Intelligence (AI) techniques such as expert systems, neural networks, and fuzzy systems are emerging decision support tools which give fast, approximate but acceptable solutions in real time, as they mostly use symbolic processing with a minimum number of numeric computations. The solution thus obtained can be used as a guide by the operator in the ECC for power system control. Optimum real and reactive power dispatch play an important role in the day-to-day operation of power systems. Existing conventional Optimal Power Flow (OPF) methods use all of the controls in solving the optimization problem. The operators cannot move so many control devices within a reasonable time. 
In this context, an algorithm using a fuzzy-expert approach has been proposed in this thesis to curtail the number of control actions, in order to realize real-time objectives in voltage/reactive power control. The technique is formulated using membership functions of linguistic variables such as the voltage deviations at all the load buses and the voltage deviation sensitivity to control variables. Voltage deviations and controlling variables are translated into fuzzy set notation to formulate the relation between voltage deviations and the controlling ability of controlling devices. The control variables considered are switchable VAr compensators, OLTC transformers and generator excitations. A fuzzy rule based system is formed to select the critical controllers, their movement direction and step size. Results show that the proposed approach is effective for improving voltage security to acceptable levels with fewer controllers. So, under emergency conditions the operator need not move all the controllers to different settings, and the solution is obtained quickly, with significant speedups. Hence, the proposed method has the potential to be integrated for on-line implementation in energy management systems to achieve the goals of secure power system operation. In a deregulated electricity market, it may not always be possible to dispatch all of the contracted power transactions due to congestion of the transmission corridors. System operators try to manage congestion, which otherwise increases the cost of electricity and also threatens system security and stability. An approach for alleviation of network overloads in the day-to-day operation of power systems under a deregulated environment is presented in this thesis. The control used for overload alleviation is real power generation rescheduling based on the Relative Electrical Distance (RED) concept. The method estimates the relative location of load nodes with respect to the generator nodes. 
The contribution of each generator to a particular overloaded line is first identified; then, based on the RED concept, the desired proportions of generation for the desired overload relieving are obtained, so that the system will have minimum transmission losses, greater stability margins with respect to voltage profiles and bus angles, and better transmission tariffs. The results obtained reveal that the proposed method is not only effective for overload relieving but also reduces the system power loss and improves the voltage stability margin. The presented concepts are well suited for finding the utilization of resources (generation/load and network) by the various players involved in the day-to-day operation of the system under normal and contingency conditions. This will help in finding the contribution of the various players involved in congestion management, and the deviations can be used for proper tariff purposes. Suitable computer programs have been developed based on the algorithms presented in the various chapters and thoroughly tested. Studies have been carried out on various equivalent systems of practical real-life Indian power networks and also on some standard IEEE systems under simulated conditions. Results are presented for illustration purposes on a modified IEEE 30-bus system, the IEEE 39-bus New England system, and four Indian power networks: an EHV 24-bus real-life equivalent power network, an equivalent 36-bus EHV Indian western grid, the Uttar Pradesh 96-bus AC/DC system, and a 205-bus real-life interconnected grid system of the Indian southern region.
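The voltage-deviation objective that recurs throughout this abstract is a simple quadratic form; a minimal sketch (the function name and the per-unit bus data are illustrative assumptions, not from the thesis):

```python
def voltage_deviation_objective(voltages, desired):
    """Sum of squared load-bus voltage deviations: the 'Vdesired'
    objective that the reactive power dispatch seeks to minimize.
    All values are per-unit; the bus data are illustrative."""
    return sum((v - d) ** 2 for v, d in zip(voltages, desired))

# Two load buses at 1.02 and 0.97 p.u., both with a 1.00 p.u. target.
score = voltage_deviation_objective([1.02, 0.97], [1.00, 1.00])
```

The dispatch algorithms described above would then adjust taps, generator voltages, and VAr sources to drive this quantity down subject to the network constraints.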
40

Lin, Tien-Huamr, and 林添華. "A Group-based Congestion Control Algorithm for Active Queue Management." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/06685523336186578602.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
National Chung Hsing University
Department of Computer Science
94
In a congested network, an aggressive node could acquire more bandwidth for itself by intentionally increasing its number of flows. This leads to a serious fairness problem in bandwidth allocation. The problem can be solved by a technique known as the virtual queue. In the network, each node employs a virtual queue with an initial drop probability. The probability is dynamically adjusted so that each node acquires a fair share of the bandwidth. However, the virtual-queue technique has a major drawback: it becomes considerably difficult and impractical to manage a large number of virtual queues as the number of nodes explodes. In this thesis, we propose an efficient method to deal with this problem. Our idea is to sort nodes into groups on the basis of flow numbers. Nodes with similar flow numbers are placed in a group, and they share a single virtual queue for bandwidth allocation. We verify the effectiveness of our proposed method through a simulation tool, ns-2. By calculating the fairness index, we are able to evaluate the performance of our method operating in various conditions. The simulation results show that the proposed method can effectively reduce the number of virtual queues in use. In addition, it guarantees that all the nodes receive a fair treatment of bandwidth allocation.
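The fairness evaluation and flow-count grouping this abstract describes can be sketched as follows; Jain's fairness index is the standard metric for such studies, while the bucket size and function names below are illustrative assumptions, not the thesis's actual design:

```python
def jain_fairness(throughputs):
    """Jain's fairness index: equals 1.0 for a perfectly fair
    allocation and approaches 1/n when one node takes everything."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

def group_by_flow_count(flow_counts, bucket_size=4):
    """Place nodes with similar flow counts in the same group; each
    group would then share one virtual queue (bucket_size is an
    illustrative assumption)."""
    groups = {}
    for node, flows in flow_counts.items():
        groups.setdefault(flows // bucket_size, []).append(node)
    return groups
```

Grouping reduces the number of virtual queues from one per node to one per bucket, which is the scalability gain the abstract claims.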
41

Lin, Wei-Chung, and 林偉正. "An Improved Random Early Detection (RED) Algorithm for Congestion Control." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/16896950377494718155.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
National Chung Hsing University
Department of Computer Science
95
Many proposals have been adopted for controlling congestion in routers, including Random Early Detection (RED) and Drop-tail, and have been shown to improve the loss rate, throughput, fairness, etc. of the network. Although the RED algorithm is designed for TCP as an active queue management scheme, we found that when it comes to dropping packets, it treats packets equally, ignoring the effect of packet size. This results in a higher loss rate and lower throughput for smaller packets. In this thesis, we propose to improve the original RED algorithm by differentiating packet sizes, and devise the RED_average algorithm and the further improved PS_average algorithm. We then use ns-2 to simulate the performance of the aforementioned three algorithms based on three MTU sizes. The results show that if we take the factor of packet size into consideration, the RED_average algorithm has a better loss rate and throughput. The PS_average algorithm, which takes the average packet size into consideration to adjust the intended loss rate for smaller packets, has an even further improved performance. We have shown that with the above two new algorithms, a better balance of the loss rate for all packets can be achieved, and thus improved utilization of the network resources.
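A size-aware drop decision in the spirit described here can be sketched by scaling RED's drop probability by packet size, similar to classic RED's "byte mode"; the thresholds and names below are illustrative assumptions, not the thesis's RED_average or PS_average definitions:

```python
def red_drop_probability(avg_queue, pkt_size, min_th=5.0, max_th=15.0,
                         max_p=0.1, mean_pkt_size=1500.0):
    """RED drop probability with packet-size scaling: between the two
    thresholds the base probability rises linearly with the average
    queue length, then is scaled by packet size so that small packets
    are dropped less often. All parameter values are illustrative."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    p_base = max_p * (avg_queue - min_th) / (max_th - min_th)
    return min(1.0, p_base * pkt_size / mean_pkt_size)
```

A 750-byte packet here faces half the drop probability of a 1500-byte packet at the same average queue length, which is the balancing effect the abstract reports.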
42

Ryu, Jung Ho. "Congestion control and routing over challenged networks." Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-12-4620.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This dissertation is a study on the design and analysis of novel, optimal routing and rate control algorithms in wireless, mobile communication networks. Congestion control and routing algorithms up to now have been designed and optimized for wired or wireless mesh networks. In those networks, optimal algorithms (optimal in the sense that the throughput is maximized, the delay is minimized, or the network operation cost is minimized) can be engineered based on the classic time-scale decomposition assumption: the dynamics of the network are either fast enough that these algorithms essentially see the average, or slow enough that any changes can be tracked, allowing the algorithms to adapt over time. However, as technological advancements enable integration of ever more mobile nodes into communication networks, any rate control or routing algorithm based, for example, on averaging out the capacity of the wireless mobile link, or on tracking the instantaneous capacity, will perform poorly. The common element in our solution to engineering efficient routing and rate control algorithms for mobile wireless networks is to make the wireless mobile links seem as if they were wired or wireless links to all but a few nodes that directly see the mobile links (either the mobiles, or nodes that can transmit to or receive from the mobiles), through an appropriate use of queuing structures at these selected nodes. This approach allows us to design end-to-end rate control or routing algorithms for wireless mobile networks so that neither averaging nor instantaneous tracking is necessary, as we have done in the following three networks. A network where we can easily demonstrate the poor performance of a rate control algorithm based on either averaging or tracking is a simple wireless downlink network where a mobile node moves but stays within the coverage cell of a single base station. 
In such a scenario, the time scale of the variations in the quality of the wireless channel between the mobile user and the base station can be such that the TCP-like congestion control algorithm at the source cannot track the variation and is therefore unable to adjust the instantaneous coding rate at which the data stream can be encoded, i.e., the channel variation time scale is matched to the TCP round-trip time scale. On the other hand, setting the coding rate for the average case will still result in low throughput, due to the high sensitivity of the TCP rate control algorithm to packet loss and the fact that below-average channel conditions occur frequently. In this dissertation, we will propose modifications to the TCP congestion control algorithm for this simple wireless mobile downlink network that will improve the throughput without the need for any tracking of the wireless channel. The intermittently connected network (ICN) is another network where the classic assumption of time-scale decomposition is no longer relevant. An intermittently connected network is composed of multiple clusters of nodes that are geographically separated. Each cluster is connected wirelessly internally, but inter-cluster communication between two nodes in different clusters must rely on mobile carrier nodes to transport data between clusters. For instance, a mobile would make contact with a cluster and pick up data from that cluster, then move to a different cluster and drop off data into the second cluster. On contact, a large amount of data can be transferred between a cluster and a mobile, but the time duration between successive mobile-cluster contacts can be relatively long. 
In this network, an inter-cluster rate controller based on instantaneously tracking the mobile-cluster contacts can lead to under-utilization of the network resources; if it is based on using the long-term average achievable rate of the mobile-cluster contacts, this can lead to large buffer requirements within the clusters. We will design and analyze a throughput-optimal routing and rate control algorithm for ICNs with minimum delay, based on a back-pressure algorithm that neither averages out nor tracks the contacts. The last type of network we study is a network with stationary nodes that are far apart from each other and rely on mobile nodes to communicate with each other. Each mobile transport node can be on one of several fixed routes, and these mobiles drop off or pick up data to and from the stationary nodes that are on that route. Each route has an associated cost that must be paid by the mobiles to be on it (a longer route would have a larger cost, since it would require the mobile to expend more fuel), and stationary nodes pay different costs to have a packet picked up by the mobiles on different routes. The challenge in this type of network is to design a distributed route selection algorithm for the mobiles and the stationary nodes to stabilize the network and minimize the total network operation cost. A sum-cost minimization algorithm based on average source rates and mobility movement patterns would require global knowledge of the rates and movement patterns at all stationary and mobile nodes, rendering such an algorithm centralized and weak in the presence of network disruptions. Algorithms based on instantaneous contacts, on the contrary, would be impractical, as the mobile-stationary contacts are extremely short and infrequent.
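The back-pressure idea underlying the proposed ICN algorithm can be sketched as a per-link commodity selection by maximum queue differential; the data layout and names below are illustrative assumptions, not the dissertation's formulation:

```python
def backpressure_schedule(queues, links):
    """One scheduling step of a back-pressure algorithm: for each link
    (i, j), pick the commodity with the largest positive backlog
    differential Q_i[c] - Q_j[c]; an idle link maps to None.
    queues: {node: {commodity: backlog}}; links: [(i, j), ...]."""
    decisions = {}
    for i, j in links:
        best_c, best_w = None, 0
        for c, backlog in queues[i].items():
            w = backlog - queues[j].get(c, 0)
            if w > best_w:
                best_c, best_w = c, w
        decisions[(i, j)] = best_c
    return decisions
```

Because the rule depends only on current backlogs at the link's endpoints, it needs neither average contact rates nor instantaneous contact tracking, which is the point made above.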
43

Vallamsundar, Banupriya. "Congestion Control for Adaptive Satellite Communication Systems with Intelligent Systems." Thesis, 2007. http://hdl.handle.net/10012/3295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
With the advent of life-critical and real-time services such as remote operations over satellite, e-health, etc., providing a guaranteed minimum level of service at every ground terminal of the satellite communication system has gained utmost priority. Ground terminals and the hub are not equipped with the required intelligence to predict and react to inclement and dynamic weather conditions on their own. The focus of this thesis is to develop intelligent algorithms that would aid in adaptive management of the quality of service at the ground terminal and gateway levels. This is done to adapt both the ground terminal and the gateway to changing weather conditions and to attempt to maintain a steady throughput level and the Quality of Service (QoS) requirements on queue delay, jitter, and probability of packet loss. The existing satellite system employs the First-In-First-Out routing algorithm to control congestion in its networks. This mechanism is not equipped with adequate ability to contend with changing link capacities (a common result of bad weather and faults) or to provide different levels of prioritized service to the customers that satisfy QoS requirements. This research proposes to use the reported strength of fuzzy logic in controlling highly non-linear and complex systems such as the satellite communication network. The proposed fuzzy based model, when integrated into the satellite gateway, provides the needed robustness for the ground terminals to cope with varying levels of traffic and the dynamic impacts of weather.
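A fuzzy controller of the kind described reduces to membership evaluation plus rule aggregation; a toy sketch with triangular membership functions (the rule base, breakpoints, and output singletons are invented for illustration, not the thesis's model):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    and falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_service_rate(queue_delay_ms):
    """Toy two-rule controller: if delay is LOW, keep a low service
    rate; if delay is HIGH, raise it. Weighted-average defuzzification
    over output singletons 0.2 and 1.0 (all numbers invented)."""
    low = tri(queue_delay_ms, -1.0, 0.0, 50.0)
    high = tri(queue_delay_ms, 0.0, 100.0, 201.0)
    total = low + high
    return 1.0 if total == 0.0 else (low * 0.2 + high * 1.0) / total
```

A real gateway controller would add more linguistic variables (link capacity, traffic class) and rules, but the evaluate-then-defuzzify structure is the same.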
44

Ho, Cheng-Yuan, and 何承遠. "Design and Performance Evaluation of an End-to-end Congestion Control Algorithm." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/78906900024523365472.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Doctoral dissertation
National Chiao Tung University
Institute of Computer Science and Engineering
96
The success of the Internet can be attributed to the large number of useful applications that a user can easily run over it. The Transmission Control Protocol (TCP) is a widely deployed end-to-end transport protocol in the current Internet. This is because TCP provides an acceptable service with reliable data transport and controls a connection's bandwidth usage to avoid network congestion between two end hosts on the Internet. Nowadays, research on TCP/IP is still a hot topic in both academia and industry. With the fast growth of Internet hardware, technologies, and applications, the network bandwidth available to a user is getting higher and wireless links are more and more popular everywhere. This makes Internet traffic increase quickly and the modes of data transport vary. Also, users' demands on network performance are getting stricter. Facing these challenges, how to efficiently utilize network resources, how to work well in both high-bandwidth networks and heterogeneous networks (mixed wired and wireless networks), and how to satisfy users' requests are essential issues for a successful congestion control mechanism. In addition, if the modification of TCP's congestion control does not keep up with the Internet's changes, the performance bottleneck will soon be TCP itself. Up to now, in order to improve network utilization, TCP has had several implementation versions, which can be classified into two categories: loss-based TCPs and delay-based TCPs. Although it has been demonstrated that delay-based TCPs outperform loss-based TCPs in the aspects of overall network utilization, stability, fairness, throughput, etc., in the real Internet, loss-based TCPs are still the mainstream and remain the dominant algorithm used in practice. 
Furthermore, the implementation of a loss-based TCP is easier than that of a delay-based TCP, and a loss-based TCP can get more resources than a delay-based TCP when they coexist in the same network, so many studies focus on loss-based TCP algorithms, and one loss-based TCP mechanism, TCP SACK, has been widely deployed on the Internet. However, in fact, loss-based and delay-based TCPs have their own advantages and shortcomings. Hence, an issue arises: how do we design the algorithm and architecture of TCP? In this dissertation, we propose a novel end-to-end congestion control mechanism called Medley TCP that is able to efficiently utilize Internet resources, adapt itself to new network circumstances, satisfy users' requests, and overcome the shortcomings of conventional TCP. Medley TCP differs substantially from traditional TCP in that we redesign the algorithm and architecture of TCP's whole congestion control mechanism. Specifically, Medley TCP tries to combine the advantages and characteristics of both loss-based and delay-based TCPs, and therefore incorporates a scalable delay-based component into a loss-based TCP algorithm. This scalable delay-based component has a rapid window increase rule when the network is sensed to be under-utilized and gracefully reduces the sending rate once the bottleneck queue builds. Therefore, Medley TCP connections react faster and better in high-BDP networks and improve the overall performance. Moreover, we utilize the innate nature of Medley TCP to distinguish congestion losses from random packet losses precisely. Through this packet loss differentiation, Medley TCP reacts appropriately to the losses, and consequently the throughput of connections over heterogeneous networks can be significantly improved. 
Extensive experiments with a network simulator and real-world Internet traffic measurements have been conducted and show that Medley TCP not only can improve performance significantly over high-BDP networks and heterogeneous networks but also keeps the good characteristics of delay-based TCPs in the aspects of overall network utilization, stability, fairness, throughput, and so on. Importantly, Medley TCP does not cause any detrimental effects to TCP SACK, and vice versa, when they share the same resources in the networks: that is, Medley TCP connections achieve higher throughput by using bandwidth that would not be used by SACK connections anyway, not "stealing" bandwidth from SACK connections. Finally, Medley TCP only involves modification at the sender, without requiring changes to the receiver protocol stack or intermediate router nodes. As a result, it can be easily deployed in the current Internet.
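A delay-based window component of the kind Medley TCP incorporates can be sketched in the Vegas style, estimating the bottleneck backlog from RTT inflation; the thresholds and step sizes below are illustrative assumptions, not the dissertation's actual rules:

```python
def delay_based_adjust(cwnd, base_rtt, rtt, alpha=2.0, beta=4.0):
    """Vegas-style delay component: estimate the packets queued at the
    bottleneck from RTT inflation, grow the window quickly while the
    path looks under-utilized, and back off gracefully once a queue
    builds. Constants and step sizes are illustrative assumptions."""
    backlog = cwnd * (1.0 - base_rtt / rtt)  # estimated queued packets
    if backlog < alpha:        # under-utilized: rapid increase
        return cwnd + 2
    if backlog > beta:         # queue building: graceful decrease
        return cwnd - 1
    return cwnd                # in the operating band: hold
```

The same backlog estimate can also help distinguish congestion losses (large backlog) from random wireless losses (small backlog), which is the loss differentiation role the abstract mentions.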
45

Chiu, Kuo-Tung, and 邱國棟. "Performance Analysis of congestion control Algorithm for the Medium Access controller of 802.11 WLANs." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/91268554109053285725.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
Chung Hua University
Department of Electrical Engineering
98
This dissertation proposes an improved mechanism to better alleviate network congestion in wireless LANs (WLANs). As the number of workstations in a crowded network increases, the probability of collision increases as well. In the IEEE 802.11 DCF algorithm, station transmissions are scheduled according to CSMA/CA and the binary exponential back-off scheme. Although back-off reduces the number of collisions in the network, its effectiveness is limited because each individual workstation is unaware of the current congestion state of the network and places data onto the channel indiscriminately, which leads to a higher collision probability. In this thesis, the Distributed Contention Control (DCC) method is considered as a remedy for the collision-avoidance problem in IEEE 802.11 DCF. Before a workstation transmits, the DCC mechanism evaluates the current slot utilization rate; a high slot utilization rate indicates intense channel contention. To avoid collision, DCC defers access to the transmission channel for a period of time and reschedules the workstations' transmission moments. Although this method decreases collision probability and thereby alleviates congestion, it causes unstable channel access for the workstations, because DCC can only detect the presence of data transmission: a DCC system generates unnecessary waiting by stalling transmission even when only a small number of workstations exist. Taking this concern into consideration, we propose an improvement to the DCC scheme, the Modified DCC (M-DCC) method. We narrow the deferral requirement to include the condition that multiple workstations are competing for access, so that access is deferred only when this condition holds. In addition, a Markov chain mathematical model is used to analyze and compare the DCF, DCC, and M-DCC methods.
MATLAB evaluation verifies that the M-DCC method not only enhances DCC efficiency, effectively diminishes the congestion level, and reduces the collision probability, but also boosts data throughput.
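To make the difference between the two deferral rules concrete, here is a toy sketch of the decision logic; the thresholds, names, and the contention estimate are assumptions for illustration, not taken from the thesis:

```python
def dcc_defer(slot_utilization, util_threshold=0.5):
    """Plain DCC-style test: defer whenever slot utilization alone
    signals a congested channel, regardless of how many stations
    are actually contending."""
    return slot_utilization > util_threshold

def mdcc_defer(slot_utilization, contending_stations,
               util_threshold=0.5, station_threshold=2):
    """M-DCC-style test: additionally require that multiple
    workstations are competing for access, avoiding needless
    deferral when only one station is transmitting."""
    return (slot_utilization > util_threshold
            and contending_stations >= station_threshold)
```

With a busy channel but a single sender (e.g. `slot_utilization=0.8`, one contending station), `dcc_defer` stalls while `mdcc_defer` does not, which is exactly the unnecessary waiting the thesis attributes to plain DCC.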
46

Wang, Wun-jhang, and 汪汶樟. "Study on TCP Congestion Window Control Algorithm to Improve TCP Performance in Heterogeneous Networks." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/32432290741941024673.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Master's thesis
Da-Yeh University
Master's Program, Department of Computer Science and Information Engineering
99
Recently, TCP performance in heterogeneous networks has become a focus of study. In traditional wired networks the transmission error rate is very low, so the packet loss that degrades transmission performance is attributed to network congestion. However, today's Internet is no longer made up of wired networks alone: a single TCP connection may traverse both wired and wireless links. On wireless links, interference and noise in the transmission channel make the bit error rate (BER) much higher than on wired links, and the higher BER increases packet loss. If the TCP algorithm still assumes that packet loss is due to network congestion and reduces the congestion window, the performance of the TCP connection is seriously degraded. This thesis therefore proposes an algorithm to distinguish network congestion from BER-induced loss. The proposed algorithm records the TCP packet sending time and the corresponding ACK reception time to calculate a parameter defined as the RTT ratio. Using the RTT ratio, the proposed algorithm determines the cause of packet loss and adjusts the TCP congestion window and slow-start threshold accordingly. NS2 simulation results show that the proposed algorithm performs much better than TCP Reno in high-BER environments, at the cost of a small performance degradation in low-BER environments. Keywords: TCP Reno, congestion control, congestion window, RTT
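The abstract gives only the idea of the RTT ratio, not its formula. A hypothetical sketch of such a classifier and the corresponding window reaction follows; the threshold and reaction values are invented for illustration and are not the thesis's actual parameters:

```python
def classify_loss(send_times, ack_times, ratio_threshold=1.5):
    """Compare the most recent RTT against the minimum observed RTT.
    A ratio near 1 suggests an empty bottleneck queue, so the loss is
    likely a wireless (BER) loss; a large ratio suggests a standing
    queue, so the loss is likely congestive. Threshold is assumed."""
    rtts = [ack - send for send, ack in zip(send_times, ack_times)]
    ratio = rtts[-1] / min(rtts)
    return "congestion" if ratio > ratio_threshold else "wireless"

def react_to_loss(cwnd, ssthresh, cause):
    """Adjust cwnd and ssthresh per the determined cause: halve on a
    congestion loss (Reno-style), keep the window on a wireless loss."""
    if cause == "congestion":
        ssthresh = max(2.0, cwnd / 2.0)
        cwnd = ssthresh
    return cwnd, ssthresh
```

The point of the split reaction is the one the abstract makes: only losses accompanied by inflated RTTs shrink the window, so wireless corruption no longer triggers an unnecessary rate cut.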
47

Lierkamp, Darren University of Ballarat. "A New ramp metering control algorithm for optimizing freeway travel times." 2006. http://archimedes.ballarat.edu.au:8080/vital/access/HandleResolver/1959.17/12726.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
"In many cities around the world traffic congestion has been increasing faster than can be dealt with by new road construction. To resolve this problem traffic management devices and technology such as ramp meters are increasingly being utilized."--leaf 1.
Master of Information Technology
48

Lierkamp, Darren. "A new ramp metering control algorithm for optimizing freeway travel times." Thesis, 2006. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/63788.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
"In many cities around the world traffic congestion has been increasing faster than can be dealt with by new road construction. To resolve this problem traffic management devices and technology such as ramp meters are increasingly being utilized."--leaf 1.
Master of Information Technology
49

Lierkamp, Darren. "A New ramp metering control algorithm for optimizing freeway travel times." 2006. http://archimedes.ballarat.edu.au:8080/vital/access/HandleResolver/1959.17/14605.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
"In many cities around the world traffic congestion has been increasing faster than can be dealt with by new road construction. To resolve this problem traffic management devices and technology such as ramp meters are increasingly being utilized."--leaf 1.
Master of Information Technology
50

張康維. "A distributed/parallel algorithm for the optimal congestion control of a single destination data network." Thesis, 1990. http://ndltd.ncl.edu.tw/handle/14997934354440983409.

Full text
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography