Dissertations / Theses on the topic 'Computer network'

To see the other types of publications on this topic, follow the link: Computer network.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Computer network.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Alexander, David. "A Network Metadata Infrastructure for Locating Network Devices." Ohio University / OhioLINK, 2004. http://www.ohiolink.edu/etd/view.cgi?ohiou1088176648.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Brande, Julia K. Jr. "Computer Network Routing with a Fuzzy Neural Network." Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/29685.

Full text
Abstract:
The growing use of computer networks requires improvements in network technologies and management techniques so that users receive high-quality service. As more individuals transmit data through a computer network, the quality of service received by the users begins to degrade. A major aspect of computer networks that is vital to quality of service is data routing. A more effective method for routing data through a computer network can help address the new problems being encountered in today's growing networks. Effective routing algorithms use various techniques to determine the most appropriate route for transmitting data. Determining the best route through a wide area network (WAN) requires the routing algorithm to obtain information concerning all of the nodes, links, and devices present on the network. The most relevant routing information involves various measures that are often obtained in an imprecise or inaccurate manner, thus suggesting that fuzzy reasoning is a natural method to employ in an improved routing scheme. The neural network is deemed a suitable accompaniment because it maintains the ability to learn in dynamic situations. Once the neural network is initially designed, any alterations in the computer routing environment can easily be learned by this adaptive artificial intelligence method. The capability to learn and adapt is essential in today's rapidly growing and changing computer networks. These techniques, fuzzy reasoning and neural networks, when combined provide a very effective routing algorithm for computer networks. Computer simulation is employed to demonstrate that the new fuzzy routing algorithm outperforms the Shortest Path First (SPF) algorithm in most computer network situations. The benefits increase as the computer network migrates from a stable network to a more variable one. The advantages of applying this fuzzy routing algorithm are apparent when considering the dynamic nature of modern computer networks.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
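
To make the baseline in the abstract above concrete: Shortest Path First routing is essentially Dijkstra's algorithm run over link costs. The sketch below is a minimal illustration, with a made-up "fuzzy" blend of imprecise delay and utilisation estimates standing in for the thesis's fuzzy-neural cost model; the blending rule, topology and numbers are assumptions, not taken from the dissertation.

```python
# Minimal Dijkstra (the core of SPF) with a toy fuzzy-style cost blend.
import heapq

def link_cost(delay_ms, utilisation):
    # crude blend: congested links are penalised non-linearly (illustrative only)
    congestion = min(1.0, max(0.0, (utilisation - 0.5) / 0.5))
    return delay_ms * (1.0 + 2.0 * congestion)

def spf(graph, src, dst):
    """graph: dict node -> list of (neighbour, delay_ms, utilisation)."""
    dist, frontier = {src: 0.0}, [(0.0, src, [src])]
    while frontier:
        d, node, path = heapq.heappop(frontier)
        if node == dst:
            return d, path
        for nxt, delay, util in graph.get(node, ()):
            nd = d + link_cost(delay, util)
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(frontier, (nd, nxt, path + [nxt]))
    return float("inf"), None

g = {"A": [("B", 5, 0.9), ("C", 8, 0.2)], "B": [("D", 5, 0.1)], "C": [("D", 4, 0.3)]}
print(spf(g, "A", "D"))   # prefers the longer but uncongested path A -> C -> D
```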
3

Duff, Kevin Craig. "Network-layer reservation TDM for ad-hoc 802.11 networks." Thesis, Rhodes University, 2008. http://eprints.ru.ac.za/1146/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Burchett, Ian. "Quantifying Computer Network Security." TopSCHOLAR®, 2011. http://digitalcommons.wku.edu/theses/1118.

Full text
Abstract:
Simplifying network security data to the point that it is readily accessible and usable by a wider audience is becoming increasingly important as networks grow larger and security conditions and threats become more dynamic and complex, requiring a broader and more varied security staff. Given the need for a simple metric to quantify the security level of a network, this thesis proposes reducing a network's security risk level to a single metric. Methods for this simplification of an entire network's security level are applied to several characteristic networks. Computer network port vulnerabilities are identified from NIST's National Vulnerability Database (NVD), and, using NVD's Common Vulnerability Scoring System values, composite scores are created for each computer on the network; a composite score is then computed collectively for the entire network, which accurately represents the health of the entire network. Special concerns about small numbers of highly vulnerable computers or especially critical members of the network are confronted.
APA, Harvard, Vancouver, ISO, and other styles
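
As a rough illustration of the kind of aggregation the abstract above describes, the sketch below rolls per-service CVSS base scores up into host and network composites, with extra weight on the worst host and on designated critical machines. The weighting rules, function names and example values are illustrative assumptions, not the thesis's actual scoring model.

```python
# Illustrative roll-up of CVSS base scores (0-10) to host and network level.
def host_score(cvss_scores, alpha=0.6):
    """Blend the worst vulnerability with the average so one critical
    service dominates but the rest still contribute."""
    if not cvss_scores:
        return 0.0
    worst = max(cvss_scores)
    mean = sum(cvss_scores) / len(cvss_scores)
    return alpha * worst + (1 - alpha) * mean

def network_score(hosts, critical=None, beta=0.7):
    """hosts: dict host -> list of CVSS base scores.
    critical: optional set of hosts whose scores are emphasised."""
    critical = critical or set()
    scores = {h: host_score(v) for h, v in hosts.items()}
    worst = max(scores.values())
    weights = [2.0 if h in critical else 1.0 for h in scores]
    weighted_mean = sum(w * s for w, s in zip(weights, scores.values())) / sum(weights)
    return beta * worst + (1 - beta) * weighted_mean

if __name__ == "__main__":
    hosts = {"web": [9.8, 5.3], "db": [7.5], "desktop-17": [4.3, 3.1]}
    print(round(network_score(hosts, critical={"db"}), 2))
```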
5

Somalingam, Ramesh Ramvel. "Network performance monitoring for multimedia networks." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23939.

Full text
Abstract:
Multimedia applications require certain guarantees from the underlying system concerning quality of various parameters, such as the throughput, delay, loss rate etc. We assume that such quality of service (QoS) parameters are normally agreed upon for the duration of a session, and should be maintained throughout that session. If a QoS violation occurs, possibly due to the temporary overload of one of the system components, either the user will notice this and may request a renegotiation of the quality and cost parameters, or the system will automatically do some internal reconfiguration in order to recover from the problem. In the latter case, the system needs to continuously monitor the relevant performance parameters. In this thesis, we develop means for performing such monitoring in the context of a News-on-Demand application. We have designed and implemented a distributed QoS monitoring facility that can be used by distributed multimedia applications for QoS monitoring and QoS violation detection. The system consists of a measurement component and an administrative component. The measurement component is responsible for continuously measuring the end-to-end QoS parameters of connections between the server and client application, while the administrative component is responsible for the overall administration of the monitoring system, which includes raising QoS violation notifications if the quality of service is violated.
APA, Harvard, Vancouver, ISO, and other styles
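
The violation-detection step described in the abstract above reduces, at its simplest, to comparing measured end-to-end parameters against the values agreed for the session. A toy version follows, with assumed parameter names, thresholds and notification handling (the thesis's measurement and administrative components are, of course, far richer).

```python
# Toy QoS violation check: compare measurements against agreed session values.
AGREED_QOS = {"throughput_kbps": 512, "delay_ms": 150, "loss_rate": 0.01}

def qos_violations(measured):
    violations = []
    if measured["throughput_kbps"] < AGREED_QOS["throughput_kbps"]:
        violations.append("throughput below agreed minimum")
    if measured["delay_ms"] > AGREED_QOS["delay_ms"]:
        violations.append("delay above agreed maximum")
    if measured["loss_rate"] > AGREED_QOS["loss_rate"]:
        violations.append("loss rate above agreed maximum")
    return violations

sample = {"throughput_kbps": 430, "delay_ms": 95, "loss_rate": 0.02}
for v in qos_violations(sample):
    print("QoS violation:", v)   # would trigger a notification to the administrator
```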
6

Kim, Anthony Eli. "On network coding capacity : matroidal networks and network capacity regions." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62657.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 68-70).
One fundamental problem in the field of network coding is to determine the network coding capacity of networks under various network coding schemes. In this thesis, we address the problem with two approaches: matroidal networks and capacity regions. In our matroidal approach, we prove the converse of the theorem which states that, if a network is scalar-linearly solvable then it is a matroidal network associated with a representable matroid over a finite field. As a consequence, we obtain a correspondence between scalar-linearly solvable networks and representable matroids over finite fields in the framework of matroidal networks. We prove a theorem about the scalar-linear solvability of networks and field characteristics. We provide a method for generating scalar-linearly solvable networks that are potentially different from the networks that we already know are scalar-linearly solvable. In our capacity region approach, we define a multi-dimensional object, called the network capacity region, associated with networks that is analogous to the rate regions in information theory. For the network routing capacity region, we show that the region is a computable rational polytope and provide exact algorithms and approximation heuristics for computing the region. For the network linear coding capacity region, we construct a computable rational polytope, with respect to a given finite field, that inner bounds the linear coding capacity region and provide exact algorithms and approximation heuristics for computing the polytope. The exact algorithms and approximation heuristics we present are not polynomial time schemes and may depend on the output size.
by Anthony Eli Kim.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
7

Geha, Abbas. "Computer enhanced network design." Thesis, University of Sussex, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.344069.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Vega, Juan Carlos. "Computer network operations methodology." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Mar%5FVega.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Post, David L. "Network Management: Assessing Internet Network-Element Fault Status Using Neural Networks." Ohio : Ohio University, 2008. http://www.ohiolink.edu/etd/view.cgi?ohiou1220632155.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Dahlgren, Jeremy. "Efficient failure detection protocols for point-to-point communication networks /." Online version of thesis, 2004. https://ritdml.rit.edu/dspace/handle/1850/2752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Otten, Fred. "Network simulation for professional audio networks." Thesis, Rhodes University, 2015. http://hdl.handle.net/10962/d1017935.

Full text
Abstract:
Audio Engineers are required to design and deploy large multi-channel sound systems which meet a set of requirements and use networking technologies such as Firewire and Ethernet AVB. Bandwidth utilisation and parameter groupings are among the factors which need to be considered in these designs. An implementation of an extensible, generic simulation framework would allow audio engineers to easily compare protocols and networking technologies and get near real time responses with regards to bandwidth utilisation. Our hypothesis is that an application-level capability can be developed which uses a network simulation framework to enable this process and enhances the audio engineer’s experience of designing and configuring a network. This thesis presents a new, extensible simulation framework which can be utilised to simulate professional audio networks. This framework is utilised to develop an application - AudioNetSim - based on the requirements of an audio engineer. The thesis describes the AudioNetSim models and implementations for Ethernet AVB, Firewire and the AES-64 control protocol. AudioNetSim enables bandwidth usage determination for any network configuration and connection scenario and is used to compare Firewire and Ethernet AVB bandwidth utilisation. It also applies graph theory to the circular join problem and provides a solution to detect circular joins.
APA, Harvard, Vancouver, ISO, and other styles
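
The "circular join" problem mentioned in the abstract above can be treated as detecting a directed cycle in the connection graph. Below is a minimal depth-first-search sketch with an invented data layout, not AudioNetSim's actual structures.

```python
# Detect a circular join as a directed cycle using DFS with three node colours.
def has_circular_join(connections):
    """connections: dict node -> iterable of downstream nodes.
    Returns True if the joins form a cycle."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {n: WHITE for n in connections}

    def visit(node):
        colour[node] = GREY
        for nxt in connections.get(node, ()):
            if colour.get(nxt, WHITE) == GREY:          # back edge -> cycle
                return True
            if colour.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and visit(n) for n in list(connections))

print(has_circular_join({"mixer": ["amp"], "amp": ["monitor"], "monitor": ["mixer"]}))
```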
12

Sundaresan, Latha. "Simple network management protocol /." Online version of thesis, 1995. http://hdl.handle.net/1850/12254.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Kusetoğulları, Hüseyin. "Network routing optimisation and effective multimedia transmission to enhance QoS in communication networks." Thesis, University of Warwick, 2012. http://wrap.warwick.ac.uk/46802/.

Full text
Abstract:
With the increased usage of communication services in networks, finding routes for reliable transmission and providing effective multimedia communication have become very challenging problems. This has been a strong motivation to examine and develop methods and techniques to find routing paths efficiently and to provide effective multimedia communication. This thesis is mainly concerned with designing, implementing and adapting intelligent algorithms to solve the computational complexity of network routing problems and testing the performance of intelligent algorithms’ applications. It also introduces hybrid algorithms which are developed by exploiting the similarities between the genetic algorithm (GA) and particle swarm optimization (PSO) intelligent systems algorithms. Furthermore, it examines the design of a new encoding/decoding method to offer a solution for the problem of unachievable multimedia information in multimedia multicast networks. The techniques presented and developed within the thesis aim to provide maximum utilization of network resources for handling communication problems. This thesis first proposes GA and PSO implementations which are adapted to solve the single and multi-objective functions in network routing problems. To offer solutions for network routing problems, binary variable-length and priority-based encoding methods are used in the intelligent algorithms to construct valid paths or potential solutions. The performance of generation operators in GA and PSO is examined and analyzed by solving the various shortest path routing problems, and it is shown that the performance of the algorithms varies with the operators selected. Moreover, a hybrid algorithm is developed to compensate for the limited search capability of the individual intelligent algorithms and is implemented to solve the single-objective function. The proposed method uses a strategy of sharing information between GA and PSO to achieve significant performance enhancement to solve routing optimization problems. The simulation results demonstrate the efficiency of the hybrid algorithm by optimizing the shortest path routing problem. Furthermore, intelligent algorithms are implemented to solve a multi-objective function which involves more constraints of resources in communication networks. The algorithms are adapted to find the multi-optimal paths to provide effective multimedia communication in lossy networks. The simulation results verify that the implemented algorithms are efficient and accurate methods for solving the multi-objective function and finding multi-optimal paths to deliver multimedia packets in lossy networks. Furthermore, the thesis proposes a new encoding/decoding method to maximize throughput in multimedia multicast networks. The proposed method is combined with the two most widely used Multiple Description Coding (MDC) methods. The utilization of the proposed method is discussed by comparing the two MDC methods. Through analyzing the simulation results using these intelligent systems algorithms, it has been shown that feasible solutions can be obtained by optimizing complex network problems. Moreover, the methods proposed and developed, namely the hybrid algorithms and the encoding/decoding method, also demonstrate their efficiency and effectiveness compared with other techniques.
APA, Harvard, Vancouver, ISO, and other styles
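
Priority-based encoding, which the abstract above uses to keep GA/PSO candidate solutions feasible, decodes a vector of node priorities into a path by repeatedly stepping to the highest-priority unvisited neighbour. The sketch below is a minimal illustration under assumed names and a toy topology; the thesis's tie-breaking and repair rules may differ.

```python
# Decode a priority vector into a source-destination path (illustrative only).
def decode_path(priorities, adjacency, src, dst):
    path, node, visited = [src], src, {src}
    while node != dst:
        candidates = [n for n in adjacency.get(node, ()) if n not in visited]
        if not candidates:
            return None                      # dead end -> infeasible chromosome
        node = max(candidates, key=lambda n: priorities[n])
        visited.add(node)
        path.append(node)
    return path

adj = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(decode_path({"A": 0, "B": 3, "C": 7, "D": 9}, adj, "A", "D"))  # ['A', 'C', 'D']
```

A GA or PSO would then evolve the priority vectors, while this decoder guarantees that every evaluated individual corresponds to a valid path.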
14

Zhao, Wenrui. "Routing and Network Design in Delay Tolerant Networks." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14085.

Full text
Abstract:
Delay tolerant networks (DTNs) are a class of emerging networks that exhibit significantly different characteristics from today's Internet, such as intermittent connectivity, large delay, and high loss rates. DTNs have important applications in disaster relief, military, rural Internet access, environmental sensing and surveillance, interplanetary communication, underwater sensing, and vehicular communication. While not the common case for networking, DTNs represent some of the most critical cases, where the ability to communicate can make a huge difference for human lives. Supporting effective communication in DTNs, however, is challenging. First, with intermittent connectivity, DTNs are often extremely limited in capacity. Second, given resource limitations and uncertainty in DTNs, it is critical to deliver data efficiently and robustly. The situation is especially acute for multicast which sends data to multiple destinations. This thesis seeks to address these two issues. To enhance network capacity in DTNs, we propose a message ferrying scheme that exploits the use of special mobile nodes (called message ferries) and controlled device mobility to deliver data. Message ferries are utilized to transport data via mobility between sources and destinations. We develop a foundation for the control of the mobility of message ferries, and nodes if possible, to cooperatively deliver data under a variety of conditions. We also study another approach which deploys new nodes called throwboxes to enhance capacity. Throwboxes are small and inexpensive wireless devices. By relaying data between mobile nodes, throwboxes are able to create data transfer opportunities that otherwise would not exist. We systematically investigate the issues of deployment and routing, and develop algorithms for various deployment and routing approaches. Based on extensive evaluation, we obtain several findings to guide the design and operation of throwbox-augmented DTNs. To address the issue of efficient and robust data delivery, we focus on DTN multicasting. Given the unique characteristics of DTNs, traditional solutions such as IP multicast can not be simply ported to DTNs. We identify the limitations of IP multicast semantics in DTNs and define new semantic models for DTN multicast. Based on these semantic models, we develop and evaluate several multicast routing algorithms with different routing strategies.
APA, Harvard, Vancouver, ISO, and other styles
15

Xie, Jing. "A Temporal Network Calculus for Performance Analysis of Computer Networks." Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-14619.

Full text
Abstract:
One inevitable trend of network development is to deliver information with various traffic characteristics and diverse Quality of Service (QoS) requirements. In response to the continually growing demand for more bandwidth, network performance analysis is needed to optimize the performance of existing technologies and evaluate the efficiency of new ones. Performance analysis investigates how traffic management mechanisms deployed in the network affect the resource allocation among users and the performance which the users experience. This topic can be investigated by constructing models of traffic management mechanisms and studying how these mechanisms perform under various types of network traffic. To this end, appropriate mathematical models are needed to characterize the traffic management mechanisms which we are interested in and represent different types of network traffic. In addition, fundamental properties which can be employed to manipulate the models should be explored. Over the last two decades a relatively new theory, stochastic network calculus, has been developed to enable mathematical performance analysis of computer networks. Particularly, several related processes are mathematically modeled, including the arrival process, the waiting process and the service process. This theory can be applied to the derivation and calculation of several performance metrics such as the backlog bound and the delay bound. The most attractive contribution of stochastic network calculus is to characterize the behavior of a process based on some bound on the complementary cumulative distribution function (CCDF). The behavior of a computer network is often subject to many irregularities and stochastic fluctuations. The models based on the bound on the CCDF are not very accurate, while they are more feasible for abstracting computer network systems and representing various types of network traffic. This thesis is devoted to investigate the performance of networks from the temporal perspective. Specifically, the traffic arrival process characterizes the distribution of the cumulative inter-arrival time and the service process describes the distribution of the cumulative service time. Central to finding a bound on the CCDF of the cumulative interarrival time and the cumulative service time, several variations of the traffic characterization and the service characterization are developed. The purpose of developing several variations to characterize the same process is to facilitate the derivation and calculation of performance metrics. In order to derive and calculate the performance metrics, four fundamental properties are explored, including the service guarantees, the output characterization, the concatenation property and the superposition property. The four properties can be combined differently when deriving the performance metrics of a single node, a series of nodes or the superposition flow. Compared to the available literature on stochastic network calculus which mainly focuses on studying network performance in the spacedomain, this work develops a generic framework for mathematically analyzing network performance in the time-domain. The potential applications of this temporal approach include the wireless networks and the multi-access networks. Furthermore, the complete procedure of concretizing the generic traffic models and service models is presented in detail. 
It reveals the key to applying the developed temporal network calculus approach to network performance analysis, i.e., deriving the bounding function, which is the upper bound on the tail probability of a stochastic process. Several mathematical methods are introduced, such as the martingale, the moment generating function (MGF) and a concentration theory result.
APA, Harvard, Vancouver, ISO, and other styles
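
The bounding functions referred to at the end of the abstract above are typically obtained from a Chernoff-type argument. As general background only, and not the thesis's specific result, a tail bound on a random quantity \(X\) (for example a cumulative inter-arrival or service time) can be written as

\[
\Pr\{X > x\} \;\le\; \inf_{\theta > 0} e^{-\theta x}\,\mathbb{E}\!\left[e^{\theta X}\right] \;=\; \inf_{\theta > 0} e^{-\theta x}\, M_X(\theta),
\]

where \(M_X(\theta)\) is the moment generating function mentioned in the abstract; the time-domain traffic and service characterizations are then built from such bounds on the CCDF.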
16

Hall, Kristopher Joseph. "Thwarting Network Stealth Worms in Computer Networks through Biological Epidemiology." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/27726.

Full text
Abstract:
This research developed a system, Rx, to provide early identification and effective control of network stealth worms in digital networks through techniques based on biological epidemiology. Network stealth worms comprise a class of surreptitious, self-propagating code that spread over network connections by exploiting security vulnerabilities in hosts. Past outbreaks due to traditional worms subverted hundreds of thousands of machines. Network stealth worms exacerbate that threat by using clandestine methods to maintain a persistent presence in the network. Biological epidemiology was shown to support the real-time detection, characterization, forecasting, and containment of network stealth worms. Epidemiology describes a scientific methodology in biology that seeks to understand, explain, and control disease. Bio-mathematical modeling led to the development of a mechanism for digital networks to identify worm infection behavior buried in anomaly data, to characterize a worm, and to forecast the temporal spread of a worm. Demographic analysis of the infected hosts revealed the subset of vulnerable machines within the population. The automated response of advanced quarantine used this information to control the spread of an identified worm by isolating both infected and vulnerable machines. The novel contributions of this research included the identification of a network stealth worm at the network-level based on end-host reports while simultaneously characterizing and forecasting the spread of the worm. Additionally, this task offered the technique of advanced quarantine through demographic analysis of the population. This work resulted in a scalable, fault-tolerant strategy that dramatically enhanced the survival rate of network hosts under attack by a stealth worm. Moreover, this approach did not require new hardware, changes to existing protocols, or participation outside the implementing organization. This research showed application to a wider range of challenges. The bio-mathematical models are extensible, allowing Rx to respond to variations on the self-propagating code presented here. The approach is applicable to other forms of malware beyond self-propagating code by interchanging the epidemic model with one more appropriate. Lastly, the strategy allowed anomaly detectors to be sensitive to lower reporting thresholds and a variety of often benign yet potentially useful events.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
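
For readers unfamiliar with the epidemiological background the abstract above draws on, the classical SIR compartment model is the usual starting point. The sketch below integrates it with a simple Euler step; the parameters and population are invented, and Rx's bio-mathematical models are more elaborate than this.

```python
# Classical SIR epidemic model with a plain Euler integration step.
def sir(s, i, r, beta=0.3, gamma=0.1, dt=0.1, steps=1000):
    history = []
    n = s + i + r
    for _ in range(steps):
        new_inf = beta * s * i / n * dt     # susceptible hosts becoming infected
        new_rec = gamma * i * dt            # infected hosts cleaned / patched
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

peak_infected = max(i for _, i, _ in sir(s=9990, i=10, r=0))
print(f"peak simultaneously infected hosts: {peak_infected:.0f}")
```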
17

McKenzie, Neil R. "The Cranium network interface architecture : support for message passing on adaptive packet routing networks /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/6874.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Jannotti, John 1974. "Network layer support for overlay networks." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/29274.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002.
Includes bibliographical references (p. 98-103).
Overlay networks are virtual networks formed by cooperating nodes that share an underlying physical network. They represent a flexible and deployable approach for applications to obtain new network semantics without modification of the underlying network, but they suffer from efficiency concerns. This thesis presents two new primitives for implementation in the network layer (i.e., the routers of the physical network). These primitives support the efficient operation and construction of overlay networks. Packet Reflection allows end hosts to request that routers perform specialized routing and duplication for certain packets. Path Painting allows multiple end hosts to determine where their disparate paths to a rendezvous point meet, in order to facilitate overlay topology building that reflects the topology of the underlying network. Both primitives can be incrementally deployed for incremental benefit. This thesis describes a variety of applications of these primitives: application level multicast systems with various semantics, an extended Internet Indirection Infrastructure with latency benefits over the original proposal, and an extension to Chord which would allow faster lookups. Experimental results on simulated topologies indicate that when all routers support the proposed primitives, less than 5% overhead (in terms of link usage and latency) remains in two common overlay network usage scenarios. In addition, the benefits gained from deployment are significant even at low deployment levels. At approximately 25% deployment, the primitives have reduced overhead by over 50%. When intelligent deployment strategies are used, link usage overhead is less than 30% at less than 10% deployment. Finally, the results indicate that these benefits affect the area local to the deployed routers, providing a deployment incentive to independent networks.
by John Jannotti.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
19

Kim, MinJi Ph D. Massachusetts Institute of Technology. "Network coding for robust wireless networks." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/71276.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (p. 157-167).
Wireless networks and communications promise to allow improved access to services and information, ubiquitous connectivity, and mobility. However, current wireless networks are not well-equipped to meet the high bandwidth and strict delay requirements of future applications. Wireless networks suffer from frequent losses and low throughput. We aim to provide designs for robust wireless networks. This dissertation presents protocols and algorithms that significantly improve wireless network performance and effectively overcome interference, erasures, and attacks. The key idea behind this dissertation is in understanding that wireless networks are fundamentally different from wired networks, and recognizing that directly applying techniques from wired networks to wireless networks limits performance. The key ingredient underlying our algorithms and protocols is network coding. By recognizing the algebraic nature of information, network coding breaks the convention of routing networks, and allows mixing of information in the intermediate nodes and routers. This mixing has been shown to have numerous performance benefits, e.g. increase in throughput and robustness against losses and failures. We present three protocols and algorithms, each using network coding to harness a different characteristic of the wireless medium. We address the problem of interference, erasures, and attacks in wireless networks with the following network coded designs. -- Algebraic NC exploits strategic interference to provide a distributed, randomized code construction for multi-user wireless networks. Network coding framework simplifies the multi-user wireless network model, and allows us to describe the multi-user wireless networks in an algebraic framework. This algebraic framework provides a randomized, distributed code construction, which we show achieves capacity for multicast connections as well as a certain set of non-multicast connections. -- TCP/NC efficiently and reliably delivers data over unreliable lossy wireless networks. TCP, which was designed for reliable transmission over wired networks, often experiences severe performance degradation in wireless networks. TCP/NC combines network coding's erasure correction capabilities with TCP's congestion control mechanism and reliability. We show that TCP/NC achieves significantly higher throughput than TCP in lossy networks; therefore, TCP/NC is well suited for reliable communication in lossy wireless networks. -- Algebraic Watchdog takes advantage of the broadcast nature of wireless networks to provide a secure global self-checking network. Algebraic Watchdog allows nodes to detect malicious behaviors probabilistically, and police their neighbors locally using overheard messages. Unlike traditional detection protocols which are receiver-based, this protocol gives the senders an active role in checking the nodes downstream. We provide a trellis-based inference algorithm and protocol for detection, and analyze its performance. The main contribution of this dissertation is in providing algorithms and designs for robust wireless networks using network coding. We present how network coding can be applied to overcome the challenges of operating in wireless networks. We present both analytical and simulation results to support that network coded designs, if designed with care, can bring forth significant gains, not only in terms of throughput but also in terms of reliability, security, and robustness.
by MinJi Kim.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
20

Ray, Gavin Peter. "Computer network analysis and optimisation." Thesis, University of Plymouth, 1993. http://hdl.handle.net/10026.1/1639.

Full text
Abstract:
This thesis presents a study and analysis of the major influences on network cost and their related performance. New methods have been devised to find solutions to network optimisation problems particular to the AT&T ISTEL networks in Europe and these are presented together with examples of their successful commercial application. Network performance is seen by the user in terms of network availability and traffic delay times. The network performance is influenced by many parameters, the dominating influences typically being the number of users accessing the network, the type of traffic demands they place upon it and the particular network configuration itself. The number of possible network configurations available to a network designer is vast if the full range of currently available equipment is taken into account. The aim of this research has been to assist in the selection of the most suitable network designs for optimum performance and cost. This thesis looks at the current differing network technologies, their performance characteristics and the issues pertinent to any network design and optimisation procedures. A distinction is made between the network equipment providing user 'access' and that which constitutes the cross-country, or 'core', data transport medium. This partitioning of the problem is exploited with the analysis concentrating on each section separately. The access side of the AT&T ISTEL - UK network is used as a basis for an analysis of the general access network. The aim is to allow network providers to analyse the root cause of excessive delay problems and find where small adjustments to access configurations might lead to real performance improvements from a user point of view. A method is developed to allow statistical estimates of performance and quality of service for typical access network configurations. From this a general method for the optimisation of cost expenditure and performance improvement is proposed. The optimisation of both circuit switched and packet switched computer networks is shown to be difficult and is normally tackled by the use of complex procedures on mainframe computers. The new work carried out in this study takes a fresh look at the basic properties of networks in order to develop a new heuristic method for the design and optimisation of circuit switched core networks on a personal computer platform. A fully functional design system was developed that implements time division multiplexed core network design. The system uses both a new heuristic method for improving the quality of the designs and a new 'speed up' algorithm for reducing times to find feasible routes, thereby dramatically improving overall design times. The completed system has since been used extensively to assist in the design of commercial networks across Europe.
APA, Harvard, Vancouver, ISO, and other styles
21

Lomas, Thomas Mark Angus. "Aspects of computer network security." Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241051.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

NAVAS, JOSE RICARDO PORTILHO. "TOPICS IN COMPUTER NETWORK DESIGN." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1985. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=14053@1.

Full text
Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
TELECOMUNICAÇÕES BRASILEIRAS S/A
This work proposes a heuristic method for the analysis of closed queueing network models, called the Simplified Linearizer Algorithm (ALS), which results from a simplification of the original Linearizer Algorithm. The ALS proved attractive for the analysis of large networks (networks with more than 10 user classes) because its processing time is very small compared with that of the Linearizer Algorithm and of other algorithms in the literature. An approximate method is also defined for integrating the routing and flow control mechanisms in packet-switched computer networks. Simulation results show that this scheme makes it possible to use the network resources very efficiently.
This work presents a heuristic method called the Simplified Linearizer Algorithm for the analysis of closed queueing network models. The proposed method results from some simplifications introduced in the original Linearizer Algorithm. The Simplified Linearizer Algorithm is quite useful in the analysis of large networks (networks with more than ten user classes) due to its smaller processing time as compared with the processing time required by the Linearizer Algorithm and others described in the literature. Also, this work defines an approximate method for the integration of the routing and flow control mechanisms in packet-switching computer networks. Simulation results show that this integration scheme improves the efficiency associated with the utilization of the network resources.
APA, Harvard, Vancouver, ISO, and other styles
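
For context on the closed queueing network models analysed in the entry above: exact Mean Value Analysis (MVA) computes throughput and queue lengths recursively over the population size, and the Linearizer family (including the thesis's simplified variant) approximates the same quantities for multi-class models. The sketch below is plain single-class MVA, not the ALS algorithm itself; the service demands and think time are invented.

```python
# Exact single-class Mean Value Analysis for a closed queueing network.
def mva(service_demands, n_customers, think_time=0.0):
    """service_demands: per-station service demand D_k (visits x service time)."""
    queue = [0.0] * len(service_demands)          # Q_k(0) = 0
    throughput = 0.0
    for n in range(1, n_customers + 1):
        residence = [d * (1 + q) for d, q in zip(service_demands, queue)]
        throughput = n / (think_time + sum(residence))
        queue = [throughput * r for r in residence]
    return throughput, queue

x, q = mva([0.10, 0.05, 0.02], n_customers=20, think_time=1.0)
print(f"throughput = {x:.3f} jobs/s, queue lengths = {[round(v, 2) for v in q]}")
```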
23

Alsebae, Alaa. "Network coding for computer networking." Thesis, University of Warwick, 2014. http://wrap.warwick.ac.uk/72647/.

Full text
Abstract:
Conventional communication networks route data packets in a store-and-forward mode. A router buffers received packets and forwards them intact towards their intended destination. Network Coding (NC), however, generalises this method by allowing the router to perform algebraic operations on the packets before forwarding them. The purpose of NC is to improve the network performance to achieve its maximum capacity, also known as the max-flow min-cut bound. NC has become very well established in the field of information theory; however, practical implementations in real-world networks are yet to be explored. In this thesis, new implementations of NC are brought forward. The effect of NC on flow error control protocols and queuing over computer networks is investigated by establishing and designing a mathematical and simulation framework. One goal of such investigation is to understand how the NC technique can reduce the number of packets required to acknowledge the reception of those sent over the network while error-control schemes are employed. Another goal is to control the network queuing stability by reducing the number of packets required to convey a set of information. A custom-built simulator based on SimEvents® has been developed in order to model several scenarios within this approach. The work in this thesis is divided into two key parts. The objective of the first part is to study the performance of communication networks employing error control protocols when NC is adopted. In particular, two main Automatic Repeat reQuest (ARQ) schemes are invoked, namely the Stop-and-Wait (SW) and Selective Repeat (SR) ARQ. Results show that in unicast point-to-point communication, the proposed NC scheme offers an increase in the throughput over traditional SW ARQ between 2.5% and 50.5% at each link, with negligible decoding delay. Additionally, in a Butterfly network, SR ARQ employing NC achieves a throughput gain between 22% and 44% over traditional SR ARQ when the number of incoming links to the intermediate node varies between 2 and 5. Moreover, in an extended Butterfly network, NC offered a throughput increase of up to 48% under an error-free scenario and 50% in the presence of errors. Despite the extensive research on synchronous NC performance in various fields, little has been said about its queuing behaviour. One assumption is that packets are served following a Poisson distribution. The packets from different streams are coded prior to being served and then exit through only one stream. This study determines the arrival distribution that coded packets follow at the serving node. In general, this leads to studying general queuing systems of type G/M/1. Hence, the objective of the second part of this study is twofold. The study aims to determine the distribution of the coded packets and estimate the waiting time faced by coded packets before they are completely served. Results show that NC brings a new solution for queuing stability, as evidenced by the small waiting time the coded packets spend in the intermediate node queue before being served. This work is further enhanced by studying the server utilization in traditional routing and NC scenarios. NC-based M/M/1 with finite capacity K is also analysed to investigate packet loss probability for both scenarios. Based on the results achieved, the utilization of NC in error-prone and long propagation delay networks is recommended. Additionally, since the work provides an insightful prediction of particular networks' queuing behaviour, employing synchronous NC can bring a solution for systems' stability with packet-controlled sources and limited input buffers.
APA, Harvard, Vancouver, ISO, and other styles
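
The Butterfly scenarios in the abstract above rest on the basic network coding trick of forwarding an XOR of two packets over the shared bottleneck so that each sink can recover the packet it is missing. A minimal, purely illustrative sketch (packet contents are made up):

```python
# Butterfly-style XOR coding at the intermediate node.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"packet-A", b"packet-B"
coded = xor_bytes(p1, p2)          # sent once over the shared bottleneck link

sink1 = xor_bytes(coded, p1)       # sink 1 already has p1, recovers p2
sink2 = xor_bytes(coded, p2)       # sink 2 already has p2, recovers p1
assert (sink1, sink2) == (p2, p1)
print(sink1, sink2)
```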
24

Baronikian, Haig. "Modelling and analysis of a computer conferencing system." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63756.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Skaria, Sherin, and Fazely Hamedani Amir Reza. "Network Security Issues, Tools for Testing Security in Computer Network and Development Solution for Improving Security in Computer Network." Thesis, Halmstad University, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-4396.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Nguyen, Thanh Vinh. "Content distribution networks over shared infrastructure a paradigm for future content network deployment /." Access electronically, 2005. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20060509.094632/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Irwin, Barry Vivian William. "A framework for the application of network telescope sensors in a global IP network." Thesis, Rhodes University, 2011. http://hdl.handle.net/10962/d1004835.

Full text
Abstract:
The use of Network Telescope systems has become increasingly popular amongst security researchers in recent years. This study provides a framework for the utilisation of this data. The research is based on a primary dataset of 40 million events spanning 50 months collected using a small (/24) passive network telescope located in African IP space. This research presents a number of differing ways in which the data can be analysed ranging from low level protocol based analysis to higher level analysis at the geopolitical and network topology level. Anomalous traffic and illustrative anecdotes are explored in detail and highlighted. A discussion relating to bogon traffic observed is also presented. Two novel visualisation tools are presented, which were developed to aid in the analysis of large network telescope datasets. The first is a three-dimensional visualisation tool which allows for live, near-realtime analysis, and the second is a two-dimensional fractal based plotting scheme which allows for plots of the entire IPv4 address space to be produced, and manipulated. Using the techniques and tools developed for the analysis of this dataset, a detailed analysis of traffic recorded as destined for port 445/tcp is presented. This includes the evaluation of traffic surrounding the outbreak of the Conficker worm in November 2008. A number of metrics relating to the description and quantification of network telescope configuration and the resultant traffic captures are described, the use of which it is hoped will facilitate greater and easier collaboration among researchers utilising this network security technology. The research concludes with suggestions relating to other applications of the data and intelligence that can be extracted from network telescopes, and their use as part of an organisation’s integrated network security systems
APA, Harvard, Vancouver, ISO, and other styles
28

Cziva, Richard. "Towards lightweight, low-latency network function virtualisation at the network edge." Thesis, University of Glasgow, 2018. http://theses.gla.ac.uk/30758/.

Full text
Abstract:
Communication networks are witnessing a dramatic growth in the number of connected mobile devices, sensors and the Internet of Everything (IoE) equipment, which have been estimated to exceed 50 billion by 2020, generating zettabytes of traffic each year. In addition, networks are stressed to serve the increased capabilities of the mobile devices (e.g., HD cameras) and to fulfil the users' desire for always-on, multimedia-oriented, and low-latency connectivity. To cope with these challenges, service providers are exploiting softwarised, cost-effective, and flexible service provisioning, known as Network Function Virtualisation (NFV). At the same time, future networks are aiming to push services to the edge of the network, into close physical proximity to the users, which has the potential to reduce end-to-end latency, while increasing the flexibility and agility of allocating resources. However, the heavy footprint of today's NFV platforms and their lack of dynamic, latency-optimal orchestration prevent them from being used at the edge of the network. In this thesis, the opportunities of bringing NFV to the network edge are identified. As a concrete solution, the thesis presents Glasgow Network Functions (GNF), a container-based NFV framework that allocates and dynamically orchestrates lightweight virtual network functions (vNFs) at the edge of the network, providing low-latency network services (e.g., security functions or content caches) to users. The thesis presents a powerful formalisation for the latency-optimal placement of edge vNFs and provides an exact solution using Integer Linear Programming, along with a placement scheduler that relies on Optimal Stopping Theory to efficiently re-calculate the placement following roaming users and temporal changes in latency characteristics. The results of this work demonstrate that GNF's real-world vNF examples can be created and hosted on a variety of hosting devices, including VMs from public clouds and low-cost edge devices typically found at the customer's premises. The results also show that GNF can carefully manage the placement of vNFs to provide low-latency guarantees, while minimising the number of vNF migrations required by the operators to keep the placement latency-optimal.
APA, Harvard, Vancouver, ISO, and other styles
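
A latency-optimal placement problem of the kind the abstract above formulates as an Integer Linear Program can, in textbook form, be written as follows; this generic formulation is given only for orientation and is not the exact model used in the thesis. With binary variables \(x_{u,e}\) placing user \(u\)'s vNF on edge host \(e\), user-to-host latency \(l_{u,e}\), demand \(d_u\) and host capacity \(C_e\):

\[
\min \sum_{u}\sum_{e} l_{u,e}\, x_{u,e}
\quad \text{s.t.} \quad
\sum_{e} x_{u,e} = 1 \;\;\forall u,
\qquad
\sum_{u} d_u\, x_{u,e} \le C_e \;\;\forall e,
\qquad
x_{u,e} \in \{0,1\}.
\]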
29

Alyanbaawi, Ashraf. "DESIGN OF EFFICIENT MULTICAST ROUTING PROTOCOLS FOR COMPUTER NETWORKS." OpenSIUC, 2020. https://opensiuc.lib.siu.edu/dissertations/1775.

Full text
Abstract:
Multicasting can be done in two different ways: the source-based tree approach and the shared tree approach. The shared tree approach is preferred over the source-based tree approach because in the latter the construction of a minimum cost tree per source is needed, unlike a single shared tree in the former approach. However, in the shared tree approach a single core needs to handle the entire traffic load, resulting in degraded multicast performance. Besides, it also suffers from 'single point failure'. Multicast is a communication between one or multiple senders and multiple receivers, which is used as a way of sending IP datagrams to a group of interested receivers in one transmission. Core-based trees' major concerns are core selection and the core as a single point of failure. The problem of core selection is to choose the best core or cores in the network to improve the network performance. In this dissertation we propose: 1) a multiple core selection approach for core-based tree multicasting, in which senders can select different cores to achieve efficient load-balanced multicore multicasting; it will overcome any core failure as well. 2) Novel and efficient schemes for load-shared multicore multicasting are presented. Multiple cores are selected statically, that is, independently of any existing multicast groups, and the selection process is also independent of any underlying unicast protocol. Some of the selected cores can be used for fault-tolerance purposes as well, to guard against any possible core failures. 3) We have presented two novel and efficient schemes for group-based load-shared multicore multicasting in which members of a multicast group use the same core tree for their multicasting. 4) We have also presented two schemes that aim at achieving low-latency multicasting along with load sharing for delay-sensitive multicast applications. Besides, we have presented a unique approach for core migration, which uses two very important parameters, namely, the depth of a core tree and the pseudo diameter of a core. One noteworthy point from the viewpoint of fault tolerance is that the degree of fault-tolerance can be enhanced from covering single-point failure to any number of core failures.
APA, Harvard, Vancouver, ISO, and other styles
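
One simple way to frame the core-selection problem described above is to rank candidate routers by their worst-case hop distance to the group members and keep the best few, so that senders can be spread over several cores. The sketch below is an illustrative heuristic under that assumption, not one of the dissertation's actual schemes.

```python
# Rank candidate cores by worst-case hop distance to the multicast members.
from collections import deque

def hop_distances(graph, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        node = q.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                q.append(nxt)
    return dist

def select_cores(graph, members, k=2):
    def eccentricity(candidate):
        d = hop_distances(graph, candidate)
        return max(d.get(m, float("inf")) for m in members)
    return sorted(graph, key=eccentricity)[:k]

g = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
print(select_cores(g, members=[1, 5], k=2))   # two well-placed candidate cores
```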
30

Smeda, Adel Abdullah. "Application of the Hopfield neural network in routing for computer networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ39701.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Holtje, Carl. "Security in serverless network environments /." Link to online version, 2004. https://ritdml.rit.edu/dspace/handle/1850/439.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Lee, Anna H. "Simplified random network codes for multicast networks." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33306.

Full text
Abstract:
Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 43).
Network coding is a method of data transmission across a network which involves coding at intermediate nodes. Network coding is particularly attractive for multicast. Building on the work done on random linear network codes, we develop a constrained, simplified code construction suitable for multicast in wireless networks. We analyze bounds on sufficient code size and code success probability via an algebraic framework for network coding. We also present simulation results that compare generalized random network codes with our code construction. Issues unique to the simplified code are explored and a relaxation of the code to improve code performance is discussed.
by Anna H. Lee.
M.Eng. and S.B.
APA, Harvard, Vancouver, ISO, and other styles
33

Shi, Xiaomeng Ph D. Massachusetts Institute of Technology. "Energy aware network coding in wireless networks." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/78533.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 97-104).
Energy is one of the most important considerations in designing reliable low-power wireless communication networks. We focus on the problem of energy aware network coding. In particular, we investigate practical energy efficient network code design for wireless body area networks (WBAN). We first consider converge-cast in a star-shaped topology, in which a central base station (BS), or hub, manages and communicates directly with a set of nodes. We then consider a wireless-relay channel, in which a relay node assists in the transmission of data from a source to a destination. This wireless relay channel can be seen as a simplified extended star network, where nodes have relay capabilities. The objective is to investigate the use of network coding in these scenarios, with the goal of achieving reliability under low-energy and lower-power constraints. More specifically, in a star network, we propose a simple network layer protocol, study the mean energy to complete uploads of given packets from the nodes to the BS using a Markov chain model, and show through numerical examples that when reception energy is taken into account, the incorporation of network coding offers reductions in energy use. The amount of achievable gains depends on the number of nodes in the network, the degree of asymmetry in channel conditions experienced by different nodes, and the relative difference between transmitting and receiving power at the nodes. We also demonstrate the compatibility of the proposed scheme with the IEEE 802.15.6 WBAN standard by describing ways of incorporating network coding into systems compliant to the standard. For a wireless relay channel, we explore the strategic use of network coding according to both throughput and energy metrics. In the relay channel, a single source communicates to a single sink through the aid of a half-duplex relay. The fluid flow model is used to describe the case where both the source and the relay are coding, and Markov chain models are proposed to describe packet evolution if only the source or only the relay is coding. Although we do not attempt to explicitly categorize the optimal network coding strategies in the relay channel under different system parameters, we provide a framework for deciding whether and where to code, taking into account of throughput maximization and energy depletion constraints.
by Xiaomeng Shi.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
34

You, Bo. "Hub-Network for Distance Computation in Large Social Networks." Kent State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=kent1412601464.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Wells, Daniel David. "Network management for community networks." Thesis, Rhodes University, 2010. http://hdl.handle.net/10962/d1006587.

Full text
Abstract:
Community networks (in South Africa and Africa) are often serviced by limited bandwidth network backhauls. Relative to the basic needs of the community, this is an expensive ongoing concern. In many cases the Internet connection is shared among multiple sites. Community networks may also have a lack of technical personnel to maintain a network of this nature. Hence, there is a demand for a system which will monitor and manage bandwidth use, as well as network use. The proposed solution for community networks and the focus within this dissertation, is a system of two parts. A Community Access Point (CAP) is located at each site within the community network. This provides the hosts and servers at that site with access to services on the community network and the Internet, it is the site's router. The CAP provides a web based interface (CAPgui) which allows configuration of the device and viewing of simple monitoring statistics. The Access Concentrator (AC) is the default router for the CAPs and the gateway to the Internet. It provides authenticated and encrypted communication between the network sites. The AC performs several monitoring functions, both for the individual sites and for the upstream Internet connection. The AC provides a means for centrally managing and effectively allocating Internet bandwidth by using the web based interface (ACgui). Bandwidth use can be allocated per user, per host and per site. The system is maintainable, extendable and customisable for different network architectures. The system was deployed successfully to two community networks. The Centre of Excellence (CoE) testbed network is a peri-urban network deployment whereas the Siyakhula Living Lab (SLL) network is a rural deployment. The results gathered conclude that the project was successful as the deployed system is more robust and more manageable than the previous systems.
APA, Harvard, Vancouver, ISO, and other styles
36

Kulkarni, Shrinivas Bhalachandra. "The simulation studies on a behaviour based trust routing protocol for ad hoc networks." Diss., Online access via UMI:, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
37

Chaltseva, Anna. "Network state estimation in wireless multi-hop networks." Licentiate thesis, Luleå tekniska universitet, Datavetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-26357.

Full text
Abstract:
Multi-hop wireless networks in general and those built upon IEEE 802.11 standard in particular are known for their highly dynamic and unstable performance. The commonly accepted way for improving the situation is to jointly optimize the performance of protocols across different communications layers. Being able to characterize a state of the network is essential to enable the cross-layer optimization. This licentiate thesis investigates methods for passive characterization of network state at medium access control and transport layers based on information accessible from the corresponding layers below. Firstly, the thesis investigates a possibility for characterizing traffic intensity relying solely on the statistics of measurements from the physical layer. An advantage of this method is that it does not require decoding of the captured packets, by this accounting for the effect from long-range interferences introduced by transmissions at the border of the communication range of a receiver. Secondly, a question of predicting TCP throughput over a multi-hop wireless path is addressed. The proposed predictor is a practically usable function of statistically significant parameters at transport, medium access control and physical communication layers. The presented model is able to predict the TCP throughput with 99% accuracy, which provides an essential input for various cross-layer optimization processes. Finally, during the course of the experimental work the issues of accuracy of simulation-based modeling of communication processes were investigated. The thesis is concluded by presenting a comparative study of the performance characteristics measured in a single channel multi-hop wireless network test-bed and the corresponding measurements obtained from popular network simulators ns-2 and ns-3 when configured with identical settings. The thesis presents the evaluation of the mismatch between the results obtained in the test-bed and the simulators with their standard empirical radio models.
Approved; 2012; 20111215 (anncha); LICENTIATE SEMINAR Subject area: Computer Networking Examiner: Docent Evgeny Osipov, Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology Discussant: Senior Lecturer Olov Schelén, Luleå University of Technology and Xarepo AB, Luleå Time: Monday 20 February 2012, 10:00 Venue: A1514 Demostudion, Luleå University of Technology
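As an illustrative aside (not drawn from the thesis itself), a cross-layer throughput predictor of the kind described in the abstract above could be fitted as a simple least-squares model over lower-layer measurements; the feature names and sample values below are hypothetical placeholders, not the thesis's actual model.

```python
# Minimal sketch of fitting a cross-layer TCP throughput predictor.
# Feature names and sample values are hypothetical, not the thesis's actual model.
import numpy as np

# Columns: mean SNR (dB), MAC busy fraction, hop count  -- assumed example features
X = np.array([
    [22.0, 0.35, 2],
    [18.5, 0.50, 3],
    [25.0, 0.20, 1],
    [15.0, 0.65, 4],
])
y = np.array([3.1, 1.8, 5.4, 0.9])  # measured TCP throughput in Mbit/s (made up)

# Append an intercept column and solve the least-squares problem.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(snr_db: float, busy_fraction: float, hops: int) -> float:
    return float(np.array([snr_db, busy_fraction, hops, 1.0]) @ coeffs)

print(f"predicted throughput: {predict(20.0, 0.4, 2):.2f} Mbit/s")
```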
APA, Harvard, Vancouver, ISO, and other styles
38

Ali, Dana, and Goran Kap. "Statistical Analysis of Computer Network Security." Thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-129948.

Full text
Abstract:
In this thesis it is shown how to measure the annual loss expectancy of computer networks due to the risk of cyber attacks. With the development of metrics for measuring the exploitation difficulty of identified software vulnerabilities, it is possible to make a measurement of the annual loss expectancy for computer networks using Bayesian networks. To enable the computations, computer network vulnerability data in the form of vulnerability model descriptions, vulnerable data connectivity relations and intrusion detection system measurements are transformed into vector based numerical form. This data is then used to generate a probabilistic attack graph, which is a Bayesian network of an attack graph. The probabilistic attack graph forms the basis for computing the annualized loss expectancy of a computer network. Further, it is shown how to compute an optimized order of vulnerability patching to mitigate the annual loss expectancy. An example of computation of the annual loss expectancy is provided for a small invented example network.
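As a loosely related illustration (not taken from the thesis), an annual loss expectancy over a tiny attack graph can be sketched by combining per-step exploit probabilities with per-asset loss figures; all asset names, probabilities and losses below are invented.

```python
# Minimal sketch: annual loss expectancy (ALE) over a tiny, invented attack chain.
# Exploit probabilities and loss figures are made-up illustration values.

# Each asset is reached through a chain of exploit steps; assume independence,
# so the compromise probability is the product of the step success probabilities.
attack_paths = {
    "web_server": [0.6],            # single exploit step
    "database":   [0.6, 0.3],       # reached via the web server, then a second exploit
    "file_share": [0.6, 0.3, 0.5],
}
annual_loss_if_compromised = {"web_server": 10_000, "database": 80_000, "file_share": 25_000}

def compromise_probability(steps: list[float]) -> float:
    p = 1.0
    for step in steps:
        p *= step
    return p

def annual_loss_expectancy() -> float:
    return sum(compromise_probability(attack_paths[a]) * annual_loss_if_compromised[a]
               for a in attack_paths)

print(f"ALE: {annual_loss_expectancy():,.0f} per year")
```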
APA, Harvard, Vancouver, ISO, and other styles
39

Montgomery, Michael Charles. "Managing complexity in large-scale networks via flow and network aggregation." Digital version accessible at http://wwwlib.umi.com/cr/utexas/main, 1998.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Dahal, Subharoj. "Implementing a redundant solution to improve network system reliability at XYZ Hospital." Online version, 2009. http://www.uwstout.edu/lib/thesis/2009/2009dahals.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Zhao, Xiaogeng. "An adaptive approach for optimized opportunistic routing over Delay Tolerant Mobile Ad hoc Networks." Thesis, Rhodes University, 2008. http://hdl.handle.net/10962/d1004822.

Full text
Abstract:
This thesis presents a framework for investigating opportunistic routing in Delay Tolerant Mobile Ad hoc Networks (DTMANETs), and introduces the concept of an Opportunistic Confidence Index (OCI). The OCI enables multiple opportunistic routing protocols to be applied as an adaptive group to improve DTMANET routing reliability, performance, and efficiency. The DTMANET is a recently acknowledged network architecture, which is designed to address the challenging and marginal environments created by adaptive, mobile, and unreliable network node presence. Because of its ad hoc and autonomic nature, routing in a DTMANET is a very challenging problem. The design of routing protocols in such environments, which ensure a high percentage delivery rate (reliability), achieve a reasonable delivery time (performance), and at the same time maintain an acceptable communication overhead (efficiency), is of fundamental consequence to the usefulness of DTMANETs. In recent years, a number of investigations into DTMANET routing have been conducted, resulting in the emergence of a class of routing known as opportunistic routing protocols. Current research into opportunistic routing has exposed opportunities for positive impacts on DTMANET routing. To date, most investigations have concentrated upon one or other of the quality metrics of reliability, performance, or efficiency, while some approaches have pursued a balance of these metrics through assumptions of a high level of global knowledge and/or uniform mobile device behaviours. No prior research that we are aware of has studied the connection between multiple opportunistic elements and their influences upon one another, and none has demonstrated the possibility of modelling and using multiple different opportunistic elements as an adaptive group to aid the routing process in a DTMANET. This thesis investigates OCI opportunities and their viability through the design of an extensible simulation environment, which makes use of methods and techniques such as abstract modelling, opportunistic element simplification and isolation, random attribute generation and assignment, localized knowledge sharing, automated scenario generation, intelligent weight assignment and/or opportunistic element permutation. These methods and techniques are incorporated at both data acquisition and analysis phases. Our results show a significant improvement in all three metric categories. In one of the most applicable scenarios tested, OCI yielded a 31.05% message delivery increase (reliability improvement), 22.18% message delivery time reduction (performance improvement), and 73.64% routing depth decrement (efficiency improvement). We are able to conclude that the OCI approach is feasible across a range of scenarios, and that the use of multiple opportunistic elements to aid decision-making processes in DTMANET environments has value.
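As an illustrative aside (the actual OCI formulation in the thesis is richer than this), combining several opportunistic elements into a single confidence score for ranking forwarding candidates can be sketched as a weighted sum; the element names, weights and values below are hypothetical.

```python
# Minimal sketch: ranking next-hop candidates by a weighted "confidence" score
# built from several opportunistic elements. Element names, weights and values
# are hypothetical and not the thesis's actual OCI definition.

ELEMENT_WEIGHTS = {       # assumed relative importance of each element
    "contact_frequency": 0.4,
    "link_quality": 0.35,
    "free_buffer": 0.25,
}

def confidence_index(elements: dict[str, float]) -> float:
    """All element values are assumed to be normalised to [0, 1]."""
    return sum(ELEMENT_WEIGHTS[name] * value for name, value in elements.items())

candidates = {
    "node_a": {"contact_frequency": 0.9, "link_quality": 0.5, "free_buffer": 0.7},
    "node_b": {"contact_frequency": 0.4, "link_quality": 0.9, "free_buffer": 0.9},
}

best = max(candidates, key=lambda n: confidence_index(candidates[n]))
print(f"forward via {best} (score {confidence_index(candidates[best]):.2f})")
```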
APA, Harvard, Vancouver, ISO, and other styles
42

Zhou, Yuanyuan. "Energy-efficient protocols and topologies for sensor and personal-area networks." Online access for everyone, 2007. http://www.dissertations.wsu.edu/Dissertations/Summer2007/y_zhou_072307.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Malmgren, Göran. "Network Planning of Single Frequency Broadcasting Networks." Licentiate thesis, KTH, Signaler, sensorer och system, 1996. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-28559.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ahmed, Ebad. "Delay gains from network coding in wireless networks." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41253.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 73-77).
We consider a single-hop cellular wireless system with a single source (base station) broadcasting a stream of incoming files to multiple receivers over stochastic time-varying channels with non-zero erasure probabilities. The base station charges a price per receiver per file with the aim of maximizing its profit. Customers who wish to transmit files to the receivers decide to enter the system based on the price, the queuing delay, and the utility derived from the transaction. We look at network coding and scheduling as possible strategies for file transmission, and obtain approximate characterizations of the optimal customer admission rate, optimal price and the optimal base-station profit as functions of the first and second moments of the service time processes under mild assumptions. We show that network coding leads to significant gains in the base station profits as compared to scheduling, and also demonstrate that the optimal network coding window size is highly insensitive to the number of receivers, which suggests that pricing and coding decisions can be decoupled. We also investigate the behavior of network coding in the case where the number of receivers is sufficiently large, and derive scaling laws for the asymptotic gains from network coding. We subsequently propose a way to extend our analysis of single-source, multiple-receiver systems to multiple-source, multiple-receiver systems in general network topologies and obtain explicit characterizations of the file download completion time under network coding and scheduling, also taking into account the effects of collisions and interference among concurrent packet transmissions by two or more sources.
(cont.) Our formulation allows us to model multi-hop networks as a series of single-hop multiple-source, multiple-receiver systems, which provides a great deal of insight into the workings of larger and denser multi-hop networks such as overlay networks and peer-to-peer systems, and appears to be a promising application of network coding in such networks in the future.
by Ebad Ahmed.
M.Eng.
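As an unofficial illustration of the comparison described in the abstract above (not the thesis's model), a small Monte Carlo sketch can contrast broadcast completion times with and without coding over independent erasure channels; the parameters and the particular scheduling policy are arbitrary assumptions.

```python
# Minimal Monte Carlo sketch: slots needed to deliver k packets to R receivers
# over independent erasure channels, with random linear coding vs. per-packet
# round-robin scheduling. Parameters are arbitrary illustration values.
import random

def coded_completion(k: int, receivers: int, erasure_p: float) -> int:
    """With ideal random linear coding, any k received coded packets suffice."""
    received = [0] * receivers
    slots = 0
    while min(received) < k:
        slots += 1
        for r in range(receivers):
            if received[r] < k and random.random() > erasure_p:
                received[r] += 1
    return slots

def scheduled_completion(k: int, receivers: int, erasure_p: float) -> int:
    """Uncoded policy: repeat packet i until every receiver has it, then move on."""
    slots = 0
    for _ in range(k):
        missing = set(range(receivers))
        while missing:
            slots += 1
            missing = {r for r in missing if random.random() <= erasure_p}
    return slots

def average(fn, trials=2000, **kw) -> float:
    return sum(fn(**kw) for _ in range(trials)) / trials

params = dict(k=10, receivers=5, erasure_p=0.2)
print("coded:    ", average(coded_completion, **params))
print("scheduled:", average(scheduled_completion, **params))
```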
APA, Harvard, Vancouver, ISO, and other styles
45

Kulakov, Anton. "Multiprocessing neural network simulator." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/348420/.

Full text
Abstract:
Over the last few years tremendous progress has been made in neuroscience by employing simulation tools for investigating neural network behaviour. Many simulators have been created during the last few decades, and their number and set of features continually grows due to persistent interest from groups of researchers and engineers. Simulation software that is able to simulate a large-scale neural network has been developed and is presented in this work. Based on a highly abstract integrate-and-fire neuron model, a clock-driven sequential simulator has been developed in C++. The created program is able to associate input patterns with output patterns. The novel biologically plausible learning mechanism uses Long Term Potentiation and Long Term Depression to change the strength of the connections between the neurons based on a global binary feedback. Later, the sequentially executed model was extended to a multi-processor system, which executes the described learning algorithm using the event-driven technique on a parallel distributed framework, simulating a neural network asynchronously. This allows the simulation to manage larger-scale neural networks while remaining immune to processor failure and communication problems. The main benefit of the resulting multi-processor neural network simulator is the ability to simulate large-scale neural networks using highly parallel distributed computing. For that reason, the design of the simulator incorporates an efficient weight-adjusting algorithm and an efficient mechanism for asynchronous local communication between processors.
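As a rough illustration only (the thesis simulator itself is in C++ and far more elaborate), the core ideas described above, a clock-driven integrate-and-fire update plus a reward-gated LTP/LTD weight change, can be sketched as follows; all constants are arbitrary.

```python
# Minimal sketch of a clock-driven integrate-and-fire neuron with a reward-gated
# LTP/LTD weight update. Constants are arbitrary; this is not the thesis code.

THRESHOLD = 1.0
LEAK = 0.9            # membrane potential decay per time step
LEARNING_RATE = 0.05

def step(potential: float, weights: list[float], inputs: list[int]) -> tuple[float, bool]:
    """One clock tick: leak, integrate weighted inputs, fire if threshold crossed."""
    potential = potential * LEAK + sum(w * x for w, x in zip(weights, inputs))
    if potential >= THRESHOLD:
        return 0.0, True   # reset after a spike
    return potential, False

def apply_feedback(weights: list[float], inputs: list[int], reward: bool) -> list[float]:
    """Global binary feedback: strengthen active synapses on reward (LTP),
    weaken them otherwise (LTD)."""
    delta = LEARNING_RATE if reward else -LEARNING_RATE
    return [max(0.0, w + delta * x) for w, x in zip(weights, inputs)]

weights = [0.3, 0.4, 0.2]
potential = 0.0
inputs = [1, 0, 1]
potential, fired = step(potential, weights, inputs)
weights = apply_feedback(weights, inputs, reward=fired)  # toy feedback signal
print(fired, weights)
```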
APA, Harvard, Vancouver, ISO, and other styles
46

Schuhart, Russell G. "Hacking social networks: examining the viability of using computer network attack against social networks." Thesis, Monterey, Calif. : Naval Postgraduate School, 2007. http://bosun.nps.edu/uhtbin/hyperion.exe/07Mar%5FSchuhart.pdf.

Full text
Abstract:
Thesis (M.S. in Information Systems and Operations)--Naval Postgraduate School, March 2007.
Thesis Advisor(s): David Tucker. "March 2007." Includes bibliographical references (p. 55-56). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
47

Al-Amoudi, Ahmed. "Evaluation of virtual routing appliances as routers in a virtual environment /." Online version of thesis, 2008. http://hdl.handle.net/1850/7544.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Fan, Bo, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW. "Sharing network measurements on peer-to-peer networks." Publisher: University of New South Wales, Electrical Engineering & Telecommunications, 2007. http://handle.unsw.edu.au/1959.4/40762.

Full text
Abstract:
With the extremely rapid development of the Internet in recent years, emerging peer-to-peer network overlays are meeting the requirements of a more sophisticated communications environment, providing a useful substrate for applications such as scalable file sharing, data storage, large-scale multicast, web caching, and publish-subscribe services. Due to its design flexibility, a peer-to-peer network can offer features including self-organization, fault-tolerance, scalability, load-balancing, locality and anonymity. As the Internet grows, there is an urgent requirement to understand real-time network performance degradation. Measurement tools currently used are ping, traceroute and variations of these. SNMP (Simple Network Management Protocol) is also used by network administrators to monitor local networks. However, ping and traceroute can only be used temporarily, SNMP can only be deployed at certain points in networks, and these tools are incapable of sharing network measurements among end-users. Due to the distributed nature of networking performance data, peer-to-peer overlay networks present an attractive platform for distributing this information among Internet users. This thesis aims to investigate the desirable locality property of peer-to-peer overlays in order to create an application for sharing Internet performance measurements. When measurement data are distributed amongst users, they need to be localized in the network, allowing users to retrieve them when external Internet links fail. Thus, network locality and robustness are the most desirable properties. Although some unstructured overlays also integrate locality into their design, they fail to reach rarely located data items. Consequently, structured overlays were chosen because they can locate a rare data item deterministically and can perform well during network failures. Among structured peer-to-peer overlays, Tapestry, Pastry and Chord with proximity neighbour selection were studied due to their explicit notion of locality. To differentiate the level of locality and resiliency in these protocols, P2Psim simulations were performed. The results show that Tapestry is the more suitable peer-to-peer substrate on which to build such an application due to its superior data-localizing performance. Furthermore, due to the routing similarity between Tapestry and Pastry, an implementation that shares network measurement information was developed on FreePastry, verifying the feasibility of the application. This project also contributes to the extension of P2Psim to integrate with GT-ITM and link failures.
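As a simplified illustration (not code from the thesis), the prefix-matching routing used by Tapestry- and Pastry-style overlays can be sketched as follows: each hop forwards to a node sharing a longer prefix of the key; the node IDs and key are invented, and real systems use routing tables and proximity metrics rather than a global node list.

```python
# Minimal sketch of prefix-matching overlay routing (Tapestry/Pastry style).
# Node IDs and the key are invented hex strings; real systems use routing
# tables and proximity metrics rather than a global node list.

def shared_prefix_len(a: str, b: str) -> int:
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def route(key: str, current: str, nodes: list[str]) -> list[str]:
    """Hop to nodes matching ever-longer prefixes of the key,
    resolving roughly one more digit per hop."""
    path = [current]
    while current != key:
        better = [n for n in nodes
                  if shared_prefix_len(n, key) > shared_prefix_len(current, key)]
        if not better:
            break  # no closer node known; a real overlay delivers to the numerically closest node
        current = min(better, key=lambda n: shared_prefix_len(n, key))
        path.append(current)
    return path

nodes = ["a1f0", "a1c3", "a9b2", "73de", "a1f7"]
print(route(key="a1f7", current="73de", nodes=nodes))
```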
APA, Harvard, Vancouver, ISO, and other styles
49

Shafie, Sharil Idzwan. "Active modules of bipartite metabolic network." Thesis, University of Birmingham, 2018. http://etheses.bham.ac.uk//id/eprint/8635/.

Full text
Abstract:
The thesis investigates the problem of identifying active modules of a bipartite metabolic network. We devise a method of motif projection and extract clusters from active modules based on the concentration of active motifs in the network. Our results reveal the existence of a hierarchical structure. We model the regulation of metabolism as an interaction between a metabolic network and a gene regulatory network in the form of an interconnected network. We devise two module detection algorithms for interconnected networks to evaluate the molecular changes of activity that are associated with cellular responses. The first module detection algorithm is formulated based on an information map of random walks and is capable of inferring modules based on the topology and activity of nodes. The proposed algorithm has a faster execution time and produces performance comparable to previous work. The second algorithm takes into account strong regulatory activities in the gene regulatory layer to support the active regions in the metabolic layer. The integration of gene information allows the formation of large modules with better recall. In conclusion, our findings indicate the importance of no longer modelling complex biological systems as a single network, but of viewing them as a flow of information across multiple molecular spaces.
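As a generic illustration only (the thesis's motif projection is a more specific construction), a bipartite metabolic network can be projected onto its metabolite side by connecting two metabolites whenever they participate in the same reaction; the reactions and metabolites below are invented.

```python
# Minimal sketch: projecting a bipartite (reaction-metabolite) network onto the
# metabolite side. Reaction and metabolite names are invented examples; the
# thesis's motif projection is a more specific construction.
from itertools import combinations
from collections import defaultdict

# Bipartite structure: each reaction touches a set of metabolites.
reactions = {
    "R1": {"glucose", "g6p"},
    "R2": {"g6p", "f6p"},
    "R3": {"f6p", "fbp", "atp"},
}

def project_onto_metabolites(reactions: dict[str, set[str]]) -> dict[frozenset, int]:
    """Count, for each metabolite pair, how many reactions link them."""
    edges = defaultdict(int)
    for members in reactions.values():
        for a, b in combinations(sorted(members), 2):
            edges[frozenset((a, b))] += 1
    return dict(edges)

for pair, weight in project_onto_metabolites(reactions).items():
    print(sorted(pair), weight)
```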
APA, Harvard, Vancouver, ISO, and other styles
50

Umeh, Njideka Adaku. "Security architecture methodology for large net-centric systems." Diss., Rolla, Mo. : University of Missouri-Rolla, 2007. http://scholarsmine.mst.edu/thesis/Umeh_09007dcc8049b3f0.pdf.

Full text
Abstract:
Thesis (M.S.)--University of Missouri--Rolla, 2007.
Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed December 6, 2007) Includes bibliographical references (p. 60-63).
APA, Harvard, Vancouver, ISO, and other styles
