Dissertations / Theses on the topic 'Network architectures'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Network architectures.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Seah, Peng Leong, and Chung Wai Kong. "Architectures for device aware network /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Mar%5FSeah.pdf.

2

Chung, Wai Kong. "Architectures for device aware network." Thesis, Monterey, California. Naval Postgraduate School, 2005. http://hdl.handle.net/10945/2306.

Abstract:
In today's heterogeneous computing environment, a wide variety of computing devices with varying capabilities need to access information in the network. Existing networks are not able to differentiate between device capabilities, and indiscriminately send information to the end-devices without regard to their ability to use it. The goal of a device-aware network (DAN) is to match the capability of the end-devices to the information delivered, thereby optimizing network resource usage. In the battlefield, all resources - including time, network bandwidth and battery capacity - are very limited. A device-aware network avoids the waste that happens in current, device-ignorant networks. By eliminating unusable traffic, a device-aware network reduces the time the end-devices spend receiving extraneous information, and thus saves time and conserves battery life. In this thesis, we evaluated two potential DAN architectures, Proxy-based and Router-based approaches, based on the key requirements we identified. To demonstrate the viability of DAN, we built a prototype using a hybrid of the two architectures. The key elements of our prototype include a DAN browser, a DAN Lookup Server and a DAN Processing Unit (DPU). We have demonstrated how our architecture can enhance the overall network utility by ensuring that only appropriate content is delivered to the end-devices.
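The matching idea described above (a Lookup Server holding device capabilities, and a processing unit stripping content the device cannot use) can be illustrated with a minimal sketch. The device profiles, content types and function names below are hypothetical illustrations, not the thesis's actual DAN components:

```python
# Sketch of the device-aware idea: a capability lookup decides which content
# is worth sending to a given end-device. Profiles are hypothetical.

CAPABILITIES = {                      # stand-in for a DAN Lookup Server table
    "phone":  {"text", "image"},
    "laptop": {"text", "image", "video"},
}

def filter_for_device(device, items):
    """Drop content the device cannot use (the processing unit's role)."""
    allowed = CAPABILITIES.get(device, {"text"})   # unknown devices: text only
    return [kind for kind, _size in items if kind in allowed]

traffic = [("text", 1), ("video", 900), ("image", 120)]
print(filter_for_device("phone", traffic))   # -> ['text', 'image']
```

Dropping the 900-unit video stream before it reaches the phone is exactly the bandwidth and battery saving the abstract argues for.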
3

Newton, Todd A., Myron L. Moodie, Ryan J. Thibodeaux, and Maria S. Araujo. "Network System Integration: Migrating Legacy Systems into Network-Based Architectures." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/604308.

Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
The direction of future data acquisition systems is rapidly moving toward a network-based architecture. A handful of these network-based flight test systems are already operating, and the trend is catching on all over the flight test community. As vendors churn out whole new product lines for networking capabilities, system engineers are left asking, "What do I do with all of this non-networked, legacy equipment?" Before overhauling an entire test system, one should look for a way to incorporate the legacy system components into the modern network architecture. Finding a way to integrate the two generations of systems can provide substantial savings in both cost and application development time. This paper discusses the advantages of integrating legacy equipment into a network-based architecture, with examples from systems where this approach was utilized.
4

Al-Azez, Zaineb Talib Saeed. "Optimised green IoT network architectures." Thesis, University of Leeds, 2018. http://etheses.whiterose.ac.uk/22224/.

Abstract:
The work in this thesis proposes a number of energy efficient architectures for IoT networks. These proposed architectures are edge computing, Passive Optical Network (PON) and Peer to Peer (P2P) based architectures. A framework was introduced for virtualising edge computing assisted IoT. Two mixed integer linear programming (MILP) models and heuristics were developed to minimise the power consumption and to maximise the number of served IoT processing tasks. Further consideration was also given to the limited IoT processing capabilities and hence the potential of processing task blockage. Two placement scenarios were studied, revealing that the optimal distribution of cloudlets achieved 38% power saving compared to placing the cloudlet in the gateway, while gateway placement can save up to 47% of the power compared to the optimal placement but blocked 50% of the total IoT object requests. The thesis also investigated the impact of PON deployment on the energy efficiency of IoT networks. A MILP model and a heuristic were developed to optimally minimise the power consumption of the proposed network. The results of this investigation showed that packing most of the VMs in the OLT at low traffic reduction percentages, and placing them in relays at high traffic reduction rates, saved power. Also, the results revealed that utilising energy efficient PONs and serving heterogeneous VMs can save up to 19% of the total power. Finally, the thesis investigated a peer-to-peer (P2P) based architecture for IoT networks with fairness and incentives. It considered three VM placement scenarios and developed MILP models and heuristics to maximise the number of processing tasks served by VMs and to minimise the total power consumption of the proposed network. The results showed that the highest service rate was achieved by the hybrid scenario, which consumes the highest amount of power compared to the other scenarios.
5

Zheng, Huanyang. "SOCIAL NETWORK ARCHITECTURES AND APPLICATIONS." Diss., Temple University Libraries, 2017. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/470889.

Abstract:
Computer and Information Science
Ph.D.
Rather than being randomly wired together, the components of complex network systems have recently been reported to exhibit a scale-free architecture, in which the node degree distribution follows a power law. Since social networks are scale-free, it is natural to utilize their structural properties in social network applications. As a result, this dissertation explores social network architectures and, in turn, leverages these architectures to facilitate influence and information propagation applications. Social network architectures are analyzed in two different aspects. The first aspect focuses on the node degree snowballing effects (i.e., degree growth effects) in social networks, based on an age-sensitive preferential attachment model. The impact of the initial links is explored in terms of accelerating the node degree snowballing effects. The second aspect focuses on Nested Scale-Free Architectures (NSFAs) for social networks. The scale-free architecture is a classic concept, meaning that the node degree distribution follows the power-law distribution. 'Nested' indicates that the scale-free architecture is preserved when low-degree nodes and their associated connections are iteratively removed. An NSFA has a bounded hierarchy. Based on the social network structure, this dissertation explores two influence propagation applications for the Social Influence Maximization Problem (SIMP). The first application is a friend recommendation strategy from the perspective of social influence maximization. For the system provider, the objective is to recommend a fixed number of new friends to a given user, such that the given user can maximize his/her social influence through making new friends. This problem is proved to be NP-hard by reduction from the SIMP. A greedy friend recommendation algorithm with an approximation ratio of $1-e^{-1}$ is proposed.
The second application studies the SIMP with crowd influence, which is NP-hard, monotone, non-submodular, and inapproximable in general graphs. However, since user connections in Online Social Networks (OSNs) are not random, approximations can be obtained by leveraging the structural properties of OSNs. The modularity, denoted by $\Delta$, is proposed to measure to what degree this problem violates submodularity. Two approximation algorithms are proposed with ratios of $\frac{1}{\Delta+2}$ and $1-e^{-1/(\Delta+1)}$, respectively. Besides the influence propagation applications, this dissertation further explores three information propagation applications. The first application is a social network quarantine strategy, which can eliminate epidemic outbreaks with minimal isolation costs. This problem is NP-hard. An approximation algorithm with a ratio of 2 is proposed by utilizing the problem properties of feasibility and minimality. The second application is a rating prediction scheme, called DynFluid, based on fluid dynamics. DynFluid analogizes the rating reference among users in OSNs to fluid flow among containers. The third application is an information cascade prediction framework: given the current social cascade and the social topology, the number of propagated users at a future time slot is predicted. To reduce prediction time complexity, the spatiotemporal cascade information (a larger size of data) is decomposed into user characteristics (a smaller size of data) for subsequent predictions. All three applications are based on the social network structure.
Temple University--Theses
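The $1-e^{-1}$ guarantee cited in this abstract is the classic bound for greedy selection under a monotone submodular objective. A minimal sketch of that greedy rule follows; the graph and coverage-style influence function are illustrative toys, not the dissertation's actual recommendation model:

```python
# Greedy selection for a monotone submodular set function: repeatedly add
# the candidate with the largest marginal gain. For such objectives this
# rule achieves the (1 - 1/e) approximation ratio cited above.

def greedy_max(candidates, k, f):
    """Greedily pick k candidates to maximize the set function f."""
    chosen = set()
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: f(chosen | {c}) - f(chosen))
        chosen.add(best)
    return chosen

# Toy influence model: each candidate friend "reaches" a set of users, and
# influence is the size of the union of reached users (a coverage function).
reach = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}, "d": {1, 7}}

def spread(chosen):
    return len(set().union(*(reach[c] for c in chosen))) if chosen else 0

print(greedy_max(reach, 2, spread))  # picks 'c' first (gain 4), then 'a' (gain 3)
```

Coverage functions like `spread` are monotone and submodular, which is exactly what makes the marginal-gain rule safe to use here.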
6

Armstrong, James R. "Boolean weightless neural network architectures." Thesis, University of Central Lancashire, 2011. http://clok.uclan.ac.uk/5325/.

Abstract:
A collection of hardware weightless Boolean elements has been developed. These form fundamental building blocks with particular pertinence to the field of weightless neural networks, and they have also been shown to have merit in their own right for the design of robust architectures. A major element of this is a collection of weightless Boolean sum and threshold techniques, including an implementation of L-max, also known as N-point thresholding. These elements have been applied to design a Boolean weightless hardware version of Austin's ADAM neural network. ADAM is further enhanced by the addition of a new learning paradigm, that of non-Hebbian learning. This new method concentrates on the association of 'dis-similarity', in the belief that this is as important as areas of similarity. Image processing using hardware weightless neural networks is investigated through simulation of digital filters using a Type 1 Neuroram neuro-filter. Simulations have been performed using MATLAB to compare the results to a conventional median filter. Type 1 Neuroram has been tested on an extended collection of noise types. The importance of the threshold has been examined, and the effect of cascading both types of filters was studied. This research has led to the development of several novel weightless hardware elements that can be applied to image processing. These patented elements include a weightless thermocoder and two weightless median filters. These novel robust high-speed weightless filters have been compared with conventional median filters. The robustness of these architectures has been investigated when subjected to accelerated ground-based neutron radiation simulating the atmospheric radiation spectrum experienced at commercial avionic altitudes.
A trial investigating the resilience of weightless hardware Boolean elements, in comparison to standard weighted arithmetic logic, is detailed, examining the effects on the operation of the function when implemented in hardware experiencing single event effects induced by high-energy neutron bombardment. Further weightless Boolean elements are detailed which contribute to the development of a weightless implementation of the traditionally weighted self-ordered map.
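For reference, the conventional median filter used as the comparison baseline above can be sketched in a few lines; this 1-D, pure-Python, edge-padded version with a window of 3 is illustrative only, not the thesis's MATLAB implementation:

```python
# Conventional sliding-window median filter: replace each sample with the
# median of its neighbourhood. Effective against impulse ("salt and pepper")
# noise because a single outlier never becomes the window median.
from statistics import median

def median_filter(signal, window=3):
    half = window // 2
    padded = signal[:1] * half + list(signal) + signal[-1:] * half  # edge-pad
    return [median(padded[i:i + window]) for i in range(len(signal))]

noisy = [1, 1, 9, 1, 1, 1, 0, 1]     # impulse noise at indices 2 and 6
print(median_filter(noisy))           # -> [1, 1, 1, 1, 1, 1, 1, 1]
```

The impulses at indices 2 and 6 are removed entirely, which is the behaviour any neuro-filter would be benchmarked against.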
7

Tham, Kevin Wen Kaye. "Developing security services for network architectures." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16546/.

Abstract:
In the last 15 years, the adoption of enterprise-level data networks has increased dramatically, mainly for reasons such as better use of IT resources and better coordination between departments and business units. These demands have fuelled the push for better and faster connectivity to and from these networks, and even within them. We have moved from slow 10Mbps to 1Gbps connectivity for end-point connections, and from copper-based ISDN to fibre-linked connections for enterprise connections to the Internet. We now even include wireless network technologies in the mix, because of the greater convenience they offer. Such rapid progress is accompanied by ramifications, especially if not all aspects of networking technologies improve linearly. Since the 1960s and 1970s, the only form of security had been along the lines of authentication and authorisation, because of the widely used mainframes of that era. When the Internet came into widespread use in the 1980s, network security was born, and it was not until the late 1980s that the first Internet worm caused damage to information and systems on the Internet. Fast forward to today, and we see that although we have come a long way in terms of connectivity (connect to anywhere, at any time, from anywhere else), network security and network security methods have not improved very much. Microsoft Windows XP recently switched from Microsoft's own authentication method to the use of Kerberos, which was last revised 10 years ago. This thesis describes the many problems we face in the world of network security today, and proposes several new methods for future implementation and, to a certain extent, modifications to current standards to encompass future developments.
The discussion includes a proposed overview of what a secure network architecture should include, leading into several aspects that can be improved. All problems identified in this thesis have proposed solutions, except for one: a critical flaw in the standard IEEE 802.11 wireless technology discovered during the course of this research. This flaw is explained and covered in great detail, together with an explanation of why it is not fixable.
8

Milosavljevic, Milos. "Integrated wireless-PON access network architectures." Thesis, University of Hertfordshire, 2011. http://hdl.handle.net/2299/6371.

Abstract:
Next generation access networks should be able to support the ongoing evolution in services and applications. Advancements on that front are expected to include the transformation of high definition television (HDTV) and 2D services into ultra-HDTV and individual interactive 3D services. Currently deployed passive optical networks (PONs) have been shown to deliver high quality video and internet services, while in parallel broadband wireless standards are increasing their spectral efficiency and subscriber utilisation. An integrated infrastructure exploiting the benefits of both, combining wireless mobility and ease of scalability with the escalating bandwidth of next generation PONs, is expected to offer service providers business models justifying the evolved services. In this direction, this thesis deals with the means of transparent routing of standard worldwide interoperability for microwave access (WiMAX) signal formats over legacy PONs, to and from wireless end users, based on radio over fibre (RoF). The concept of frequency division multiplexing (FDM) with RoF is used for efficient addressing of individual base stations, bandwidth on-demand provisioning across a cell/sector, simple remote radio heads and no interference with the baseband PON spectrum. Network performance evaluation, initially through simulation, has displayed, in the presence of optical non-linearities and multi-path wireless channels, standard error vector magnitudes (EVMs) at remote radio receivers and bit error rates (BERs) of 1E-4 for typical WiMAX rates bidirectionally. To provide enhanced scalability and dynamicity, a newly applied scheme based on an extended wavelength band overlaid on the splitter-based, wireless-enabled PON has been progressively investigated. This allows for the routing of multiple FDM windows to different wavelengths, resulting in significantly reduced optical and electrical component costs and no dispersion compensation over the fibre.
This has been implemented through the application of a dense arrayed waveguide grating (AWG) and a tuneable filter in the optical line terminal (OLT) and optical network unit/base stations (ONU/BSs) respectively. Although with the use of a splitter the distribution point of the optical network remains largely the same, vertical cavity surface emitting laser (VCSEL) arrays provide colourless upstream transmission. In addition, an overlapping cell concept is developed and adopted for increased wireless spectral efficiency and resilience. Finally, an experimental test-bed using commercially available WiMAX transceivers was produced, which reproduced the simulation outcomes and therefore confirmed the overall network performance.
9

Crowley, Patrick. "Design and analysis of architectures for programmable network processing systems /." Thesis, Connect to this title online; UW restricted, 2003. http://hdl.handle.net/1773/6991.

10

Poluri, Pavan Kamal Sudheendra. "Fault Tolerant Network-on-Chip Router Architectures for Multi-Core Architectures." Diss., The University of Arizona, 2014. http://hdl.handle.net/10150/338752.

Abstract:
As the feature size scales down to deep nanometer regimes, it has enabled designers to fabricate chips with billions of transistors. The availability of such abundant computational resources on a single chip has made it possible to design chips with multiple computational cores, resulting in the inception of Chip Multiprocessors (CMPs). The widespread use of CMPs has resulted in a paradigm shift from computation-centric architectures to communication-centric architectures. With the continuous increase in the number of cores that can be fabricated on a single chip, communication between the cores has become a crucial factor in overall performance. The Network-on-Chip (NoC) paradigm has evolved into a standard on-chip interconnection network that can efficiently handle the strict communication requirements between the cores on a chip. The components of an NoC include routers, which facilitate routing of data between multiple cores, and links, which provide raw bandwidth for data traversal. While diminishing feature size has made it possible to integrate billions of transistors on a chip, the advantage of multiple cores has been marred by the waning reliability of transistors. Components of an NoC are not immune to the increasing number of hard faults and soft errors arising from extreme miniaturization of transistor sizes. Faults in an NoC result in significant ramifications such as isolation of healthy cores, deadlock, data corruption, packet loss and increased packet latency, all of which have a severe impact on the performance of a chip. This has stimulated the need to design resilient and fault tolerant NoCs. This thesis handles the issue of fault tolerance in NoC routers. Within the NoC router, the focus is specifically on the router pipeline that is responsible for the smooth flow of packets. In this thesis we propose two different fault tolerant architectures that can continue to operate in the presence of faults.
In addition to these two architectures, we also propose a new reliability metric for evaluating soft error tolerant techniques targeted towards the control logic of the NoC router pipeline. First, we present Shield, a fault tolerant NoC router architecture that is capable of handling both hard faults and soft errors in its pipeline. Shield uses techniques such as spatial redundancy, exploitation of idle resources and bypassing of faulty resources to achieve hard fault tolerance. The use of these techniques reveals that Shield is six times more reliable than the baseline unprotected router. To handle soft errors, Shield uses a selective hardening technique, hardening specific gates of the router pipeline to increase its soft error tolerance. To quantify soft error tolerance improvement, we propose a new metric called Soft Error Improvement Factor (SEIF) and use it to show that Shield's soft error tolerance is three times better than that of the baseline unprotected router. Then, we present the Soft Error Tolerant NoC Router (STNR), a low overhead fault tolerating NoC router architecture that can tolerate soft errors in the control logic of its pipeline. STNR achieves soft error tolerance based on the idea of dual execution, comparison and rollback. It exploits idle cycles in the router pipeline to perform the redundant computation and comparison necessary for soft error detection. Upon the detection of a soft error, the pipeline is rolled back to the stage that was affected by the soft error. Salient features of STNR include a high level of soft error detection, fault containment and minimum impact on latency. Simulations show that STNR was able to detect all injected single soft errors in the router pipeline. To perform a quantitative comparison between STNR and other existing similar architectures, we propose a new reliability metric called Metric for Soft error Tolerance (MST) in this thesis.
MST is unique in the aspect that it encompasses four crucial factors namely, soft error tolerance, area overhead, power overhead and pipeline latency overhead into a single metric. Analysis using MST shows that STNR provides better reliability while incurring low overhead compared to existing architectures.
11

Lee, Andrew Sui Tin. "Design of future optical ring network architectures." Thesis, University of Strathclyde, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.415308.

12

Paolino, Carmine. "Large-scale Network Analysis on Distributed Architectures." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amslaurea.unibo.it/1966/.

Abstract:
This dissertation examines the challenges and limits that graph analysis algorithms encounter on distributed architectures made up of personal computers. In particular, it analyses the behaviour of the PageRank algorithm as implemented in a popular C++ library for distributed graph analysis, the Parallel Boost Graph Library (Parallel BGL). The results presented here show that the Bulk Synchronous Parallel programming model is unsuitable for an efficient implementation of PageRank on clusters of personal computers. The implementation analysed in fact exhibited negative scalability: the algorithm's execution time increases linearly with the number of processors. These results were obtained by running the Parallel BGL PageRank algorithm on a cluster of 43 dual-core PCs with 2GB of RAM each, using several graphs chosen to make it easier to identify the variables that influence scalability. Graphs representing different models gave different results, showing a relationship between the clustering coefficient and the slope of the line representing time as a function of the number of processors. For example, Erdős–Rényi graphs, which have a low clustering coefficient, represented the worst case in the PageRank tests, while Small-World graphs, which have a high clustering coefficient, represented the best case. Graph size also showed a particularly interesting influence on execution time: the relationship between the number of nodes and the number of edges was shown to determine the total time.
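For context, the computation whose distributed scaling is measured above is the PageRank power iteration. A minimal single-machine sketch follows (the toy graph and damping factor are illustrative; the Parallel BGL distributes exactly this per-node rank update across processors):

```python
# Minimal PageRank power iteration on a directed graph given as an
# adjacency dict. Each round, every node spreads damping * rank along its
# out-edges; (1 - damping) is redistributed uniformly.

def pagerank(graph, damping=0.85, iters=50):
    n = len(graph)
    rank = {v: 1.0 / n for v in graph}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in graph}
        for v, outs in graph.items():
            if not outs:                      # dangling node: spread uniformly
                for u in graph:
                    new[u] += damping * rank[v] / n
            else:
                for u in outs:
                    new[u] += damping * rank[v] / len(outs)
        rank = new
    return rank

g = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
r = pagerank(g)
print(round(sum(r.values()), 6))   # ranks always sum to 1.0
```

In the BSP model studied above, each iteration of the outer loop becomes one superstep, with rank contributions exchanged between machines at the synchronisation barrier; that per-superstep communication is what drove the negative scalability reported here.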
13

Zhang, Yu. "Exploring neural network architectures for acoustic modeling." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113981.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 121-132).
Deep neural network (DNN)-based acoustic models (AMs) have significantly improved automatic speech recognition (ASR) on many tasks. However, ASR performance still suffers from speaker and environment variability, especially under low-resource, distant microphone, noisy, and reverberant conditions. The goal of this thesis is to explore novel neural architectures that can effectively improve ASR performance. In the first part of the thesis, we present a well-engineered, efficient open-source framework to enable the creation of arbitrary neural networks for speech recognition. We first design essential components to simplify the creation of a neural network with recurrent loops. Next, we propose several algorithms to speed up neural network training based on this framework. We demonstrate the flexibility and scalability of the toolkit across different benchmarks. In the second part of the thesis, we propose several new neural models to reduce ASR word error rates (WERs) using the toolkit we created. First, we formulate a new neural architecture loosely inspired by humans to process low-resource languages. Second, we demonstrate a way to enable very deep neural network models by adding more non-linearities and expressive power while keeping the model optimizable and generalizable. Experimental results demonstrate that our approach outperforms several ASR baselines and model variants, yielding a 10% relative WER gain. Third, we incorporate these techniques into an end-to-end recognition model. We experiment with the Wall Street Journal ASR task and achieve 10.5% WER without any dictionary or language model, an 8.5% absolute improvement over the best published result.
14

Krishnamurthy, Likhita. "Comparative Assessment of Network-Centric Software Architectures." Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/32377.

Abstract:
The purpose of this thesis is to characterize, compare and contrast four network-centric software architectures, namely Client-Server Architecture (CSA), Distributed Objects Architecture (DOA), Service-Oriented Architecture (SOA) and Peer-to-Peer Architecture (PPA), and seven associated frameworks consisting of .NET, Java EE, CORBA, DCOM, Web Services, Jini and JXTA, with respect to a set of derived criteria. Network-centric systems are gaining in popularity as they have the potential to solve more complex problems than we have been able to in the past. However, with the rise of SOA, Web Services, a set of standards widely used for implementing service-oriented solutions, is being touted as the 'silver bullet' for all problems afflicting the software engineering domain, with the danger of making other architectures seem obsolete. Thus, there is an urgent need to study the various architectures and frameworks in comparison to each other and understand their relative merits and demerits for building network-centric systems. The architectures studied here were selected on the basis of their fundamentality and generality. The frameworks were chosen on the basis of their popularity and representativeness for building solutions in a particular architecture. The criteria used for comparative assessment are derived from a combination of two approaches: a close examination of the unique characteristics and requirements of network-centric systems, and then an examination of the constraints and mechanisms present in the architectures and frameworks under consideration that may contribute towards realizing the requirements of network-centric systems. Not all of the criteria are equally relevant for the architectures and frameworks. Some, when relevant, are relevant in a different sense from one architecture (or framework) to another. One of the conclusions that can be drawn from this study is that the different architectures are not completely different from each other.
In fact, CSA, DOA and SOA are a natural evolution in that order and share several characteristics. At the same time, significant differences do exist, so it is clearly possible to differentiate one from the other. All three architectures can coexist in a single system or system of systems. However, the advantages of each architecture become apparent only when it is used in its proper scope. At the same time, a sharp difference can be perceived between these three architectures and the peer-to-peer architecture. This is because PPA aims to solve a totally different class of problems than the other three architectures and hence has certain unique characteristics not observed in the others. Further, all of the frameworks have certain unique architectural features and mechanisms not found in the others that contribute towards achieving network-centric quality characteristics. The two broad frameworks, .NET and Java EE, offer almost equivalent capabilities and features; what can be achieved in one can be achieved in the other. This thesis deals with the study of all four architectures and their related frameworks. The criteria used, while fairly comprehensive, are not exhaustive. Variants of the fundamental architectures are not considered. However, system/software architects seeking an understanding of the tradeoffs involved in using the various architectures and frameworks, and their subtle nuances, should benefit considerably from this work.
Master of Science
15

Duan, Xiao. "DSP-enabled reconfigurable optical network devices and architectures for cloud access networks." Thesis, Bangor University, 2018. https://research.bangor.ac.uk/portal/en/theses/dspenabled-reconfigurable-optical-network-devices-and-architectures-for-cloud-access-networks(68eaa57e-f0af-4c67-b1cf-c32cfd2ee00f).html.

Abstract:
To meet the ever-increasing bandwidth requirements, the rapid growth in highly dynamic traffic patterns, and the increasing complexity in network operation, whilst providing high power consumption efficiency and cost-effectiveness, the approach of combining traditional optical access networks, metropolitan area networks and 4-th generation (4G)/5-th generation (5G) mobile front-haul/back-haul networks into unified cloud access networks (CANs) is one of the most preferred “future-proof” technical strategies. The aim of this dissertation research is to extensively explore, both numerically and experimentally, the technical feasibility of utilising digital signal processing (DSP) to achieve key fundamental elements of CANs from device level to network architecture level including: i) software reconfigurable optical transceivers, ii) DSP-enabled reconfigurable optical add/drop multiplexers (ROADMs), iii) network operation characteristics-transparent digital filter multiple access (DFMA) techniques, and iv) DFMA-based passive optical network (PON) with DSP-enabled software reconfigurability. As reconfigurable optical transceivers constitute fundamental building blocks of the CAN’s physical layer, digital orthogonal filtering-based novel software reconfigurable transceivers are proposed and experimentally and numerically explored, for the first time. By making use of Hilbert-pair-based 32-tap digital orthogonal filters implemented in field programmable gate arrays (FPGAs), a 2GS/s@8-bit digital-to-analogue converter (DAC)/analogue-to-digital converter (ADC), and an electro-absorption modulated laser (EML) intensity modulator (IM), world-first reconfigurable real-time transceivers are successfully experimentally demonstrated in a 25km IMDD SSMF system. The transceiver dynamically multiplexes two orthogonal frequency division multiplexed (OFDM) channels with a total capacity of 3.44Gb/s. 
Experimental results also indicate that the transceiver performance is fully transparent to various subcarrier modulation formats of up to 64-QAM, and that the maximum achievable transceiver performance is mainly limited by the cross-talk effect between the two spectrally-overlapped orthogonal channels, which can, however, be minimised by adaptive modulation of the OFDM signals. For further transceiver optimisation, the impacts of major transceiver design parameters, including digital filter tap number and subcarrier modulation format, on the transmission performance are also numerically explored. Reconfigurable optical add/drop multiplexers (ROADMs) are also vital networking devices for application in CANs, as they play a critical role in offering fast and flexible network reconfiguration. A new optical-electrical-optical (O-E-O) conversion-free, software-switched flexible ROADM is extensively explored, which is capable of providing dynamic add/drop operations at wavelength, sub-wavelength and orthogonal sub-band levels in software defined networks incorporating the reconfigurable transceivers. Firstly, the basic add and drop operations of the proposed ROADMs are theoretically explored and the ROADM designs are optimised. To validate the practical feasibility of the ROADMs, they are experimentally demonstrated, for the first time. Experimental results show that the add and drop operation performances are independent of the sub-band signal spectral location, and that add/drop power penalties are < 2dB. In addition, the ROADMs are also robust against a differential optical power dynamic range of > 2dB and a drop RF signal power range of 7.1dB.
In addition to exploring key optical networking devices for CANs, the first ever DFMA PON experimental demonstrations are also conducted, by using two real-time, reconfigurable, OOFDM-modulated optical network units (ONUs) operating on spectrally overlapped multi-Gb/s orthogonal channels, and an offline optical line terminal (OLT). For multipoint-to-point upstream signal transmission over 26km SSMF in an IMDD DFMA PON, experiments show that each ONU achieves a similar upstream BER performance, excellent robustness to inter-ONU sample timing offset (STO) and a large ONU launch power variation range. Given the importance of IMDD DFMA-PON channel frequency response roll-off, both theoretical and experimental explorations are undertaken to investigate the impact of channel frequency response roll-off on the upstream transmission of the DFMA PON system. Such work provides valuable insights into channel roll-off-induced performance dependencies to facilitate cost-effective practical network/transceiver/component designs.
APA, Harvard, Vancouver, ISO, and other styles
16

Vangal, Sriram. "Performance and Energy Efficient Network-on-Chip Architectures." Doctoral thesis, Linköpings universitet, Institutionen för systemteknik, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11439.

Full text
Abstract:
The scaling of MOS transistors into the nanometer regime opens the possibility of creating large Network-on-Chip (NoC) architectures containing hundreds of integrated processing elements with on-chip communication. NoC architectures, with structured on-chip networks, are emerging as a scalable and modular solution to global communications within large systems-on-chip. NoCs mitigate the emerging wire-delay problem and address the need for substantial interconnect bandwidth by replacing today’s shared buses with packet-switched router networks. With on-chip communication consuming a significant portion of the chip power and area budgets, there is a compelling need for compact, low-power routers. While applications dictate the choice of the compute core, the advent of multimedia applications, such as three-dimensional (3D) graphics and signal processing, places stronger demands on self-contained, low-latency floating-point processors with increased throughput. This work demonstrates that a computational fabric built using optimized building blocks can provide high levels of performance in an energy-efficient manner. The thesis details an integrated 80-tile NoC architecture implemented in a 65-nm process technology. The prototype is designed to deliver over 1.0 TFLOPS of performance while dissipating less than 100 W. This thesis first presents a six-port four-lane 57 GB/s non-blocking router core based on wormhole switching. The router features double-pumped crossbar channels and destination-aware channel drivers that dynamically configure based on the current packet destination. This enables a 45% reduction in crossbar channel area, a 23% reduction in overall router area, up to a 3.8X reduction in peak channel power, and a 7.2% improvement in average channel power. In a 150-nm six-metal CMOS process, the 12.2 mm2 router contains 1.9 million transistors and operates at 1 GHz at a 1.2 V supply.
We next describe a new pipelined single-precision floating-point multiply accumulator core (FPMAC) featuring a single-cycle accumulation loop using base 32 and internal carry-save arithmetic with delayed addition techniques. A combination of algorithmic, logic and circuit techniques enables multiply-accumulate operations at speeds exceeding 3 GHz, with single-cycle throughput. This approach reduces the latency of dependent FPMAC instructions and enables a sustained multiply-add result (2 FLOPS) every cycle. The optimizations allow removal of the costly normalization step from the critical accumulation loop; the unit is conditionally powered down using dynamic sleep transistors on long accumulate operations, saving active and leakage power. In a 90-nm seven-metal dual-VT CMOS process, the 2 mm2 custom design contains 230K transistors. Silicon achieves 6.2 GFLOPS of performance while dissipating 1.2 W at 3.1 GHz and a 1.3 V supply. We finally present the industry's first single-chip programmable teraFLOPS processor. The NoC architecture contains 80 tiles arranged as an 8×10 2D array of floating-point cores and packet-switched routers, both designed to operate at 4 GHz. Each tile has two pipelined single-precision FPMAC units which feature a single-cycle accumulation loop for high throughput. The five-port router combines 100 GB/s of raw bandwidth with a low fall-through latency under 1 ns. The on-chip 2D mesh network provides a bisection bandwidth of 2 Tera-bits/s. The 15-FO4 design employs mesochronous clocking, fine-grained clock gating, dynamic sleep transistors, and body-bias techniques. In a 65-nm eight-metal CMOS process, the 275 mm2 custom design contains 100M transistors. The fully functional first silicon achieves over 1.0 TFLOPS of performance on a range of benchmarks while dissipating 97 W at 4.27 GHz and a 1.07 V supply. It is clear that the realization of successful NoC designs requires well-balanced decisions at all levels: architecture, logic, circuit and physical design.
Our results demonstrate that the NoC architecture successfully delivers on its promise of greater integration, high performance, good scalability and high energy efficiency.
APA, Harvard, Vancouver, ISO, and other styles
17

Suryanarayanan, Deepak. "A Methodology for Study of Network Processing Architectures." NCSU, 2001. http://www.lib.ncsu.edu/theses/available/etd-20010720-154055.

Full text
Abstract:

A new class of processors has recently emerged that encompasses programmable ASICs and microprocessors that can implement adaptive network services. This class of devices is collectively known as Network Processors (NPs). NPs combine the flexibility of software solutions with the high performance of custom hardware. With the development of such sophisticated hardware, there is a need for a holistic methodology that can facilitate the study of Network Processors and their performance with different networking applications and traffic conditions. This thesis describes the development of the Component Network Simulator (ComNetSim), which is based on such a technique. The simulator demonstrates the implementation of Diffserv applications on a Network Processor architecture and the performance of the system under different network traffic conditions.

APA, Harvard, Vancouver, ISO, and other styles
18

Overkamp, Ard Andreas Franciscus. "Discrete event control motivated by layered network architectures." [S.l. : [Groningen] : s.n.] ; [University Library Groningen] [Host], 1996. http://irs.ub.rug.nl/ppn/152809341.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Gomez, E. Ribes. "Wavelet neural network algorithms and architectures : nonlinear modelling." Thesis, Queen's University Belfast, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.273393.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Dueser, Michael. "Investigation of advanced optical packet-routed network architectures." Thesis, University College London (University of London), 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.401295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Ahmed, Aftab. "Enhancement in network architectures for future wireless systems." Thesis, University of York, 2018. http://etheses.whiterose.ac.uk/22080/.

Full text
Abstract:
This thesis investigates innovative wireless deployment strategies for dense ultra-small cell networks. In particular, it focuses on improving the resource utilisation, reliability and energy efficiency of future wireless networks by exploiting the flexibility existing in the network architecture. The wireless backhaul configurations and topology management schemes proposed in this thesis consider a dense urban area scenario with static outdoor users. In the first part of this thesis, a novel mm-wave dual-hop backhaul network architecture is investigated for future cellular networks to achieve better resource utilisation and user experience at the expense of the path diversity available in a dense deployment of base stations. The system-level performance of the backhaul section using the mm-wave band is analysed and compared, and the performance of the network model is validated using a Markov model. The second part of the thesis presents a topology management strategy for the same dual-hop backhaul network architecture. The same path diversity is utilised by the topology management technique to achieve high energy savings and improved performance. The results show that the proposed architecture facilitates the topology management process to turn off some portion of the network in order to minimise power consumption while delivering Quality-of-Service guarantees. Finally, the methodology to admit new users into the system, to best control the capacity resource, is investigated for radio resource management in a multi-hop, multi-tier heterogeneous network. A novel analytical Markov model based on a two-dimensional state-transition rate diagram is developed to describe the system behaviour of a coexistence scenario containing two different sets of users, which have full and limited access to the network resources. Different levels of restriction on network access by specific groups of users are compared and conclusions are drawn.
APA, Harvard, Vancouver, ISO, and other styles
22

Tay, Wee Peng. "Decentralized detection in resource-limited sensor network architectures." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/42910.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (leaves 201-207).
We consider the problem of decentralized binary detection in a network consisting of a large number of nodes arranged as a tree of bounded height. We show that the error probability decays exponentially fast with the number of nodes under both a Neyman-Pearson criterion and a Bayesian criterion, and provide bounds for the optimal error exponent. Furthermore, we show that under the Neyman-Pearson criterion, the optimal error exponent is often the same as that corresponding to a parallel configuration, implying that a large network can be designed to operate efficiently without significantly affecting the detection performance. We provide sufficient, as well as necessary, conditions for this to happen. For those networks satisfying the sufficient conditions, we propose a simple strategy that nearly achieves the optimal error exponent, and in which all non-leaf nodes need only send 1-bit messages. We also investigate the impact of node failures and unreliable communications on the detection performance. Node failures are modeled by a Galton-Watson branching process, and binary symmetric channels are assumed for the case of unreliable communications. We characterize the asymptotically optimal detection performance, develop simple strategies that nearly achieve the optimal performance, and compare the performance of the two types of networks. Our results suggest that in a large scale sensor network, it is more important to ensure that nodes can communicate reliably with each other (e.g., by boosting the transmission power) than to ensure that nodes are robust to failures. In the case of networks with unbounded height, we establish the validity of a long-standing conjecture regarding the sub-exponential decay of Bayesian detection error probabilities in a tandem network.
We also provide bounds for the error probability, and show that under the additional assumption of bounded Kullback-Leibler divergences, the error probability is Ω(e^(-cn^d)) for all d > 1/2, with c being a positive constant. Furthermore, the bound Ω(e^(-c(log n)^d)), for all d > 1, holds under an additional mild condition on the distributions. This latter bound is shown to be tight. Moreover, for the Neyman-Pearson case, we establish that if the sensors act myopically, the Type II error probabilities also decay at a sub-exponential rate.
(cont.) Finally, we consider the problem of decentralized detection when sensors have access to side-information that affects the statistics of their measurements, and the network has an overall cost constraint. Nodes can decide whether or not to make a measurement and transmit a message to the fusion center ("censoring"), and also have a choice of the transmission function. We study the tradeoff of detection performance with the cost constraint, and also the impact of sensor cooperation and global sharing of side-information. In particular, we show that if the Type I error probability is constrained to be small, then sensor cooperation is not necessary to achieve the optimal Type II error exponent.
by Wee Peng Tay.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
23

Zoumpoulis, Spyridon Ilias. "Decentralized detection in sensor network architectures with feedback." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/52775.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 73-74).
We investigate a decentralized detection problem in which a set of sensors transmit a summary of their observations to a fusion center, which then decides which one of two hypotheses is true. The focus is on determining the value of feedback in improving performance in the regime of asymptotically many sensors. We formulate the decentralized detection problem for different network configurations of interest under both the Neyman-Pearson and the Bayesian criteria. In a configuration with feedback, the fusion center would make a preliminary decision which it would pass back to the local sensors; a related configuration, the daisy chain, is introduced: the first fusion center passes the information from a first set of sensors on to a second set of sensors and a second fusion center. Under the Neyman-Pearson criterion, we provide both an empirical study and theoretical results. The empirical study assumes scalar linear Gaussian binary sensors and analyzes asymptotic performance as the signal-to-noise ratio of the measurements grows, showing that the value of feeding the preliminary decision back to decision makers is asymptotically negligible. This motivates two theoretical results: first, in the asymptotic regime (as the number of sensors tends to infinity), the performance of the "daisy chain" matches the performance of a parallel configuration with twice as many sensors as the classical scheme; second, it is optimal (in terms of the exponent of the error probability) to constrain all decision rules at the first and second stage of the "daisy chain" to be equal.
(cont.) Under the Bayesian criterion, three analytical results are shown. First, it is asymptotically optimal to have all sensors of a parallel configuration use the same decision rule under exponentially skewed priors. Second, again in the asymptotic regime, the decision rules at the second stage of the "daisy chain" can be equal without loss of optimality. Finally, the same result is proven for the first stage.
by Spyridon Ilias Zoumpoulis.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
24

Belinkov, Yonatan. "Neural network architectures for Prepositional Phrase attachment disambiguation." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91147.

Full text
Abstract:
Thesis: S.M. in Computer Science and Engineering, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 41-44).
This thesis addresses the problem of Prepositional Phrase (PP) attachment disambiguation, a key challenge in syntactic parsing. In natural language sentences, a PP may often be attached to several possible candidates. While humans can usually identify the correct candidate successfully, syntactic parsers are known to have high error rates on this kind of construction. This work explores the use of compositional models of meaning in choosing the correct attachment location. The compositional model is defined using a recursive neural network. Word vector representations are obtained from large amounts of raw text and fed into the neural network. The vectors are first forward-propagated up the network in order to create a composite representation, which is used to score all possible candidates. In training, errors are back-propagated down the network such that the composition matrix is updated from the supervised data. Several possible neural architectures are designed and experimentally tested on both English and Arabic data sets. As a comparative system, we offer a learning-to-rank algorithm based on an SVM classifier which has access to a wide range of features. The performance of this system is compared to the compositional models.
by Yonatan Belinkov.
S.M. in Computer Science and Engineering
APA, Harvard, Vancouver, ISO, and other styles
25

Djarallah, Nabil Bachir. "Network architectures for inter-carrier QoS-capable services." Rennes 1, 2011. http://www.theses.fr/2011REN1S099.

Full text
Abstract:
The next challenge for carriers dwells in proposing value-added services to customers of remote carriers. Such services cross several networks and require Quality of Service (QoS) guarantees. However, inter-carrier routing protocols still have some limitations in terms of service assurance. The complexity of setting up such inter-carrier value-added services is due to technical reasons (e.g., heterogeneity of networks, confidentiality of network topologies, scalability, etc.) as well as economic and political ones (e.g., revenue sharing, inter-carrier cooperation, etc.). Therefore, to alleviate these fears, we suggest the creation of alliances wherein carriers would agree to cooperate. This does not overcome the challenge we have mentioned, but it relaxes the constraints of confidentiality, scalability, trust, etc. To accompany this approach and enable the establishment of value-added services beyond the boundaries of a single operator, a reflection on next-generation network architectures and the protocols and algorithms involved is essential. In this thesis we present different architectures capable of supporting the deployment of such inter-carrier services, as well as protocol and algorithmic solutions for inter-carrier service negotiation and for checking the availability of network resources across multiple carriers. We show that the performance of these protocols and algorithms is competitive with respect to other works. Other issues around inter-carrier path computation, such as inter-carrier route discovery and inter-carrier loop avoidance, are also discussed in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
26

McLaughlin, Kieran Jude. "Advanced search and sort architectures for network processing." Thesis, Queen's University Belfast, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.485051.

Full text
Abstract:
The research presented in this thesis focuses on high-speed search and sort hardware architectures developed to accelerate key functions in networks. Specific architectures have been developed and implemented for a tag sort/retrieve circuit, used in a fair queuing scheduler, in addition to a content addressable memory (CAM) used for IP address lookup. A detailed investigation highlights the reasons for using fair queuing in preference to other algorithms in order to deliver Quality of Service (QoS). It is shown that sorting finishing tags is a crucial process in fair queuing. A fundamental analysis of tag retrieval identifies two alternative 'search' and 'sort' models of operation. It is furthermore proven that the sort model is preferable. Further research investigating a range of lookup functions demonstrates that a multi-bit trie approach offers optimum performance and conforms to the sort model. A scalable, modular architecture for the circuit is presented and results are shown for a 130nm standard cell silicon implementation. The issue of IP address lookup has been clearly identified as a process of increasing significance, due to constraints such as increasing connection speeds and table size as well as changing traffic profiles. A critical analysis of recent specialised hardware designs for address lookup reveals a number of important design factors that must be taken into account, such as routing updates, speed of search and update, and prefix distribution. The investigation highlights ternary CAMs as performing well against these criteria. Different CAM architectures have been developed and implemented based on the resources available on modern FPGAs, using registers, LUTs and embedded RAM blocks. The results show these designs can be suitable for a range of small to medium sized applications, and their use can extend beyond address lookup into other areas of networking such as packet classification and network security.
APA, Harvard, Vancouver, ISO, and other styles
27

Sadat, Mohammad Nazmus. "QoE-Aware Video Communication in Emerging Network Architectures." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin162766498933367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Sezer, Sakir. "An investigation into novel ATM switch architectures." Thesis, Queen's University Belfast, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314143.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Sarpangala, Kishan. "Semantic Segmentation Using Deep Learning Neural Architectures." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin157106185092304.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Wang, Kongxun. "Performance optimization with integrated consideration of routing, flow control, and congestion control in packet-switched networks." Diss., Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/8305.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Diana, Gary M. "Internetworking : an analysis and proposal /." Online version of thesis, 1990. http://hdl.handle.net/1850/10605.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Becker, Russell W. "A test bed for detection of botnet infections in low data rate tactical networks." Thesis, Monterey, California : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Sep/09Sep%5FBecker.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, September 2009.
Thesis Advisor(s): McEachen, John; Tummala, Murali. "September 2009." Description based on title screen as viewed on November 04, 2009. Author(s) subject terms: Botnet, Tactical Network, BotHunter, Honeynet, Honeypot, Low Data Rate, Network Security. Includes bibliographical references (p. 57-59). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
33

Nordström, Erik. "Challenged Networking : An Experimental Study of new Protocols and Architectures." Doctoral thesis, Uppsala universitet, Avdelningen för datorteknik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-9002.

Full text
Abstract:
With the growth of the Internet, the underlying protocols are increasingly challenged by new technologies and applications. The original Internet protocols were, however, not designed for wireless communication, mobility, long disconnection times, and varying bandwidths. In this thesis, we study challenged networking, and how well old and new protocols operate under such constraints. Our study is experimental. We build network testbeds and measure the performance of alternative protocols and architectures. We develop novel methodologies for repeatable experiments that combine emulations, simulations and real-world experiments. Based on our results we suggest modifications to existing protocols, and we also develop a new network architecture that matches the constraints of a challenged network, in our case, an opportunistic network. One of our most important contributions is the Ad hoc Protocol Evaluation (APE) testbed. It has been successfully used worldwide. The key to its success is that it significantly lowers the barrier to repeatable experiments involving wireless and mobile computing devices. Using APE, we present side-by-side performance comparisons of IETF MANET routing protocols. A somewhat surprising result is that some ad hoc routing protocols perform a factor of 10 worse in the testbed than predicted by a common simulation tool (ns-2). We find that this discrepancy is mainly related to the protocols' sensing abilities, e.g., how accurately they can infer their neighborhood in a real radio environment. We propose and implement improvements to these protocols based on the results. Our novel network architecture Haggle is another important contribution. It is based on content addressing and searching. Mobile devices in opportunistic networks exchange content whenever they detect each other. We suggest that the exchange should be based on interests and searches, rather than on destination names and addresses.
We argue that content binding should be done late in challenged networks, something which our search approach supports well.
APA, Harvard, Vancouver, ISO, and other styles
34

Nguyen, Thanh Vinh. "Content distribution networks over shared infrastructure a paradigm for future content network deployment /." Access electronically, 2005. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20060509.094632/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Chobineh, Amirreza. "Influence of new network architectures and usages on RF human exposure in cellular networks." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT019.

Full text
Abstract:
In coming years, there will be sharp growth in wireless data traffic due to the increasing usage of mobile phones and the implementation of IoT technology. Therefore, mobile network operators aim to increase the capacity of their networks, to provide higher data traffic with lower latency, and to support thousands of connections. One of the primary efforts toward these goals is to densify today's cellular Macrocell networks using Small cells to bring more coverage and higher network capacity. Small cell antennas emit lower power than Macrocells and are often deployed at low heights. As a consequence, they are closer to the user and can be deployed massively; the latter can result in a significant rise in public concern. Mobile phones are used, on the one hand, for a large variety of data usages that require different amounts of data and throughput and, on the other hand, for making phone calls. Voice over IP applications such as Skype have become very popular since they provide cheap international voice communication and can be used on mobile devices. Since LTE systems only support packet services, the voice service uses Voice over LTE technology instead of the classical circuit-switched voice technology used in GSM and UMTS. The main objective of this thesis is to characterize and analyse the influence of new network architectures and usages on the actual human exposure induced by cellular networks. In this regard, several measurement campaigns were carried out in various cities and environments. Regarding EMF exposure in heterogeneous networks, results suggest that by deploying Small cells, the global exposure (i.e. the exposure induced by the mobile phone and the base station antenna) decreases: bringing the antenna closer to the user reduces the power emitted by the mobile phone and the usage duration, owing to the power control schemes implemented in cellular network technologies.
However, the magnitude of the exposure reduction depends on the location of the Small cell with respect to the Macrocells. Moreover, to assess the EMF exposure of indoor users induced by Small cells, two statistical models, based on measurements, are proposed for the uplink and downlink exposures in an LTE heterogeneous environment. The last part of the thesis is devoted to the assessment of exposure for new types of usage through measurements. Results suggest that the uplink power emitted by a mobile phone and its emission duration are highly dependent on the usage and the network technology. Voice calls require a continuous and generally low throughput in order to maintain the communication during the call. On the contrary, for data usage the mobile phone requires higher data rates to perform the task as fast as possible. Therefore, during a voice call, even if the user is using the mobile phone for a relatively long time, the exposure duration is shorter, since the usage does not require large amounts of data. The temporal occupation rate of several types of voice calls for different technologies is assessed through measurements.
APA, Harvard, Vancouver, ISO, and other styles
36

Ganti, Sudhakar N. M. "Access protocols and network architectures for very high-speed optical fiber local area networks." Thesis, University of Ottawa (Canada), 1993. http://hdl.handle.net/10393/6917.

Full text
Abstract:
The single-mode optical fiber possesses an enormous bandwidth of more than 30 THz in the low-loss optical regions of 1.3 $\mu$m and 1.5 $\mu$m. Through Wavelength Division Multiplexing (WDM), the optical fiber bandwidth can be divided into a set of high-speed channels, where each channel is assigned its own unique wavelength. An M x M passive optical star coupler is a simple broadcast medium, in which light energy incident at any input is uniformly coupled (or distributed) to all the outputs. Thus, a passive star along with the WDM channels can be used to configure a Local Area Network (LAN). In this LAN, users require tunable devices to access a complete or a partial set of the WDM channels. Because of these multiple channels, many concurrent packet transmissions corresponding to different user pairs are possible, and thus the total system throughput can be much higher than the data rate of each individual channel. To fairly arbitrate the data channels among the users, media access protocols are needed. Depending upon the number of data channels and the number of users, two situations arise: in the first, the number of users is much larger than the number of data channels; in the second, the number of users equals the number of channels. In both cases, data channel contention may arise if multiple users access the same channel, and it must be resolved. This thesis proposes media access protocols for passive optical star networks. All the proposed protocols are slotted in nature, i.e., the time axis on each channel is divided into slots. The well-known Slotted-ALOHA and Reservation-ALOHA protocols are extended to the multi-channel network environment. The thesis also proposes switching protocols (for an equal number of channels and users) and contention-based reservation protocols for this network architecture. To interconnect these star networks, a multi-control-channel protocol is also proposed along with two interconnection techniques.
Since there are multiple data channels, data packets on different channels may be destined to the same user. However, if the user is equipped with only one receiver, the user can receive only one packet and must ignore the others. This is called a 'receiver collision', and the thesis also studies the effect of these receiver collisions on the data channels. Two network architectures, one for a packet-circulating ring network and the other for a circuit-switched application, are described. Finally, the thesis studies some implementation considerations for these protocols.
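The receiver-collision effect described above is easy to illustrate with a small Monte Carlo sketch. All parameters here (user count, channel count, transmission probability) are hypothetical and do not correspond to the thesis's protocol models; the point is only that packets surviving channel contention can still be lost when two of them target the same single-receiver user in one slot.

```python
import random

def simulate(users=50, channels=10, slots=20000, p=0.08, seed=1):
    """Slotted transmission on multiple WDM channels.

    A packet survives channel contention if it is alone on its channel;
    it additionally survives receiver contention if its destination is
    not targeted by another surviving packet in the same slot (each
    user has a single tunable receiver).
    """
    rng = random.Random(seed)
    ok_channel = ok_receiver = 0
    for _ in range(slots):
        tx = []  # (channel, destination) for each transmitting user
        for u in range(users):
            if rng.random() < p:
                ch = rng.randrange(channels)
                dst = rng.choice([v for v in range(users) if v != u])
                tx.append((ch, dst))
        # channel contention: exactly one transmitter per channel wins
        per_ch = {}
        for ch, dst in tx:
            per_ch.setdefault(ch, []).append(dst)
        winners = [dsts[0] for dsts in per_ch.values() if len(dsts) == 1]
        ok_channel += len(winners)
        # receiver contention: a destination accepts one packet per slot
        ok_receiver += len(set(winners))
    return ok_channel / slots, ok_receiver / slots

if __name__ == "__main__":
    thr_ch, thr_rx = simulate()
    print(f"throughput ignoring receiver collisions: {thr_ch:.2f} packets/slot")
    print(f"throughput with single-receiver limit:   {thr_rx:.2f} packets/slot")
```

The gap between the two numbers is the throughput lost to receiver collisions alone, which is the quantity the thesis studies analytically.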
APA, Harvard, Vancouver, ISO, and other styles
37

Patel, Kavita. "Alternate optical network architectures: SONET or all optical Mesh." Diss., UMK access, 2004.

Find full text
Abstract:
Thesis (M.S.)--School of Computing and Engineering. University of Missouri--Kansas City, 2004.
"A thesis in computer science." Typescript. Advisor: Cory Beard. Vita. Title from "catalog record" of the print edition Description based on contents viewed Feb. 28, 2006. Includes bibliographical references (leaves 117-118). Online version of the print edition.
APA, Harvard, Vancouver, ISO, and other styles
38

Al-Daraiseh, Ahmad. "GENETICALLY ENGINEERED ADAPTIVE RESONANCE THEORY (ART) NEURAL NETWORK ARCHITECTURES." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3171.

Full text
Abstract:
Fuzzy ARTMAP (FAM) is currently considered to be one of the premier neural network architectures for solving classification problems. One of the limitations of Fuzzy ARTMAP that has been extensively reported in the literature is the category proliferation problem: Fuzzy ARTMAP tends to increase its network size as it is confronted with more and more data, especially if the data is of a noisy and/or overlapping nature. To remedy this problem a number of researchers have designed modifications to the training phase of Fuzzy ARTMAP that had the beneficial effect of reducing this phenomenon. In this thesis we propose a new approach to handle the category proliferation problem in Fuzzy ARTMAP by evolving trained FAM architectures. We refer to the resulting FAM architectures as GFAM. We demonstrate through extensive experimentation that an evolved FAM (GFAM) exhibits good (sometimes optimal) generalization and small (sometimes optimal) size, and requires reasonable computational effort to produce an optimal or sub-optimal network. Furthermore, comparisons of GFAM with other approaches proposed in the literature to address the FAM category proliferation problem illustrate that GFAM has a number of advantages: it produces architectures of smaller or equal size, with generalization that is as good or better, at reduced computational complexity. Furthermore, in this dissertation we have extended the approach used with Fuzzy ARTMAP to other ART architectures, such as Ellipsoidal ARTMAP (EAM) and Gaussian ARTMAP (GAM), that also suffer from the ART category proliferation problem. Thus, we have designed and experimented with genetically engineered EAM and GAM architectures, named GEAM and GGAM.
Comparisons of GEAM and GGAM with other ART architectures introduced in the ART literature to address the category proliferation problem illustrate advantages similar to those observed for GFAM (i.e., GEAM and GGAM produce smaller ART architectures, of as good or better generalization, with reduced computational complexity). Moreover, to optimally cover the input space of a problem, we proposed a genetically engineered ART architecture that combines the category structures of two different ART networks, FAM and EAM. We named this architecture UART (Universal ART). We analyzed the order of search in UART, that is, the order according to which a FAM category or an EAM category is accessed in UART. This analysis allowed us to better understand UART's functionality. Experiments were also conducted to compare UART with other ART architectures, in a similar fashion as GFAM and GEAM were compared, and similar conclusions were drawn. Finally, we analyzed the computational complexity of the genetically engineered ART architectures and compared it with the computational complexity of other ART architectures introduced in the literature. This analytical comparison verified our claim that the genetically engineered ART architectures produce better generalization and smaller ART structures, at reduced computational complexity, compared to other ART approaches. In review, a methodology was introduced for combining the answers (categories) of ART architectures using genetic algorithms. This methodology was successfully applied to the FAM, EAM, and combined FAM and EAM architectures, resulting in ART neural networks that outperformed other ART architectures previously introduced in the literature and quite often attained optimal classification results at reduced computational complexity.
Ph.D.
Department of Electrical and Computer Engineering
Engineering and Computer Science
Computer Engineering
APA, Harvard, Vancouver, ISO, and other styles
39

Engel, Jacob. "OFF-CHIP COMMUNICATIONS ARCHITECTURES FOR HIGH THROUGHPUT NETWORK PROCESSORS." Doctoral diss., University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4112.

Full text
Abstract:
In this work, we present off-chip communications architectures for line cards to increase the throughput of the currently used memory system. In recent years there has been a significant increase in memory bandwidth demand on line cards as a result of higher line rates, an increase in deep packet inspection operations, and an unstoppable expansion in lookup tables. As line rates and NPU processing power increase, memory access time becomes the main system bottleneck during data store/retrieve operations. The growing demand for memory bandwidth runs counter to indirect interconnect methodologies. Moreover, solutions to the memory bandwidth bottleneck are limited by physical constraints such as area and NPU I/O pins. Therefore, indirect interconnects are replaced with direct, packet-based networks such as mesh, torus or k-ary n-cubes. We investigate multiple k-ary n-cube based interconnects and propose two variations of the 2-ary 3-cube interconnect, called the 3D-bus and 3D-mesh. All of the k-ary n-cube interconnects include multiple, highly efficient techniques to route, switch, and control packet flows in order to minimize congestion spots and packet loss. We explore the tradeoffs between implementation constraints and performance. We also developed an event-driven interconnect simulation framework to evaluate the performance of packet-based off-chip k-ary n-cube interconnect architectures for line cards. The simulator uses state-of-the-art software design techniques to provide the user with a flexible yet robust tool that can emulate multiple interconnect architectures under non-uniform traffic patterns. Moreover, the simulator offers the user full control over network parameters, performance-enhancing features, and simulation time frames, which makes the platform as close as possible to the real line card's physical and functional properties.
Using our network simulator, we identify the processor-memory configuration, out of multiple configurations, that achieves optimal performance. Moreover, we explore how network enhancement techniques such as virtual channels and sub-channeling improve network latency and throughput. Our performance results show that k-ary n-cube topologies, and especially our modified version of the 2-ary 3-cube interconnect - the 3D-mesh - significantly outperform existing line card interconnects and are able to sustain higher traffic loads. The flow control mechanism proved to extensively reduce hot-spots, load-balance areas of high traffic rate, and achieve a low transmission failure rate. Moreover, it can scale to accommodate more memories and/or processors and, as a result, increase the line card's processing power.
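As a quick illustration of the k-ary n-cube family these interconnects belong to, the neighbor relation can be computed in a few lines. This is an illustrative sketch of the topology only, not the dissertation's simulator; node labels and the function name are my own.

```python
def neighbors(node, k, n):
    """Neighbors of a node in a k-ary n-cube (an n-dimensional torus
    with k nodes per dimension).

    A node is an n-tuple of digits in [0, k); each dimension wraps
    around, giving 2n neighbors when k > 2 (the 2-ary case degenerates
    to a hypercube, where -1 and +1 steps coincide).
    """
    result = []
    for dim in range(n):
        for step in (-1, 1):
            nb = list(node)
            nb[dim] = (nb[dim] + step) % k  # wrap-around link
            nb = tuple(nb)
            if nb != node and nb not in result:
                result.append(nb)
    return result

# A 2-ary 3-cube (the topology underlying the 3D-bus/3D-mesh variants)
# has 2**3 = 8 nodes, each with exactly 3 neighbors.
print(neighbors((0, 0, 0), k=2, n=3))
```

Routing and flow control then operate dimension by dimension over this neighbor relation, which is what keeps the switch logic on each line card small.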
Ph.D.
Department of Electrical and Computer Engineering
Engineering and Computer Science
Computer Engineering
APA, Harvard, Vancouver, ISO, and other styles
40

Sammut, Karl M. "Investigations of linear array architectures for neural network support." Thesis, University of Nottingham, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335891.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Koh, Jin Hou. "Simulation modeling and analysis of device-aware network architectures." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Dec%5FKoh.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Gao, Meihui. "Models and Methods for Network Function Virtualization (NFV) Architectures." Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0025/document.

Full text
Abstract:
Due to the exponential growth of service demands, operators have deployed a great deal of equipment, and network management has consequently become increasingly difficult and costly. Network Function Virtualization (NFV) has been proposed as a new paradigm to reduce acquisition and maintenance costs in telecommunication networks. In this thesis, we are interested in Virtual Network Function (VNF) chaining problems, which combine VNF location decisions with demand routing. From an optimization point of view, this problem is a combination of location problems (for the VNF installation part) and network design problems (for the routing part). Both problems have been widely studied in the literature; their combination, however, raises various modeling and solution challenges. In the first part of this thesis, we consider a realistic version of the VNF chaining problem (VNF-PR) in order to understand the impact of different aspects on network management costs and performance. To this end, we extend the work in [1] by considering more realistic features and constraints of NFV infrastructures, and we propose a linear programming model and a math-heuristic to solve it. In order to better understand the structure of the problem and its properties, the second part of the thesis is devoted to its theoretical study, in which we examine a compact version of the VNF chaining problem. We provide computational complexity results for various topology and capacity cases. We then propose two models and test them on a testbed with more than 100 different instances under different capacity cases.
Finally, we address the scalability of the problem by proposing constructive methods and heuristics based on integer linear programming to efficiently handle large instances (up to 60 nodes and 1800 demands). We show that the proposed heuristics can efficiently solve medium-size instances (with up to 30 nodes and 1000 demands) of difficult capacity cases and find good solutions for hard instances, for which the model cannot provide any solution within a limited computation time.
Due to the exponential growth of service demands, telecommunication networks are populated with a large and increasing variety of proprietary hardware appliances, and this leads to an increase in the cost and the complexity of the network management. To overcome this issue, the NFV paradigm is proposed, which allows dynamically allocating the Virtual Network Functions (VNFs) and therefore obtaining flexible network services provision, thus reducing the capital and operating costs. In this thesis, we focus on the VNF Placement and Routing (VNF-PR) problem, which aims to find the location of the VNFs to allocate optimally resources to serve the demands. From an optimization point of view, the problem can be modeled as the combination of a facility location problem (for the VNF location and server dimensioning) and a network design problem (for the demands routing). Both problems are widely studied in the literature, but their combination represents, to the best of our knowledge, a new challenge. We start working on a realistic VNF-PR problem to understand the impact of different policies on the overall network management cost and performance. To this end, we extend the work in [1] by considering more realistic features and constraints of NFV infrastructures and we propose a linear programming model and a math-heuristic to solve it. In order to better understand the problem structure and its properties, in the second part of our work, we focus on the theoretical study of the problem by extracting a simplified, yet significant variant. We provide results on the computational complexity under different graph topology and capacity cases. Then, we propose two mathematical programming formulations and we test them on a common testbed with more than 100 different test instances under different capacity settings. 
Finally, we address the scalability issue by proposing ILP-based constructive methods and heuristics to deal efficiently with large-size instances (with up to 60 nodes and 1800 demands). We show that our proposed heuristics can efficiently solve medium-size instances (with up to 30 nodes and 1000 demands) of challenging capacity cases and provide feasible solutions for large-size instances of the most difficult capacity cases, for which the models cannot find any solution even with significant computational time.
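The coupling of facility location (where to open VNFs) and routing (which paths demands take) can be seen in miniature with a brute-force enumeration. This is a toy sketch with invented costs and a single VNF type, not the thesis's ILP formulations: every demand must pass through some open VNF node, and opening more VNFs trades installation cost against shorter detours.

```python
from collections import deque
from itertools import combinations

def vnf_placement(nodes, edges, demands, open_cost, hop_cost=1):
    """Exhaustively choose which nodes host a VNF so that opening
    costs plus hop-count routing costs are minimized.  Each demand
    (s, t) is routed s -> VNF -> t via its cheapest open VNF.
    Only suitable for toy instances."""
    def bfs(src):  # hop distances from src
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in edges[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    d = {n: bfs(n) for n in nodes}
    best = (float("inf"), None)
    for r in range(1, len(nodes) + 1):
        for placed in combinations(nodes, r):
            cost = open_cost * r
            for s, t in demands:  # route via the cheapest open VNF
                cost += hop_cost * min(d[s][m] + d[m][t] for m in placed)
            best = min(best, (cost, placed))
    return best

# Toy line network a - b - c - d with three demands.
nodes = ["a", "b", "c", "d"]
edges = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
demands = [("a", "d"), ("d", "a"), ("a", "c")]
print(vnf_placement(nodes, edges, demands, open_cost=2))
```

The exhaustive search is exponential in the number of nodes, which is exactly why the thesis resorts to ILP models and heuristics for instances with tens of nodes and hundreds of demands.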
APA, Harvard, Vancouver, ISO, and other styles
43

Alasadi, Emad Younis. "Enhancing network scalability by introducing mechanisms, architectures and protocols." Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/15874.

Full text
Abstract:
In this thesis, three key issues that restrict networks from scaling up to cope with the rapid increase in traffic are investigated, and a series of approaches for overcoming them is proposed and tested. Firstly, the scalability limitations owing to the use of a broadcast mechanism in one collision domain are discussed. To address this matter, servers under software-defined network architectures for eliminating discovery messages (SSED) are designed in this thesis, and a backbone of floodless packets in an SDN LAN network is introduced. SSED has an innovative mechanism for defining the relationship between the servers and the SDN architecture. Experimental results, after constructing and applying an authentic testbed, verify that SSED improves upon the scalability of the traditional mechanism in terms of the number of switches and hosts. This is achieved by removing broadcast packets from the data and control planes as well as offering a better response time. Secondly, the scalability restrictions arising from the use of routers and the default gateway mechanism are explained. In this thesis, multiple distributed subnets using SDN architecture and servers to eliminate router devices and the default gateway mechanism (MSSERD) are introduced, designed and implemented as the general backbone for scalable multiple-LAN-based networks. MSSERD's proposed components handle address resolution protocol (ARP) discovery packets and general IP packets across different subnets. Moreover, a general view of the network is provided through a multi-subnets discovery protocol (MDP). A testbed of 23 computers is built, and the results verify that MSSERD scales up the number of subnets better than traditional approaches, enhances efficiency significantly, especially under high load, improves performance 2.3 times over legacy mechanisms, and substantially reduces complexity.
Finally, most of the available distributed architectures for different domains are reviewed, and the aggregation discovery mechanism is analysed to establish their impact on network scalability. Subsequently, a general distributed-centralised architecture with an open-level control plane (OLC) and a dynamic discovery hierarchical protocol (DHP) is introduced to provide better scalability in an SDN network. OLC can scale up the network with high performance even under heavy traffic.
APA, Harvard, Vancouver, ISO, and other styles
44

Ancajas, Dean Michael B. "Design of Reliable and Secure Network-On-Chip Architectures." DigitalCommons@USU, 2015. https://digitalcommons.usu.edu/etd/4150.

Full text
Abstract:
Network-on-Chips (NoCs) have become the standard communication platform for future massively parallel systems due to their performance, flexibility and scalability advantages. However, reliability issues brought about by scaling in the sub-20nm era threaten to undermine the benefits offered by NoCs. This dissertation demonstrates design techniques that address both reliability and security issues facing modern NoC architectures. The reliability and security problem is tackled at different abstraction levels using a series of schemes that combine information from the architecture-level as well as hardware-level in order to combat aging effects and meet secure design stipulations while maintaining modest power-performance overheads.
APA, Harvard, Vancouver, ISO, and other styles
45

Franovic, Tin. "Cortex inspired network architectures for spatio-temporal information processing." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142453.

Full text
Abstract:
The abundance of high-dimensional datasets provides scientists with a strong foundation for their research. With high-performance computing platforms becoming increasingly available and more powerful, large-scale data processing represents an important step toward modeling and understanding the underlying processes behind such data. In this thesis, we propose a general cortex-inspired information processing network architecture capable of capturing spatio-temporal correlations in data and forming distributed representations as cortical activation patterns. The proposed architecture has a modular and multi-layered organization which is efficiently parallelized to allow large-scale computations. The network allows unsupervised processing of multivariate stochastic time series, regardless of the data source, producing a sparse, de-correlated representation of the input features expanded by time delays. The features extracted by the architecture are then used for supervised learning with Bayesian confidence propagation neural networks and evaluated on speech classification and recognition tasks. Owing to their rich temporal dynamics, we exploited auditory signals for speech recognition as a use case for performance evaluation. In terms of classification performance, the proposed architecture outperforms modern machine-learning methods such as support vector machines and obtains results comparable to other state-of-the-art speech recognition methods. The potential of the proposed scalable cortex-inspired approach to capture meaningful multivariate temporal correlations and provide insight into a model-free basis for high-dimensional data decomposition is expected to be of particular use in the analysis of large brain signal datasets such as EEG or MEG.
APA, Harvard, Vancouver, ISO, and other styles
46

Mummaneni, Avanthi. "Analysis of the enzymatic network." Diss., Columbia, Mo. : University of Missouri-Columbia, 2005. http://hdl.handle.net/10355/4285.

Full text
Abstract:
Thesis (M.S.)--University of Missouri-Columbia, 2005.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file, viewed on January 22, 2007. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
47

Morley, George David. "Analysis and design of ring-based transport networks." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ60329.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Motiwala, Murtaza. "An architecture for network path selection." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43576.

Full text
Abstract:
Traditional routing protocols select paths based on static link weights and converge to new paths only when there is an outright reachability failure (such as a link or router failure). This design allows routing to scale to hundreds of thousands of nodes, but it comes at the cost of functionality: routing provides only simple, single-path connectivity. Networked applications in the wide area, enterprise, and data center can all benefit from network protocols that allow traffic to be sent over multiple routes en route to a destination. This ability, also called multipath routing, has other significant benefits over single-path routing, such as using network resources more efficiently and recovering more quickly from network disruptions. This dissertation explores the design of an architecture for path selection in the network and proposes a "narrow waist" interface for networks to expose choice in routing traffic to end systems. Because most networks are also business entities, and are sensitive to the cost of routing traffic in their network, this dissertation also develops a framework for exposing paths based on their cost. For this purpose, this dissertation develops a cost model for routing traffic in a network. In particular, this dissertation presents the following contributions:
* Design of path bits, a "narrow waist" for multipath routing. Our work ties together a large number of multipath routing proposals by creating an interface (path bits) for decoupling the multipath routing protocols implemented by the network from the end systems (or other network elements) making a choice for path selection. Path bits permit simple, scalable, and efficient implementations of multipath routing protocols in the network that still provide enough expressiveness for end systems to select alternate paths. We demonstrate that our interface is flexible and leads to efficient network implementations by building prototype implementations on different hardware and software platforms.
* Design of path splicing, a multipath routing scheme. We develop path splicing, a multipath routing technique that uses random perturbations from the shortest path to create an exponentially large number of paths with only a linear increase in state in a network. We also develop a simple interface to enable end systems to make path selection decisions. We present various deployment paths for implementing path splicing in both intradomain and interdomain routing on the Internet.
* Design of a low-cost path-selection framework for a network. Network operators and end systems can have conflicting goals: network operators are concerned with saving cost and reducing traffic uncertainty, while end systems favor better-performing paths. Exposing choice of routing in the network can thus create a tension between the network operators and the end systems. We propose a path-selection framework where end systems make path selection decisions based on path performance and networks expose paths to end systems based on their cost to the network. This thesis presents a cost model for routing traffic in a network to enable network operators to reason about "what-if" scenarios and routing traffic on their network.
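The core idea of path splicing, running shortest-path routing over several randomly perturbed copies of the link weights so that many paths are available with little extra state, can be sketched compactly. The graph, weights, and jitter parameters below are invented for illustration; this is not the dissertation's actual implementation.

```python
import heapq
import random

def dijkstra(adj, src, dst, w):
    """Shortest path from src to dst under weight function w(u, v)."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v in adj[u]:
            nd = d + w(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, u = [dst], dst
    while u != src:
        u = prev[u]
        path.append(u)
    return path[::-1]

def spliced_paths(adj, weights, src, dst, slices=5, jitter=0.5, seed=7):
    """Path-splicing sketch: each routing 'slice' runs shortest-path
    routing on independently perturbed link weights, yielding a set of
    diverse paths an end system could switch between."""
    rng = random.Random(seed)
    paths = {tuple(dijkstra(adj, src, dst, lambda u, v: weights[u, v]))}
    for _ in range(slices):
        noise = {e: 1.0 + jitter * rng.random() for e in weights}
        paths.add(tuple(dijkstra(adj, src, dst,
                                 lambda u, v: weights[u, v] * noise[u, v])))
    return paths

# Toy 4-node network with two nearly equal-cost routes from A to D.
adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
weights = {("A", "B"): 1, ("B", "D"): 1, ("A", "C"): 1.1, ("C", "D"): 1.1,
           ("B", "A"): 1, ("D", "B"): 1, ("C", "A"): 1.1, ("D", "C"): 1.1}
for p in sorted(spliced_paths(adj, weights, "A", "D")):
    print(p)
```

Because each slice only re-weights existing links, the per-router state grows linearly with the number of slices while the number of splice-able end-to-end paths grows much faster.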
APA, Harvard, Vancouver, ISO, and other styles
49

Shah, Zawar (Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW). "Location tracking architectures for wireless VoIP." Publisher: University of New South Wales. Electrical Engineering & Telecommunications, 2009. http://handle.unsw.edu.au/1959.4/43324.

Full text
Abstract:
A research area that has recently gained great interest is the development of network architectures for the tracking of wireless VoIP devices. This is particularly so for architectures based on the popular Session Initiation Protocol (SIP). Previous work in this area, however, does not consider the impact of combined VoIP and tracking on the capacity and call set-up time of the architectures. Previous work also assumes that location information is always available from sources such as GPS, a scenario rarely found in practice. The inclusion of multiple positioning systems in tracking architectures has also not been hitherto explored. It is the purpose of this thesis to design and test SIP-based architectures that address these key issues. Our first main contribution is the development of a tracking-only SIP-based architecture. This architecture is designed for intermittent GPS availability, with wireless network tracking as the back-up positioning technology. Such a combined tracking system is more conducive to deployment in real-world environments. Our second main contribution is the development of SIP-based tracking architectures that are specifically aimed at mobile wireless VoIP systems. A key aspect we investigate is the quantification of the capacity constraints imposed on VoIP-tracking architectures. We identify such capacity limits in terms of SIP call set-up time and VoIP QoS metrics, and determine these limits through experimental measurement and theoretical analyses. Our third main contribution is the development of a novel SIP-based location tracking architecture in which the VoIP application is modified. The key aspect of this architecture is the factor-of-two increase in capacity that it can accommodate relative to architectures utilizing standard VoIP. An important aspect of all our tracking architectures is the Tracking Server. This server supplies the location information in the event of GPS unavailability.
A final contribution of this thesis is the development of novel particle-filter-based tracking algorithms that specifically address the GPS intermittency issue. We show how these filters interact with other features of our SIP-based architectures in a seamless fashion.
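Since the thesis's filters are tied to its SIP architecture, a generic one-dimensional sketch of the underlying idea may help: propagate the motion model every step, but weight and resample the particles only when a position fix arrives. All motion and noise parameters below are invented for illustration.

```python
import math
import random

def track(observations, n_particles=500, motion_std=1.0, gps_std=2.0, seed=3):
    """Particle filter for intermittent position fixes in 1-D.

    Each step propagates a random-walk motion model; when a fix
    (e.g. from GPS) is available the particles are weighted by a
    Gaussian likelihood and resampled, otherwise the filter coasts
    on its prediction alone.
    """
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 5.0) for _ in range(n_particles)]
    estimates = []
    for obs in observations:  # each obs is a float fix or None (outage)
        particles = [p + rng.gauss(0.0, motion_std) for p in particles]
        if obs is not None:
            # Gaussian likelihood of the fix, then multinomial resampling
            w = [math.exp(-0.5 * ((p - obs) / gps_std) ** 2) for p in particles]
            particles = rng.choices(particles, weights=w, k=n_particles)
        estimates.append(sum(particles) / n_particles)
    return estimates

# Target drifts from 0 to 9; the fix drops out during steps 3-6.
obs = [float(t) if t not in (3, 4, 5, 6) else None for t in range(10)]
print([round(e, 1) for e in track(obs)])
```

During the outage the estimate's uncertainty grows with the motion noise, and the first fix after the gap pulls the particle cloud back toward the true position, which is the behaviour such filters rely on under GPS intermittency.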
APA, Harvard, Vancouver, ISO, and other styles
50

Long, Weili. "On the topology design of hose-model VPN networks /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?ECED%202008%20LONG.

Full text
APA, Harvard, Vancouver, ISO, and other styles