Dissertations / Theses on the topic 'In-network computing'

To see the other types of publications on this topic, follow the link: In-network computing.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'In-network computing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Barros, Bruno Medeiros de. "Security architecture for network virtualization in cloud computing." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-18012017-094453/.

Full text
Abstract:
Network virtualization has been a quite active research area in recent years, aiming to tackle the increasing demand for high-performance and secure communication in cloud infrastructures. In particular, such research efforts have led to security solutions focused on improving isolation among the multiple tenants of public clouds, an issue recognized as critical both by the academic community and by the technology industry. More recently, the advent of Software-Defined Networks (SDN) and of Network Function Virtualization (NFV) introduced new concepts and techniques for addressing issues related to the isolation of network resources in multi-tenant clouds while improving network manageability and flexibility. Similarly, hardware technologies such as Single Root I/O Virtualization (SR-IOV) enable network isolation at the hardware level while improving performance in physical and virtual networks. Aiming to provide a cloud network environment that efficiently tackles multi-tenant isolation, we present three complementary strategies for addressing the isolation of resources in cloud networks. These strategies are then applied in the evaluation of existing network virtualization architectures, exposing the security gaps associated with current technologies and paving the path for novel solutions. We then propose a security architecture that builds upon the strategies presented, as well as upon SDN, NFV and SR-IOV technologies, to implement secure cloud network domains. The theoretical and experimental analyses of the resulting architecture show a considerable reduction of the attack surface in tenant networks, with a small impact on tenants' intra-domain and inter-domain communication performance.
APA, Harvard, Vancouver, ISO, and other styles
2

Paraskelidis, Athanasios. "Wireless network segregation utilising modulo in industrial environments." Thesis, University of Portsmouth, 2010. https://researchportal.port.ac.uk/portal/en/theses/wireless-network-segregation-utilising-modulo-in-industrial-environments(ae94690a-560e-4f7b-93d8-130b4873de96).html.

Full text
Abstract:
With the success of wireless technologies in consumer electronics, standard wireless technologies are envisioned for deployment in industrial environments as well. Industrial applications involving mobile subsystems, or simply the desire to save cabling, make wireless technologies attractive. In industrial environments, timing and reliability requirements are well catered for by current wired technologies. When wireless links are included, reliability and timing requirements are significantly more difficult to meet, due to the problems that commonly affect them, such as interference, multipath and attenuation. Since the introduction of the IEEE 802.11 standard, researchers have moved from the concept of deploying a single channel and proposed the utilisation of multiple channels within a wireless network. This new scheme posed a new problem: coordinating the various channels. The majority of the proposed works focus on mechanisms that reduce the adjacent channel interference caused by the use of partially overlapping channels. These mechanisms are mainly algorithms that define rules for allocating channels to the wireless nodes during each transmission. Many of the approaches proposed in recent years share two common disadvantages: they are hard to implement in practice, and they do not take full advantage of the available spectrum, because they use only non-overlapping channels. Industry demands solutions that do not move away from proprietary hardware and software, and any required changes should not limit the availability of support for their networks. This keeps costs low, which is the main factor in industries' decisions to replace their wires with radio links.
The idea proposed in this thesis borrows the concept of network segregation, first introduced for security purposes in wired networks, by dividing a wireless network into smaller independent subnetworks in collaboration with a channel assignment scheme, Modulo. Modulo defines a set of rules that nodes should obey when they transmit data. The utilisation of multiple channels under the guidance of Modulo for each subnetwork proves to improve the performance of an ad-hoc network even in noisy industrial environments with high levels of interference from external sources.
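The abstract does not spell out Modulo's transmission rules, but the name suggests a modulo-arithmetic rotation of nodes over channels. As a loose illustration only, under that assumption (the function name, node ids and slot numbering below are all hypothetical), such a rule might look like:

```python
def modulo_channel(node_id: int, time_slot: int, num_channels: int) -> int:
    """Rotate each node through the available channels over time, so that
    nodes with different ids in the same slot land on different channels."""
    return (node_id + time_slot) % num_channels

# Four nodes transmitting in the same slot spread over four distinct channels.
assignment = [modulo_channel(n, time_slot=0, num_channels=4) for n in range(4)]
```

A rule of this shape needs no per-transmission negotiation, which fits the thesis's stated goal of schemes that are easy to implement on off-the-shelf hardware.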
APA, Harvard, Vancouver, ISO, and other styles
3

Olsson, Eric J. "Literature survey on network concepts and measures to support research in network-centric operations." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FOlsson.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhou, You M. Eng Massachusetts Institute of Technology. "Computing network coordinates in the presence of Byzantine faults." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/46365.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 57-60).
Network coordinate systems allow for efficient construction of large-scale distributed systems on the Internet. Coordinates provide locality information in a compact way, without requiring each node to contact every potential neighbor; distances between two nodes' coordinates represent estimates of the network latency between them. Past work on network coordinates has assumed that all nodes in the system behave correctly. The techniques in these systems do not behave well when nodes are Byzantine. These Byzantine failures, wherein a faulty node can behave arbitrarily, can make the coordinate-based distance estimates meaningless. For example, a Byzantine node can delay responding to some other node, thus distorting that node's computation of its own location. We present a network coordinate system based on landmarks, reference nodes that are used for measurements, some of which may be Byzantine faulty. It scales linearly in the number of clients computing their coordinates and does not require excessive network traffic to allow clients to do so. Our results show that our system is able to compute accurate coordinates even when some landmarks are exhibiting Byzantine faults.
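The abstract does not give the thesis's actual landmark algorithm; the sketch below only illustrates the general robustness principle such systems rely on, namely that a median-style aggregate of measurements cannot be dragged arbitrarily far by a minority of Byzantine reporters (the sample values are invented):

```python
import statistics

def robust_latency(samples):
    """Median aggregation: a minority of arbitrarily wrong (Byzantine)
    reports cannot pull the estimate far from the honest majority."""
    return statistics.median(samples)

honest = [10.0, 11.0, 9.5, 10.5]
with_byzantine = honest + [500.0]   # one faulty landmark reports an absurd delay
```

With a mean instead of a median, the single faulty value would shift the estimate by roughly 100 ms; the median barely moves.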
by You Zhou.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
5

Crellin, Kenneth Thomas. "Network time : synchronisation in real time distributed computing systems." Master's thesis, University of Cape Town, 1998. http://hdl.handle.net/11427/17933.

Full text
Abstract:
In the past, network clock synchronization has been sufficient for the needs of traditional distributed systems, for such purposes as maintaining Network File Systems, enabling Internet mail services and supporting other applications that require a degree of clock synchronization. Increasingly, real-time systems are requiring high degrees of time synchronization. Where this is required, the common approach until now has been to distribute the clock to each processor by means of hardware (e.g. GPS and caesium clocks) or to distribute time by means of an additional dedicated timing network. Whilst this has proved successful for real-time systems, the use of present-day high-speed networks with definable quality of service from the protocol layers has led to the possibility of using the existing data network to distribute the time. This thesis demonstrates that, through system integration and implementation of commercial off-the-shelf (COTS) products, it is possible to distribute and coordinate computer time clocks to the microsecond range, providing synchronization close enough to support real-time systems whilst avoiding the additional time, infrastructure and money needed to build and maintain a specialized timing network.
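The abstract does not name the COTS protocol used, but the standard mechanism for clock distribution over a data network is NTP, whose well-known offset and round-trip-delay computation is shown below (the timestamp values are an invented worked example):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Classic NTP exchange: t1 = client send, t2 = server receive,
    t3 = server send, t4 = client receive (t1, t4 on the client's clock)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated client-to-server clock offset
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
    return offset, delay

# Server clock 5 s ahead, 1 s one-way delay, 0.5 s server processing time.
offset, delay = ntp_offset_delay(100.0, 106.0, 106.5, 102.5)
```

The symmetry assumption behind the formula (equal delay each way) is exactly where network quality of service matters, which is why the thesis leans on QoS-capable high-speed networks.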
APA, Harvard, Vancouver, ISO, and other styles
6

Mousa, Alzawi Mohamed. "Autonomic computing : using adaptive neural network in self-healing systems." Thesis, Liverpool John Moores University, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.571894.

Full text
Abstract:
Self-management is the main objective of Autonomic Computing (AC), and it is needed to increase a running system's reliability, stability, and performance. This requires investigating several issues related to complex systems: self-awareness, detecting when and where an error state occurs, knowledge for system stabilization, analysing the problem, and producing a healing plan with different solutions for adaptation without the need for human intervention. This research work focuses on self-healing, which is the most important component of Autonomic Computing. Self-healing is a technique with different phases, which aims to detect, analyse, and repair existing faults within the system. All of these phases are accomplished in a real-time system. In this approach, the system is capable of performing a reconfiguration action in order to recover from a permanent fault. Moreover, a self-healing system should have the ability to modify its own behaviour in response to changes within the environment. However, some challenges still face the implementation of self-healing in real system adaptation: monitoring, interpretation, resolution, and adaptation. Artificial Neural Networks have been proposed to overcome these challenges. A neural network minimizes the error between the desired response and the actual output by modifying its weights. Furthermore, neural networks have a built-in capability to adapt their weights in non-stationary environments, which is required in real-time problems such as self-healing systems. A recurrent neural network is used to show the ability of neural networks to overcome the challenges associated with self-healing. A modified pipelined neural network is introduced to fulfil the requirements in this field. Two different applications were suggested and used to examine the validity of the research work. A client-server application has shown promising results compared to the outcomes of a feedforward neural network.
Moreover, an overcurrent relay experiment in the field of power systems achieved good results using a pipelined recurrent neural network. The main point of the comparison between the pipelined recurrent neural network and the feedforward neural network is continuous, or online, learning. This is important since autonomic systems aim to monitor system behaviours and apply a suitable reconfiguration plan while the system is running.
APA, Harvard, Vancouver, ISO, and other styles
7

Tasiopoulos, A. "On the deployment of low latency network applications over third-party in-network computing resources." Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10049954/.

Full text
Abstract:
An increasing number of Low Latency Applications (LLAs) in the entertainment (Virtual/Augmented Reality), Internet-of-Things (IoT), and automotive domains require response times that challenge the traditional process of provisioning applications in distant data centres. At the same time, there is a trend towards deploying In-Network Computing Resources (INCRs) closer to end users, either in the form of network equipment capable of performing general-purpose computations and/or in the form of commercial off-the-shelf “data centres in a box”, i.e., cloudlets, placed at different locations of Internet Service Providers (ISPs). That is, INCRs extend cloud computing to the edge and middle-tier locations of the network, providing significantly smaller response times than those achieved by the current “client-to-cloud” network model. In this thesis, we argue for the necessity of exploiting INCRs for application provisioning in order to improve LLAs' Quality of Service (QoS) by deploying applications closer to end users. To this end, this thesis investigates the deployment of LLAs over INCRs under fixed, mobile, and disrupted user-connectivity environments. In order to fully reap the benefits of INCRs, we develop, for each connectivity scenario, algorithmic frameworks centred around the concept of a market in which LLAs lease existing INCRs. The proposed frameworks take into account the particular characteristics of INCRs, such as their limited capacity for hosting application instances, and of LLAs, by determining the number of instances each application should deploy at each computing resource over time. Furthermore, since the smooth operation of network applications is typically supported by Network Functions, such as load balancers, firewalls, etc., we consider the deployment of complementary Virtual Network Functions to back LLAs' provisioning over INCRs.
Overall, the key goal of this thesis is the investigation of using an enhanced Internet through INCRs as the communication platform for LLAs.
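The abstract describes the frameworks only at the level of a market in which LLAs lease capacity-limited INCRs. As a rough illustration of that idea alone (the function, application names and bid values below are invented, not the thesis's mechanism), a greedy single-resource market could look like:

```python
def lease_incrs(bids, capacity):
    """Greedy market sketch: applications bid for instance slots on one
    in-network computing resource; highest bidders win until it is full."""
    winners = []
    for app, bid in sorted(bids.items(), key=lambda kv: -kv[1]):
        if len(winners) == capacity:
            break   # the INCR's limited hosting capacity is exhausted
        winners.append(app)
    return winners

allocation = lease_incrs({"ar_game": 9.0, "iot_hub": 4.0, "vr_chat": 7.0}, capacity=2)
```

The thesis's frameworks additionally handle multiple resources, mobility and time, which this one-shot sketch deliberately omits.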
APA, Harvard, Vancouver, ISO, and other styles
8

Ju, Weiyu. "Mobile Deep Neural Network Inference in Edge Computing with Resource Restrictions." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25038.

Full text
Abstract:
Recent advances in deep neural networks (DNNs) have substantially improved the accuracy of intelligent applications. However, the pursuit of higher accuracy has increased the complexity of DNNs, which inevitably increases inference latency. For many time-sensitive mobile inferences, such a delay is intolerable and could be fatal in many real-world applications. To solve this problem, one effective scheme, known as DNN partition, has been proposed; it significantly improves inference latency by partitioning the DNN between a mobile device and an edge server that jointly process the inference. This approach utilises the stronger computing capacity of the edge while reducing data transmission. Nevertheless, it requires a reliable network connection, which is oftentimes unstable; DNN partition is therefore vulnerable in the presence of service outages. In this thesis, we are motivated to investigate how to maintain the quality of the service during service outages to avoid interruptions. Inspired by the recently developed early-exit technique, we propose three solutions: (1) when the service outage time is predictable, we propose eDeepSave to decide which frames to process during the service outage; (2) when the service outage time is not predictable but relatively short, we design LEE to effectively learn the optimal exit point in a per-instance manner; (3) when the service outage time is not predictable and relatively long, we present the DEE scheme to learn the optimal action (to exit or not) at each exit point, so that the system can dynamically exit the inference by utilising the observed environmental information. For each scheme, we provide detailed mathematical proofs of its performance and then test it in real-world experiments as well as in extensive simulations. The results of the three schemes demonstrate their effectiveness in maintaining the service during a service outage under a variety of scenarios.
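The thesis's LEE and DEE schemes learn when to exit; the baseline early-exit idea they build on is simpler, and can be sketched as a fixed confidence threshold at a branch classifier (the logit values and threshold below are invented for illustration, not taken from the thesis):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of branch-classifier logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def should_exit(logits, threshold=0.9):
    """Fixed-threshold early exit: stop the inference at this branch when
    the branch classifier is already confident enough."""
    return max(softmax(logits)) >= threshold

confident = [8.0, 0.1, 0.2]    # one class dominates -> exit at this branch
uncertain = [1.0, 0.9, 0.8]    # close call -> keep computing deeper layers
```

During an outage, exiting at an on-device branch like this avoids the unreachable edge server entirely, at some cost in accuracy, which is the trade-off the learned schemes manage.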
APA, Harvard, Vancouver, ISO, and other styles
9

Nowshin, Fabiha. "Spiking Neural Network with Memristive Based Computing-In-Memory Circuits and Architecture." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103854.

Full text
Abstract:
In recent years neuromorphic computing systems have achieved a lot of success due to their ability to process data much faster and using much less power than traditional Von Neumann computing architectures. There are two main types of Artificial Neural Networks (ANNs): the Feedforward Neural Network (FNN) and the Recurrent Neural Network (RNN). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs are an improved version of ANNs that mimic biological neurons closely through the emission of spikes. This shows significant advantages in terms of power and energy when carrying out data-intensive applications by allowing spatio-temporal information processing. On the other hand, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computations in neuromorphic hardware. A particular eNVM technology, memristors, has received wide attention due to their scalability, compatibility with CMOS technology, and low power consumption. In this work we develop a spiking neural network by incorporating an inter-spike interval encoding scheme to convert the incoming input signal to spikes, and use a memristive crossbar to carry out in-memory computing operations. We develop a novel input and output processing engine for our network and demonstrate its spatio-temporal information processing capability. We demonstrate an accuracy of 100% with our design through a small-scale hardware simulation for digit recognition, and an accuracy of 87% in software through MNIST simulations.
M.S.
In recent years neuromorphic computing systems have achieved a lot of success due to their ability to process data much faster and using much less power than traditional Von Neumann computing architectures. Artificial Neural Networks (ANNs) are models that mimic biological neurons, where artificial neurons, or neurodes, are connected together via synapses, similar to the nervous system in the human body. There are two main types of ANNs: the Feedforward Neural Network (FNN) and the Recurrent Neural Network (RNN). In this thesis we first study the types of RNNs and then move on to Spiking Neural Networks (SNNs). SNNs are an improved version of ANNs that mimic biological neurons closely through the emission of spikes. This shows significant advantages in terms of power and energy when carrying out data-intensive applications by allowing spatio-temporal information processing. On the other hand, emerging non-volatile memory (eNVM) technology is key to emulating neurons and synapses for in-memory computations in neuromorphic hardware. A particular eNVM technology, memristors, has received wide attention due to their scalability, compatibility with CMOS technology, and low power consumption. In this work we develop a spiking neural network by incorporating an inter-spike interval encoding scheme to convert the incoming input signal to spikes, and use a memristive crossbar to carry out in-memory computing operations. We demonstrate the accuracy of our design through a small-scale hardware simulation for digit recognition, and demonstrate an accuracy of 87% in software through MNIST simulations.
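The thesis's inter-spike interval encoder is not specified in the abstract beyond converting input signals to spikes; one common way to realise ISI encoding, shown here purely as an assumed sketch (the value ranges and interval bounds are invented), maps a stronger input to a shorter gap between spikes:

```python
def isi_encode(value, v_min=0.0, v_max=1.0, t_min=2.0, t_max=20.0):
    """Map an analogue input onto an inter-spike interval (ms): stronger
    inputs spike more often, i.e. with shorter gaps between spikes."""
    x = (value - v_min) / (v_max - v_min)   # normalise the input to [0, 1]
    return t_max - x * (t_max - t_min)      # high input -> short interval
```

Because the information lives in the timing of spikes rather than in dense activations, downstream crossbar operations only fire on spike events, which is where the power savings come from.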
APA, Harvard, Vancouver, ISO, and other styles
10

Shafaatdoost, Mani. "Approaches to Provisioning Network Topology of Virtual Machines in Cloud Systems." FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/784.

Full text
Abstract:
Current infrastructure-as-a-service (IaaS) cloud systems allow users to load their own virtual machines. However, most of these systems do not provide users with an automatic mechanism for loading a network topology of virtual machines. To specify and implement the network topology, we use software switches and routers as network elements. Before running a group of virtual machines, the user sets up the system once to specify a network topology of virtual machines. Then, given the user's request to run a specific topology, our system loads the appropriate virtual machines (VMs) and also runs separate VMs as software switches and routers. Furthermore, we have developed a manager that handles physical hardware failures. This system has been designed to allow users to use the system without knowing all the internal technical details.
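The abstract does not show the thesis's topology specification format; as a hypothetical sketch of the provisioning flow it describes (all names, `sw0`, `r0`, `web`, `db`, and the `boot_order` helper are invented), a declarative topology with network elements brought up before tenant VMs might look like:

```python
# Hypothetical declarative topology: tenant VMs plus the separate VMs that
# act as software switches and routers, as the abstract describes.
topology = {
    "switches": ["sw0"],
    "routers": ["r0"],
    "vms": {"web": "sw0", "db": "sw0"},   # VM -> the switch it attaches to
    "links": [("sw0", "r0")],             # switch uplink to the router
}

def boot_order(topo):
    """Network elements come up before the tenant VMs that depend on them."""
    return topo["switches"] + topo["routers"] + list(topo["vms"])
```

A one-time specification like this is what lets the system reload the same topology automatically on later requests, or after the failure manager replaces a physical host.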
APA, Harvard, Vancouver, ISO, and other styles
11

Tay, Chee Bin Mui Whye Kee. "An architecture for network centric operations in unconventional crisis : lessons learnt from Singapore's SARS experience /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Dec%5FTay.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Thomeczek, Gregor. "Data centric resource and capability management in modern network enabled vehicle fleets." Thesis, University of Brighton, 2015. https://research.brighton.ac.uk/en/studentTheses/36663467-e75e-4c04-bb60-fe5c2062d404.

Full text
Abstract:
The objective of this thesis is to improve battlefield communications capability through improved management of existing platform- and fleet-level resources. Communication is a critical capability for any platform node deployed on a modern battlefield and enables vital Network Enabled Capabilities (NEC). However, the dynamicity and unpredictability of wireless battlefield networks, as well as the constant threat of equipment damage, make such networks inherently unreliable, and as such the provision of stable communication represents a significant technology management challenge. Fulfilling the increasingly complex communications requirements of diverse platform types in a chaotic and changing battlefield environment requires the use of novel Resource and Capability Management Algorithms (RCMAs), informed by application-level context data, to manage limited heterogeneous resources at the platform and fleet levels while fulfilling current mission goals.
APA, Harvard, Vancouver, ISO, and other styles
13

Brandon, Lawrence. "Allowing the advantaged user in a network centric system to get through the disadvantaged interface." Thesis, Monterey, California : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Sep/09Sep%5FBrandon.pdf.

Full text
Abstract:
Thesis (M.S. in Systems Engineering)--Naval Postgraduate School, September 2009.
Thesis Advisor(s): Goshorn, Rachel. "September 2009." Description based on title screen as viewed on November 05, 2009. Author(s) subject terms: Networks, Systems, Disadvantaged Interfaces, Network-Centric Systems, Mitigating disadvantaged interfaces. Includes bibliographical references (p. ). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
14

Mahmoud, Qusay H. "Evolution of network computing paradigms : applications of mobile agents in wired and wireless networks." Thesis, Middlesex University, 2002. http://eprints.mdx.ac.uk/10745/.

Full text
Abstract:
The World Wide Web (or Web for short) is the largest client-server computing system commonly available, used through its widely accepted universal client (the Web browser), which uses a standard communication protocol known as the HyperText Transfer Protocol (HTTP) to display information described in the HyperText Markup Language (HTML). The current Web computing model allows the execution of server-side applications such as Servlets and client-side applications such as Applets. However, it offers limited support for another model of network computing in which users would be able to use remote, and perhaps more powerful, machines for their computing needs. The client-server model enables anyone with a Web-enabled device, from desktop computers to cellular telephones, to retrieve information from the Web. In today's information society, however, users are overwhelmed by the information with which they are confronted on a daily basis. For subscribers of mobile wireless data services, this may present a problem. Wireless handheld devices, such as cellular telephones, are connected via wireless networks that suffer from low bandwidth and have a greater tendency for network errors. In addition, wireless connections can be lost or degraded by mobility. Therefore, there is a need for entities that act on behalf of users to simplify the tasks of discovering and managing network computing resources. It has been said that software agents are a solution in search of a problem. Mobile agents, however, are inherently distributed in nature, and therefore they represent a natural view of a distributed system. They provide an ideal mechanism for implementing complex systems, and they are well suited for applications that are communications-centric, such as Web-based network computing. Another attractive area for mobile agents is processing data over unreliable networks (such as wireless networks).
In such an environment, the low-reliability network can be used to transfer agents rather than a chunk of data. The agent can travel to the nodes of the network, collect or process information without the risk of network disconnection, then return home. The publications of this doctorate by published works report on research undertaken in the area of distributed systems, with emphasis on network computing paradigms, Web-based distributed computing, and the applications of mobile agents in Web-based distributed computing and wireless computing. The contributions of this collection of related papers can be summarized in four points. First, I have shown how to extend the Web to include computing resources; to illustrate the feasibility of my approach, I have constructed a proof-of-concept implementation. Second, a mobile agent-based approach to Web-based distributed computing, which harnesses the power of the Web as a computing resource, has been proposed and a system has been prototyped. This means that users will be able to use remote machines to execute their code, but it introduces a security risk: I need to make sure that malicious users cannot harm the remote system. For this, a security policy design pattern for mobile Java code has been developed. Third, a mediator-based approach to wireless client/server computing has been proposed and guidelines for implementing it have been published. This approach allows access to Internet services and distributed object systems from resource-constrained handheld wireless devices such as cellular telephones. Fourth and finally, a mobile agent-based approach to the Wireless Internet has been designed and implemented. In this approach, remote mobile agents can be accessed and used from wireless handheld devices.
Handheld wireless devices will benefit greatly from this approach since it overcomes wireless network limitations such as low bandwidth and disconnection, and enhances the functionality of services by being able to operate without constant user input.
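The "transfer the agent, not the data" idea above can be illustrated with a toy itinerary-style agent (this class and its interface are invented for illustration; the thesis's published systems use mobile Java code, not this sketch):

```python
class MobileAgent:
    """Toy itinerary agent: computes at each node it visits and carries the
    results home, instead of streaming raw data over an unreliable link."""
    def __init__(self, task):
        self.task = task        # the computation to run at each node
        self.results = []       # state the agent carries with it

    def visit(self, node_data):
        self.results.append(self.task(node_data))   # run the task locally

    def report(self):
        return self.results     # delivered once, back at the home node

agent = MobileAgent(task=sum)
for node_data in ([1, 2], [3, 4], [5]):
    agent.visit(node_data)
```

Only the agent and its small result list ever cross the wireless link, which is why the approach tolerates low bandwidth and mid-itinerary disconnection.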
APA, Harvard, Vancouver, ISO, and other styles
15

Park, Hee Yong. "Peer List Update Manager (PLUM) implementation in Open Computing Exchanging and Arbitration Network (OCEAN)." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE1000151.

Full text
Abstract:
Thesis (M.S.)--University of Florida, 2002.
Title from title page of source document. Document formatted into pages; contains ix, 44 p.; also contains graphics. Includes vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
16

AGUIARI, DAVIDE. "Exploring Computing Continuum in IoT Systems: Sensing, Communicating and Processing at the Network Edge." Doctoral thesis, Università degli Studi di Cagliari, 2021. http://hdl.handle.net/11584/311478.

Full text
Abstract:
As the Internet of Things (IoT), originally comprising only a few simple sensing devices, reached 34 billion units by the end of 2020, IoT devices can no longer be defined as merely monitoring sensors. IoT capabilities have improved in recent years as relatively large internal computation and storage capacity have become a commodity. In the early days of IoT, processing and storage were typically performed in the cloud. New IoT architectures are able to perform complex tasks directly on-device, thus enabling the concept of an extended computational continuum. Real-time critical scenarios, e.g. autonomous vehicle sensing, area surveying, or disaster rescue and recovery, require all the actors involved to be coordinated and to collaborate without human interaction towards a common goal, sharing data and resources, even in areas with intermittent network coverage. This poses new problems in distributed systems, resource management, device orchestration, as well as data processing. This work proposes a new orchestration and communication framework, namely C-Continuum, designed to manage resources in heterogeneous IoT architectures across multiple application scenarios. This work focuses on two key sustainability macro-scenarios: (a) environmental sensing and awareness, and (b) electric mobility support. In the first case, a mechanism to measure air quality over a long period of time for different applications at global scale (3 continents, 4 countries) is introduced. The system has been developed in-house, from the sensor design to the mist-computing operations performed by the nodes. In the second scenario, a technique to transmit large amounts of fine-time-granularity battery data from a moving vehicle to a control centre is proposed, jointly with the ability to allocate tasks on demand within the computing continuum.
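The abstract mentions on-demand task allocation within the computing continuum without detailing the policy; as a purely illustrative sketch of the underlying trade-off (the function, thresholds and RTT figures are all invented), a minimal latency-driven placement rule might read:

```python
def place_task(latency_budget_ms, edge_has_capacity, cloud_rtt_ms=80.0):
    """Naive placement: run on the edge node when the deadline cannot
    absorb the cloud round trip and the edge has spare capacity."""
    if latency_budget_ms < cloud_rtt_ms and edge_has_capacity:
        return "edge"
    return "cloud"
```

Real continuum orchestrators such as the one this thesis proposes also weigh energy, intermittent connectivity and data locality, which this two-way rule omits.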
APA, Harvard, Vancouver, ISO, and other styles
17

Aguiari, Davide. "Exploring Computing Continuum in IoT Systems : sensing, communicating and processing at the Network Edge." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS131.

Full text
Abstract:
As the Internet of Things (IoT), originally comprising only a few simple sensing devices, reaches 34 billion units by the end of 2020, these devices can no longer be regarded as mere monitoring sensors. IoT capabilities have improved in recent years as relatively large on-board computation and storage capacity have become a commodity. In the early days of IoT, processing and storage were typically performed in the cloud. New IoT architectures are able to perform complex tasks directly on-device, thus enabling the concept of an extended computational continuum. Real-time critical scenarios, e.g. autonomous vehicle sensing, area surveying, or disaster rescue and recovery, require all the actors involved to coordinate and collaborate without human interaction toward a common goal, sharing data and resources even in areas with intermittent network coverage. This poses new problems in distributed systems, resource management, device orchestration, and data processing. This work proposes a new orchestration and communication framework, namely C-Continuum, designed to manage resources in heterogeneous IoT architectures across multiple application scenarios. The work focuses on two key sustainability macro-scenarios: (a) environmental sensing and awareness, and (b) electric mobility support. In the first case, a mechanism to measure air quality over a long period of time for different applications at global scale (3 continents, 4 countries) is introduced. The system has been developed in-house, from the sensor design to the mist-computing operations performed by the nodes. In the second scenario, a technique to transmit large amounts of fine-time-granularity battery data from a moving vehicle to a control center is proposed, jointly with the ability to allocate tasks on demand within the computing continuum.
APA, Harvard, Vancouver, ISO, and other styles
18

DE, BENEDICTIS MARCO. "Security and trust in a Network Functions Virtualisation Infrastructure." Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2842509.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Blatt, Nicole Ilene. "Trust and influence in the information age : operational requirements for network centric warfare /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Dec%5FBlatt.pdf.

Full text
Abstract:
Thesis (M.A. in Security Studies (Defense Decision-Making and Planning))--Naval Postgraduate School, Dec. 2004.
Thesis Advisor(s): Dorothy Denning, Scott Jasper. Includes bibliographical references (p. 93-96). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
20

Veisllari, Raimena. "Employing Ethernet Multiple Spanning Tree Protocol in an OpMiGua network." Thesis, Norwegian University of Science and Technology, Department of Telematics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10913.

Full text
Abstract:
Hybrid optical packet/circuit switched networking architectures are an increasingly interesting research field. They combine the high resource utilization of statistically multiplexed packet-switched networks with the low processing requirements and guaranteed quality of service provided by circuit-switched networks. The aim of this thesis is to integrate the OpMiGua hybrid optical network with Ethernet. Specifically, the work focuses on the compatibility of Ethernet's loop-free topology protocols with the redundant multiple traffic-service paths of OpMiGua. We analyse the problems and limitations imposed on the network architecture and propose our topology solution, called SM chain-connectivity. The analysis and the proposed schemes are verified by results obtained from simulations. Furthermore, we design an integrated logical OpMiGua node that relies on an Ethernet switch instead of the Optical Packet Switch for the statistically multiplexed traffic. To date, to our knowledge, there are no studies analysing the compatibility of Ethernet and its protection mechanisms in a hybrid optical network; this is the first work addressing the use of Ethernet in OpMiGua.
APA, Harvard, Vancouver, ISO, and other styles
21

Patnaik, Somani. "An electrical network model for computing current distribution in a spirally wound lithium ion cell." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/85400.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2012.
"September 2012." Page 74 blank. Cataloged from PDF version of thesis.
Includes bibliographical references.
Lithium-ion batteries are the most viable option for electric vehicles, but they still have significant limitations. Safety is one of the concerns that must be addressed before these batteries are used in mainstream vehicles, because of heating issues that may lead to thermal runaway. This work aims to supplement the existing electrochemical heat-distribution model of a spirally wound lithium-ion battery with an electrical network that models the heat losses due to the electric resistances of the current collectors. The developed electrical network model is used to calculate the current and state-of-charge distribution throughout the spiral jelly roll, which can in turn be used to determine electric heat losses. The results obtained from this model can then be used to optimize the shape and dimensions of the current collectors as well as the materials used in them.
by Somani Patnaik.
M. Eng.
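The nodal-analysis idea behind such an electrical network model can be sketched in a few lines: build the conductance (Laplacian) matrix of a small resistor network, ground one node, and solve Kirchhoff's current-law equations for the node voltages, from which the branch currents follow by Ohm's law. This is a generic illustration; the network, resistances and source current below are invented and are not taken from the thesis's cell model.

```python
import numpy as np

# Toy resistive network: branches given as (node_a, node_b, resistance_ohms).
# All values are illustrative, not taken from the thesis.
branches = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 3, 2.0), (1, 3, 4.0)]
n = 4

G = np.zeros((n, n))                      # conductance (Laplacian) matrix
for a, b, r in branches:
    g = 1.0 / r
    G[a, a] += g; G[b, b] += g
    G[a, b] -= g; G[b, a] -= g

I = np.zeros(n)
I[0], I[3] = 1.0, -1.0                    # inject 1 A at node 0, extract at node 3

# Ground node 3 (drop its row/column) so the reduced system is non-singular.
keep = [0, 1, 2]
v = np.zeros(n)
v[keep] = np.linalg.solve(G[np.ix_(keep, keep)], I[keep])

# Branch currents by Ohm's law; by KCL they sum to 1 A out of node 0.
for a, b, r in branches:
    print(f"I({a}->{b}) = {(v[a] - v[b]) / r:+.3f} A")
```

The same pattern scales to the thousands of nodes of a discretised jelly roll: only the matrix assembly grows, the solve stays a single sparse linear system.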
APA, Harvard, Vancouver, ISO, and other styles
22

AVINO, GIUSEPPE. "Development and Performance Evaluation of Network Function Virtualization Services in 5G Multi-Access Edge Computing." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2875737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Tay, Chee Bin, and Whye Kee Mui. "An architecture for network centric operations in unconventional crisis: lessons learnt from Singapore's SARS experience." Thesis, Monterey, California. Naval Postgraduate School, 2004. http://hdl.handle.net/10945/1303.

Full text
Abstract:
Approved for public release, distribution is unlimited
Singapore and many parts of Asia were hit by Severe Acute Respiratory Syndrome (SARS) in March 2003. The spread of SARS led to a rapidly deteriorating and chaotic situation. Because SARS was a new infection, there was no prior knowledge that could be referenced to tackle such a complex, unknown and rapidly changing problem. Fortunately, through sound measures coupled with good leadership, quick action and inter-agency cooperation, the situation was quickly brought under control. This thesis uses the SARS incident as a case study to identify a set of network-centric warfare methodologies and technologies that can be leveraged to facilitate the understanding and management of complex and rapidly changing situations. The same set of methodologies and technologies can also be selectively reused and extended to handle other situations in asymmetric and unconventional warfare.
Office of Force Transformation, DoD US Future Systems Directorate, MINDEF Singapore.
Lieutenant, Republic of Singapore Army
Civilian, Defence Science and Technology Agency, Singapore
APA, Harvard, Vancouver, ISO, and other styles
24

Jimison, David M. "The dilution of avant-garde subcultural boundaries in network society." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53471.

Full text
Abstract:
This dissertation identifies the diluting effects that network society has had on the avant-garde subcultures, by first building a framework through which to understand the social structure and spatial production of the historical avant-garde, and then comparing this with contemporary avant-garde movements. The avant-garde is a cultural tradition that originated in modern 18th century Europe and North America, that critically responds to hegemonic power structures and mainstream cultural assumptions. I use the term “avant-garde subcultures” because my research focuses on the entire social group of the avant-garde. Most scholarship on the avant-garde has overlooked the importance that social relations, in particular supportive actors, and collaborative spaces have served in the creativity of the avant-garde. During the past twenty years, as society has shifted into a dependence on networked interactive technologies, the boundaries which protect these avant-garde spaces and social relations were diluted. As a result, avant-garde subcultures have entered a phase of recursively repeating themselves and culturally stagnating. I begin by reviewing the historical avant-garde and subcultures, building an overarching theory that explains that avant-garde is a type of subculture. Using past scholarship that maps the conceptual lineage from early bohemians to 1970s punk rock, I synthesize a set of traits which all avant-garde subcultures exhibit, and which can be used to build their genealogy. I then extend this genealogy to contemporary art practitioners, to prove that the avant-garde tradition continues to this day. Next, I develop a philosophical understanding of the importance of space for hegemonic power structures, based largely on the work of Henri Lefebvre. I explain how avant-garde subcultures produce spaces of representation in the cafes, bars and night clubs they inhabit, which challenge hegemony by being different from normal values and aesthetics. 
I reference first-hand accounts of these spaces of representation, to show how they enable the collaboration and creative thinking that is most often associated with the avant-garde. The avant-garde protect these spaces through a set of cultural boundaries: fashion, slang, esoteric knowledge, accumulation, and physical space. Manuel Castells's concept of network society depicts how hegemonic power structures have become pervasive, and thus can overcome the boundaries of avant-garde subcultures. As a result, avant-garde subcultures have increasingly become retrogressive and fluid. Some avant-garde practitioners, such as tactical media, have evolved methods for addressing these problems. While these are effective in continuing the avant-garde tradition of introducing difference, there are no adequate methods for producing new spaces of representation. I examine Eyebeam, an arts and technology center, which has since 1997 provided a space for many contemporary practitioners. While unique in its circumstances, Eyebeam has adopted several processes which have enabled it to overcome the diluting effects of network society, thereby providing a potential model for building future spaces of representation.
APA, Harvard, Vancouver, ISO, and other styles
25

Sadat, Mohammad Nazmus. "QoE-Aware Video Communication in Emerging Network Architectures." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin162766498933367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Shinde, Swapnil Sadashiv. "Radio Access Network Function Placement Algorithms in an Edge Computing Enabled C-RAN with Heterogeneous Slices Demands." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20063/.

Full text
Abstract:
Network slicing provides a scalable and flexible solution for resource allocation with performance guarantees and isolation from other services in the 5G architecture. 5G has to handle several active use cases with different requirements, and a single solution satisfying all the extreme requirements would demand an over-specified, high-cost network architecture. Moreover, to fulfill its requirements, each service requires different resources from the radio access network (RAN), edge, and central offices of the 5G architecture, and hence different deployment options. Network function virtualization allows RAN functions to be allocated to different nodes: URLLC services require function placement near the RAN to meet their low-latency requirement, while eMBB services can be implemented in the cloud. An arbitrary allocation of network functions across services is therefore not possible. We aim to develop algorithms that find service-aware placements of RAN functions in a multi-tenant environment with heterogeneous demands. We consider three generic slice classes, eMBB, URLLC and mMTC; every slice is characterized by specific requirements, while the nodes and links are resource-constrained. The function placement problem amounts to minimizing the overall cost of allocating the different functions to the different nodes, organized in layers, while respecting the requirements of the given slices. Specifically, we propose three algorithms based on the normalized preference associated with each slice on the different layers of the RAN architecture. The maximum preference algorithm places the functions in the most preferred position defined in the preference matrix, while the proposed modified preference algorithm provides solutions by keeping track of the availability of computational resources and the latency requirements of the different services. We also use an exhaustive search method to solve the function allocation problem.
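As a rough illustration of the preference-driven placement described above, the sketch below greedily assigns each slice to its most-preferred RAN layer that still has capacity and meets the slice's latency bound, in the spirit of the modified preference algorithm. All layer names, capacities, latencies and preference orders are invented for the example; the thesis's actual preference matrices and cost model are not reproduced here.

```python
# Hypothetical preference-driven placement sketch (all values invented).
layers = {                      # RAN layers: CPU capacity and access latency (ms)
    "cell":    {"cpu": 4,  "latency": 1},
    "edge":    {"cpu": 8,  "latency": 5},
    "central": {"cpu": 32, "latency": 20},
}
slices = {                      # per-slice demand, latency bound, layer preference
    "URLLC": {"cpu": 3,  "max_latency": 2,  "pref": ["cell", "edge", "central"]},
    "eMBB":  {"cpu": 10, "max_latency": 25, "pref": ["central", "edge", "cell"]},
    "mMTC":  {"cpu": 5,  "max_latency": 25, "pref": ["edge", "central", "cell"]},
}

def place(slices, layers):
    """Greedy pass: each slice takes its most-preferred feasible layer."""
    free = {name: layer["cpu"] for name, layer in layers.items()}
    placement = {}
    for s, demand in slices.items():
        for name in demand["pref"]:
            feasible = (free[name] >= demand["cpu"]
                        and layers[name]["latency"] <= demand["max_latency"])
            if feasible:
                placement[s] = name
                free[name] -= demand["cpu"]
                break
        else:
            placement[s] = None   # no feasible layer: demand rejected
    return placement

print(place(slices, layers))
# With these numbers: URLLC -> cell, eMBB -> central, mMTC -> edge.
```

An exhaustive search would instead enumerate every slice-to-layer assignment and keep the cheapest feasible one, which is what the greedy pass approximates.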
APA, Harvard, Vancouver, ISO, and other styles
27

Ramagoffu, Madisa Modisaotsile. "The impact of network related factors on Internet based technology in South Africa : a cloud computing perspective." Diss., University of Pretoria, 2012. http://hdl.handle.net/2263/22820.

Full text
Abstract:
Outsourcing, consolidation and cost savings of IT services are increasingly becoming an imperative source of competitive advantage and a great challenge for most local and global businesses. These challenges affect not only consumers, but also the service provider community. As IT slowly becomes commoditised, consumers such as business organisations increasingly expect IT services that mimic other utility services such as water, electricity and telecommunications. To this end, no single model has been able to emulate these utilities in the computing arena. Cloud Computing is the recent computing phenomenon that attempts to answer most business IT requirements. This phenomenon is gaining traction in the IT industry, with a promise of advantages such as cost reduction, elimination of upfront capital outlay, pay-per-use models, shared infrastructure, and high flexibility allowing users and providers to handle high elasticity of demand. The critical success factor that remains unanswered for most IT organisations and their management is: what is the effect of communication network factors on Internet-based technology such as Cloud Computing, given the emerging-market context? This study therefore investigates the effect of four communication network factors (price, availability, reliability and security) on the adoption of Cloud Computing by IT managers in a South African context, including their propensity to adopt the technology. The study reviews numerous technology adoption theories, from which the Technology, Organisation and Environment (TOE) framework is selected for its organisational, rather than individual, focus. Based on the results, this study proposes that bandwidth (pricing and security) should be included in any adoption model that involves services running on the Internet.
The study attempts to contribute to the emerging literature on Cloud Computing and the Internet in South Africa, in addition to offering organisations considering adoption, and Cloud Providers, significant ideas to consider regarding Cloud Computing adoption.
Dissertation (MBA)--University of Pretoria, 2012.
Gordon Institute of Business Science (GIBS)
unrestricted
APA, Harvard, Vancouver, ISO, and other styles
28

Cazin, Nicolas. "A replay driven model of spatial sequence learning in the hippocampus-prefrontal cortex network using reservoir computing." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE1133/document.

Full text
Abstract:
As rats learn to search for multiple sources of food or water in a complex environment, processes of spatial sequence learning and recall in the hippocampus (HC) and prefrontal cortex (PFC) take place. Recent studies (De Jong et al. 2011; Carr, Jadhav, and Frank 2011) show that spatial navigation in the rat hippocampus involves the replay of place-cell firing during awake and sleep states, generating small contiguous subsequences of spatially related place-cell activations that we will call "snippets". These "snippets" occur primarily during sharp-wave-ripple (SPWR) events. Much attention has been paid to replay during sleep in the context of long-term memory consolidation. Here we focus on the role of replay during the awake state, as the animal is learning across multiple trials. We hypothesize that these "snippets" can be used by the PFC to achieve multi-goal spatial sequence learning. We propose to develop an integrated model of HC and PFC that is able to form place-cell activation sequences based on snippet replay. The proposed collaborative research will extend an existing spatial cognition model for simpler goal-oriented tasks (Barrera and Weitzenfeld 2008; Barrera et al. 2015) with a new replay-driven model for memory formation in the hippocampus and spatial sequence learning and recall in the PFC. In contrast to existing work on sequence learning that relies heavily on sophisticated learning algorithms and synaptic modification rules, we propose to use an alternative computational framework known as reservoir computing (Dominey 1995), in which large pools of prewired neural elements process information dynamically through reverberations. This reservoir computational model will consolidate snippets into larger place-cell activation sequences that may later be recalled by subsets of the original sequences. The proposed work is expected to generate a new understanding of the role of replay in memory acquisition in complex tasks such as sequence learning.
That operational understanding will be leveraged and tested on an embodied-cognitive real-time framework of a robot, related to the animat paradigm (Wilson 1991) [etc...]
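The reservoir computing principle referred to above can be illustrated with a minimal echo state network: a large, fixed, randomly connected pool of neurons processes an input sequence through its reverberating dynamics, and only a linear readout is trained (here by ridge regression, on a toy next-step prediction task). The sizes, scaling and task below are invented for illustration and bear no relation to the actual HC-PFC model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: the recurrent connectivity is never trained.
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u, collect the states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a sine wave.
u = np.sin(np.linspace(0, 8 * np.pi, 400))
X, y = run_reservoir(u[:-1]), u[1:]

# Train only the linear readout, by ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

pred = X @ W_out
print("mean squared error:", np.mean((pred - y) ** 2))
```

The appeal for sequence consolidation is exactly what the abstract notes: no synaptic modification rule inside the pool, only a cheap linear fit on top of its dynamics.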
APA, Harvard, Vancouver, ISO, and other styles
29

Colajanni, Gabriella. "Constrained Optimization Problems in Network Models." Doctoral thesis, Università di Catania, 2019. http://hdl.handle.net/10761/4105.

Full text
Abstract:
Operations Research is the field of mathematics that deals with solving various application problems. Constrained optimization problems form one of the most important and useful areas of mathematics, particularly in Operations Research. In this thesis, we focus our attention on mathematical models of decision problems that are all based on networks and applied to different real situations. We analyze different thematic areas, Cloud Computing, financial markets, business management and cybersecurity, and for each of them we formulate the associated linear or nonlinear constrained problems, which allows us to solve the decision problems related to the specific applications. The purpose of one of the mathematical models in this thesis is to represent a cloud environment. This model allows us to identify a rational strategy for reaching a final goal: maximizing the IaaS provider's profit. We obtain a mixed-integer nonlinear programming problem, which can be solved through the proposed computational algorithm. A second step is the linearization of the problem. The effectiveness of the model and of the algorithm is tested by comparing the final data with the results obtained by solving the linearized problem with existing software. Another topic dealt with in depth in this thesis is the financial market. We study optimization models based on networks which allow us to formulate two new multi-period portfolio selection problems as Markowitz mean-variance optimization problems with intermediaries, and therefore with transaction costs, with the addition of capital gains tax, and also with short selling and transfer of securities. We propose two constrained integer nonlinear programming problems with which it is possible to establish whether and when it is suitable to buy and to sell financial securities, not only maximizing profit but also minimizing risk (through the use of a weight).
We apply Lagrange theory and analyze the associated variational inequality, studying an optimization model for business management and cybersecurity investments.
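The Markowitz mean-variance formulation mentioned above can be illustrated in miniature: maximize expected return minus a risk penalty, net of a proportional transaction cost that stands in, very crudely, for the intermediary and tax terms of the thesis's richer multi-period models. All returns, covariances and cost rates below are invented.

```python
import numpy as np

# Invented data: expected returns, covariance, current holdings, cost rate.
mu = np.array([0.08, 0.12])          # expected returns of two assets
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])     # return covariance
w0 = np.array([1.0, 0.0])            # current portfolio (all in asset 0)
lam, cost = 2.0, 0.01                # risk aversion, proportional transaction cost

def utility(w):
    """Mean-variance utility net of transaction costs (to be maximized)."""
    return mu @ w - lam * (w @ Sigma @ w) - cost * np.abs(w - w0).sum()

# Two assets under a full-investment constraint: scan w = (a, 1 - a).
grid = np.linspace(0.0, 1.0, 10001)
utils = [utility(np.array([a, 1.0 - a])) for a in grid]
a = grid[int(np.argmax(utils))]
print(f"optimal weights: ({a:.3f}, {1 - a:.3f})")
# The cost term penalizes rebalancing away from w0, so the optimum
# trades off the riskier asset's higher return against both variance and fees.
```

The thesis's versions add integer buy/sell decisions per period, which is what turns this smooth trade-off into a constrained integer nonlinear program.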
APA, Harvard, Vancouver, ISO, and other styles
30

Chkirbene, Zina. "Network topologies for cost reduction and QoS improvement in massive data centers." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCK002/document.

Full text
Abstract:
Data centers (DCs) are being built around the world to provide various cloud computing services. One of the fundamental challenges of existing DCs is to design a network that interconnects a massive number of nodes (servers) while reducing the DC's cost and energy consumption. Several solutions have been proposed (e.g. FatTree, DCell and BCube), but they either scale too fast (i.e., doubly exponentially) or too slowly. Efficient DC topologies should combine high scalability, low latency, low Average Path Length (APL), high Aggregated Bottleneck Throughput (ABT), and low cost and energy consumption. In this dissertation, different solutions are therefore proposed to overcome these problems. First, we propose a novel DC topology called LCT (Linked Cluster Topology) as a new solution for building scalable and cost-effective DC networking infrastructures. The proposed topology reduces the number of redundant connections between clusters of nodes while increasing the number of nodes, without affecting the network bisection bandwidth. Furthermore, in order to reduce DC cost and energy consumption, we first propose a new static energy-saving topology called VacoNet (Variable Connection Network), which connects the needed number of servers while reducing the unused material (cables, switches) and includes an algorithm that determines the exact number of ports per switch required. We also propose a new approach that exploits the correlation in time of inter-node communication, together with some topological features, to maximize energy savings without significantly impacting the average path length; it periodically estimates the traffic matrix and manages the state of server ports while keeping the data center fully connected, taking the network traffic into account in port-management decisions.
APA, Harvard, Vancouver, ISO, and other styles
31

Carlsson, Jimmy. "Sustainability and service-oriented systems in network-centric environments." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5820.

Full text
Abstract:
Our modern information society provides us with a tremendous amount of information, and several issues have surfaced due to the complexity inherent in handling information systems. One of the most important is that of providing an architecture and methodology for the development and maintenance of complex, distributed information systems. As the information flow and quantity hinder us from having qualitative information when needed, the architecture must address the reach, richness and value of the information. Network-centric warfare is a problem domain that was initiated to harness the power of information. To support such continuous sustainability, a robust network infrastructure is critical. Both a systemic and a technical perspective on network-centric environments show that, although promising, contemporary implementations with a service-oriented architecture lack support for physical scalability and for a cognitive decoupling that would allow multiple users to act on the same environment. Consequently, a service-oriented layered architecture for communicating entities is presented in which these issues are addressed. For verification, a demonstrator based on a network-centric warfare scenario is developed on top of this architecture.
APA, Harvard, Vancouver, ISO, and other styles
32

Khan, Amin M. "Managing incentives in community network clouds." Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/396273.

Full text
Abstract:
Internet and communication technologies have lowered the costs for communities to collaborate, leading to new services like user-generated content and social computing; through collaboration, collectively built infrastructures like community networks have also emerged. While community networks focus solely on the sharing of network bandwidth, community network clouds extend this sharing to applications of local interest deployed within community networks, provisioning cloud infrastructures through collaborative effort. Community network clouds complement the traditional large-scale public cloud providers, similar to the model of decentralised edge clouds, by bringing both content and computation closer to the users at the edges of the network. They are based on the principle of reciprocal sharing, and most of their users are moved by altruistic principles. However, like any other human organisation, these networks are not immune to overuse, free-riding, or under-provisioning, especially in scenarios where users may be motivated to compete for scarce resources. In this thesis we focus on incentive-based resource regulation mechanisms, deriving practical ways of implementing arbitration when such contention for limited resources occurs. We first design these regulation mechanisms for the local level, where stronger social relationships between the community members imply trust and ensure adherence to the system policies. We then extend the mechanisms to larger communities of untrusted users, where rational users may be motivated to deviate for personal gain, and develop a distributed framework for guaranteeing trust in the resource regulation. Such mechanisms encourage contribution by the community members, and will help towards the adoption, sustainability and growth of the community cloud model.
APA, Harvard, Vancouver, ISO, and other styles
33

MIANO, SEBASTIANO. "Rethinking Software Network Data Planes in the Era of Microservices." Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2841176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Neuwirth, Sarah Marie [Verfasser], and Ulrich [Akademischer Betreuer] Brüning. "Accelerating Network Communication and I/O in Scientific High Performance Computing Environments / Sarah Marie Neuwirth ; Betreuer: Ulrich Brüning." Heidelberg : Universitätsbibliothek Heidelberg, 2019. http://d-nb.info/1177045133/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Mekbungwan, Preechai. "In-network Computation for IoT in Named Data Networking." Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS151.

Full text
Abstract:
ActiveNDN is proposed to extend Named Data Networking (NDN) with in-network computation by embedding functions in an additional entity called the Function Library, which is connected to the NDN forwarder in each NDN router. Function calls can be expressed as part of Interest names, with proper name prefixes for routing, and the results of the computation are returned as NDN Data packets, creating an ActiveNDN network. Our main focus is on performing robust distributed computation, such as analysing and filtering raw data in real time, as close as possible to the sensors, in an environment with intermittent Internet connectivity and resource-constrained computation-capable IoT nodes. In this thesis, the design of ActiveNDN is illustrated with a small prototype network as a proof of concept. Extensive simulation experiments were conducted to investigate the performance and effectiveness of ActiveNDN in large-scale wireless IoT networks. The real-time processing capability of ActiveNDN is also compared with centralized edge computing approaches. Finally, ActiveNDN is demonstrated on a wireless sensor network testbed with real-world applications that provide sufficiently accurate hourly PM2.5 predictions using a linear regression model. It shows the ability to distribute the computational load across many nodes, which makes ActiveNDN suitable for large-scale IoT deployments.
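The idea of expressing a function call inside an Interest name can be sketched in a few lines of Python. This is a didactic illustration only: the name layout, the `/avg` function, and the argument encoding are hypothetical, not the thesis's actual naming convention.

```python
# Hypothetical sketch of an ActiveNDN-style call encoded in an NDN
# Interest name: a routable prefix, a function component, and the
# arguments packed into the final name component.

def make_function_interest(routable_prefix, function, args):
    """Build an Interest name whose suffix encodes a function call."""
    arg_part = ",".join(str(a) for a in args)
    return f"{routable_prefix}/{function}/({arg_part})"

name = make_function_interest("/sensors/room1", "avg", ["pm25", "1h"])
# A router whose Function Library exposes `avg` would intercept this
# Interest, compute the result, and return it as a Data packet.
print(name)  # /sensors/room1/avg/(pm25,1h)
```

Because the call is just a name, ordinary NDN routing delivers it to whichever router announces the prefix, and caching of the returned Data packet comes for free.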
APA, Harvard, Vancouver, ISO, and other styles
37

Neumann, Nicholas Gerard. "Two algorithms for leader election and network size estimation in mobile ad hoc networks." Texas A&M University, 2004. http://hdl.handle.net/1969.1/1321.

Full text
Abstract:
We develop two algorithms for important problems in mobile ad hoc networks (MANETs). A MANET is a collection of mobile processors (“nodes”) which communicate via message passing over wireless links. Each node can communicate directly with other nodes within a specified transmission radius; other communication is accomplished via message relay. Communication links may go up and down in a MANET (as nodes move toward or away from each other); thus, the MANET can consist of multiple connected components, and connected components can split and merge over time. We first present a deterministic leader election algorithm for asynchronous MANETs along with a correctness proof for it. Our work involves substantial modifications of an existing algorithm and its proof, and we adapt the existing algorithm to the asynchronous environment. Our algorithm’s running time and message complexity compare favorably with existing algorithms for leader election in MANETs. Second, many algorithms for MANETs require or can benefit from knowledge about the size of the network in terms of the number of processors. As such, we present an algorithm to approximately determine the size of a MANET. While the algorithm’s approximations of network size are only rough ones, the algorithm has the important qualities of requiring little communication overhead and being tolerant of link failures.
APA, Harvard, Vancouver, ISO, and other styles
38

Mosezon, Davide. "Virtualizzazione e Cloud Computing: integrazione di meccanismi di sincronizzazione di server virtuali su una wide area network in OpenNebula." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/4486/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Jose, Matthews. "In-network real-value computation on programmable switches." Electronic Thesis or Diss., Université de Lorraine, 2023. http://docnum.univ-lorraine.fr/ulprive/DDOC_T_2023_0057_JOSE.pdf.

Full text
Abstract:
The advent of new-generation programmable switch ASICs has compelled the network community to rethink how networks operate. The ability to reconfigure the dataplane packet-processing logic without changing the underlying hardware, together with the introduction of stateful memory primitives, has resulted in a surge of interest and of use cases that can be offloaded onto the dataplane. However, programmable switches still do not support real-value computation, forcing the use of external servers or middleboxes to perform these operations. To fully realize the capability of in-network processing, our contributions propose to add support for real-value operations on the switch. This is achieved by leveraging mathematical lookup tables to build pipelines that compute real-value functions. We start by developing procedures for computing basic elementary operations, keeping in mind the constraints and limitations of a programmable switch. These procedures are a combination of lookup tables and native operations provided by the switch. A given function is decomposed into a representation that highlights its constituent elementary operations and the dependencies between them. A pipeline is constructed by stitching together the predefined procedures for each elementary operation based on this representation. Several reduction and resource-optimization techniques are also applied before the final pipeline is deployed onto the switch. This process was further extended to scale across multiple switches in the network, enabling even larger functions to be deployed. The project was the first to investigate a generic framework for building pipelines for real-value computation. Our prototype on the Barefoot Tofino switch shows the efficiency of our system for in-network computation of different types of operations, and its application to in-network logistic regression models used for classification problems and to time-series functions like ARIMA for DDoS detection.
Our evaluations show that reaching a relative error below 5%, or even 1%, is possible with a low amount of resources, making this a viable approach to support complex functions and algorithms.
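The core lookup-table idea can be sketched in ordinary Python: precompute a quantized table of a real-value function, then evaluate it by exact matching on the index, which is the kind of operation a match-action table can do. This is a didactic sketch, not the switch pipeline; the input interval, table size, and sigmoid example are assumptions chosen for illustration.

```python
import math

# Sketch: approximate a real-value function on hardware that only
# supports exact matching, by precomputing a quantized lookup table.

def build_lut(f, lo, hi, entries):
    """Precompute f at the midpoints of `entries` bins covering [lo, hi)."""
    step = (hi - lo) / entries
    return {i: f(lo + (i + 0.5) * step) for i in range(entries)}, lo, step

def lut_eval(lut, lo, step, x):
    """Quantize x to a bin index (clamped) and look up the stored value."""
    i = min(max(int((x - lo) / step), 0), len(lut) - 1)
    return lut[i]

# Example: the logistic (sigmoid) function, as needed by in-network
# logistic-regression classifiers.
sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
lut, lo, step = build_lut(sigmoid, -8.0, 8.0, 1024)
approx = lut_eval(lut, lo, step, 1.3)
assert abs(approx - sigmoid(1.3)) / sigmoid(1.3) < 0.01  # under 1% rel. error
```

The trade-off the thesis quantifies is visible here: the table size (1024 entries above) directly buys accuracy at the cost of switch memory.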
APA, Harvard, Vancouver, ISO, and other styles
40

Wang, Renzhong. "Executable system architecting using systems modeling language in conjunction with Colored Petri Nets - a demonstration using the GEOSS network centric system." Diss., Rolla, Mo. : University of Missouri-Rolla, 2007. http://scholarsmine.umr.edu/thesis/pdf/Wang_09007dcc803a6d5e.pdf.

Full text
Abstract:
Thesis (M.S.)--University of Missouri--Rolla, 2007.
Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed November 30, 2007) Includes bibliographical references (p. 199-209).
APA, Harvard, Vancouver, ISO, and other styles
41

Schuller, Walter H. Jr. "Hardware Interfacing in the Broadcast Industry Using Simple Network Management Protocol (SNMP)." UNF Digital Commons, 1997. http://digitalcommons.unf.edu/etd/339.

Full text
Abstract:
Communication between various pieces of broadcast equipment plays a major role in the daily operation of a typical broadcast facility. For example, editing equipment must interface with tape machines, production switchers must interface with font generators and video effects equipment, and satellite ground controllers must interface with satellite dishes and receivers. Communication between these devices may be a simple hardware handshake configuration or more elaborate software-based communication via serial or parallel interfacing. This thesis concerns itself with the software interfacing needed to allow various dissimilar types of equipment to communicate, and therefore interface, with each other. The use of Simple Network Management Protocol (SNMP) in a non-typical manner for the purpose of hardware interfacing is the basis for this work.
APA, Harvard, Vancouver, ISO, and other styles
42

Amarasinghe, Heli. "Network Resource Management in Infrastructure-as-a-Service Clouds." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39141.

Full text
Abstract:
Cloud Infrastructure-as-a-Service (IaaS) is a form of utility computing which has emerged with the recent innovations in the service computing and data communication technologies. Regardless of the fact that IaaS is attractive for application service providers, satisfying user requests while ensuring cloud operational objectives is a complicated task that raises several resource management challenges. Among these challenges, limited controllability over network services delivered to cloud consumers is prominent in single datacenter cloud environments. In addition, the lack of seamless service migration and optimization, poor infrastructure utilization, and unavailability of efficient fault tolerant techniques are noteworthy challenges in geographically distributed datacenter clouds. Initially in this thesis, a datacenter resource management framework is presented to address the challenge of limited controllability over cloud network traffic. The proposed framework integrates network virtualization functionalities offered by software defined networking (SDN) into cloud ecosystem. To provide rich traffic control features to IaaS consumers, control plane virtualization capabilities offered by SDN have been employed. Secondly, a quality of service (QoS) aware seamless service migration and optimization framework has been proposed in the context of geo-distributed datacenters. Focus has been given to a mobile end-user scenario where frequent cloud service migrations are required to mitigate QoS violations. Finally, an SDN-based dynamic fault restoration scheme and a shared backup-based fault protection scheme have been proposed. The fault restoration has been achieved by introducing QoS-aware reactive and shared risk link group-aware proactive path computation algorithms. Shared backup protection has been achieved by optimizing virtual and backup link embedding through a novel integer linear programming approach. 
The proposed solutions significantly improve bandwidth utilization in inter-datacenter networks while recovering from substrate link failures.
APA, Harvard, Vancouver, ISO, and other styles
43

Sharma, Rahil. "Shared and distributed memory parallel algorithms to solve big data problems in biological, social network and spatial domain applications." Diss., University of Iowa, 2016. https://ir.uiowa.edu/etd/2277.

Full text
Abstract:
Big data refers to information which cannot be processed and analyzed using traditional approaches and tools, due to four V's: sheer Volume, the Velocity at which data is received and processed, and data Variety and Veracity. Today massive volumes of data originate in domains such as geospatial analysis, biological and social networks, etc. Hence, scalable algorithms for efficient processing of this massive data are a significant challenge in the field of computer science. One way to achieve such efficient and scalable algorithms is by using shared- and distributed-memory parallel programming models. In this thesis, we present a variety of such algorithms to solve problems in the above-mentioned domains. We solve five problems that fall into two categories. The first group of problems deals with community detection. Detecting communities in real-world networks is of great importance because communities consist of patterns that can be viewed as independent components, each of which has distinct features and can be detected based upon network structure. For example, communities in social networks can help target users for marketing purposes, provide user recommendations to connect with and join communities or forums, etc. We develop a novel sequential algorithm to accurately detect community structures in biological protein-protein interaction networks, where a community corresponds to a functional module of proteins. Generally, such sequential algorithms are computationally expensive, which makes them impractical to use for large real-world networks. To address this limitation, we develop a new, highly scalable Symmetric Multiprocessing (SMP) based parallel algorithm to detect high-quality communities in large subsections of social networks like Facebook and Amazon. Due to the SMP architecture, however, our algorithm cannot process networks whose size exceeds the RAM of a single machine.
With the increasing size of social networks, community detection has become even more difficult, since network size can reach hundreds of millions of vertices and edges. Processing such massive networks requires several hundred gigabytes of RAM, which is only possible by adopting distributed infrastructure. To address this, we develop a novel hybrid (shared + distributed memory) parallel algorithm to efficiently detect high-quality communities in massive Twitter and .uk domain networks. The second group of problems deals with efficiently processing spatial Light Detection and Ranging (LiDAR) data. LiDAR data is widely used in forest and agricultural crop studies, landscape classification, 3D urban modeling, etc. Technological advancements in LiDAR sensors have enabled highly accurate and dense LiDAR point clouds, resulting in massive data volumes which pose processing and storage challenges. We develop the first published landscape-driven data reduction algorithm, which uses the slope map of the terrain as a filter to reduce the data without sacrificing its accuracy. Our algorithm is highly scalable and adopts a shared-memory parallel architecture. We also develop a parallel interpolation technique that is used to generate highly accurate continuous terrains, i.e. Digital Elevation Models (DEMs), from discrete LiDAR point clouds.
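To make concrete what "community detection" computes, here is a generic label-propagation sketch. This is a standard textbook baseline, not the thesis's algorithm: each vertex repeatedly adopts the most frequent label among its neighbours until labels stabilise, and densely connected groups end up sharing a label.

```python
import random
from collections import Counter, defaultdict

# Generic label-propagation baseline for community detection
# (illustrative only; the thesis develops its own algorithms).

def label_propagation(edges, rounds=20, seed=0):
    rng = random.Random(seed)
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    labels = {v: v for v in adj}  # every vertex starts with a unique label
    nodes = list(adj)
    for _ in range(rounds):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            counts = Counter(labels[u] for u in adj[v])
            top = max(counts.values())
            best = min(l for l, c in counts.items() if c == top)  # tie-break
            if labels[v] != best:
                labels[v], changed = best, True
        if not changed:  # labels stabilised
            break
    return labels

# Two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
labels = label_propagation(edges)
```

The sequential loop over all vertices is exactly what becomes the bottleneck on large graphs, which motivates the SMP and hybrid shared/distributed parallelisations the abstract describes.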
APA, Harvard, Vancouver, ISO, and other styles
44

Gorgues, Alonso Miguel. "Improving Network-on-Chip Performance in Multi-Core Systems." Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/107336.

Full text
Abstract:
The Network-on-Chip (NoC) has become the key element for efficient communication between cores within a chip multiprocessor (CMP). The use of parallel applications in CMPs and the increase in the amount of memory needed by applications have made the communication network ever more important: the NoC is in charge of transporting all the data needed by the processor cores. Moreover, the growing number of cores pushes NoCs to be designed in a scalable way, but without compromising network performance (latency and throughput). Network-on-chip design thus becomes critical. This thesis presents proposals that attack the problem of improving network performance in three different scenarios: 1) NoCs with an adaptive routing algorithm, 2) scenarios that require low memory access times, and 3) high-assurance NoCs. The first two proposals focus on increasing network throughput under adaptive routing: SUR improves the utilization of network resources, while EPC avoids congestion spreading when intense traffic towards a single destination occurs. The third and main contribution of this thesis addresses the problem of reducing memory access latency. PROSA, a hybrid circuit/packet switching architecture, reduces network latency by exploiting the memory access latency slack to establish circuits during that delay; this way, data arriving at the NoC is forwarded without delay. Finally, the Token-Based TDM proposal targets high-assurance networks-on-chip. In this type of NoC, applications are divided into domains, and the network must guarantee that there is no interference between domains, thereby preventing intrusion by potentially malicious applications. Token-Based TDM allows domain isolation with no design impact on NoC routers.
The results show how these proposals improve network performance in each scenario. The implementation and simulation of the proposals show that efficient use of network resources in CMPs with adaptive routing algorithms leads to an increase in the injected traffic the network can sustain. In addition, using a filter to limit the adaptivity of the routing algorithm under congested situations prevents congestion from spreading across the network. The results also show that the combined use of circuit and packet switching reduces memory access latency significantly, contributing to a significant reduction in application execution time. Finally, Token-Based TDM increases the performance of TDM networks thanks to its high flexibility and efficient arbitration: it requires no modification of the network to support a different number of domains, while improving latency and maintaining strong isolation between the traffic of different domains.
Gorgues Alonso, M. (2018). Improving Network-on-Chip Performance in Multi-Core Systems [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/107336
APA, Harvard, Vancouver, ISO, and other styles
45

Maltoni, Pietro. "Progetto di un acceleratore hardware per layer di convoluzioni depthwise in applicazioni di Deep Neural Network." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24205/.

Full text
Abstract:
Progressive technological development and the constant monitoring, control, and analysis of the surrounding environment have led to increasingly capable IoT devices, which is why the term Edge Computing has emerged. These devices have the resources to process sensor data directly on site. This technology is well suited to CNNs, neural networks for image analysis and recognition. Separable convolutions represent a new frontier because they massively reduce the number of operations performed on data tensors by splitting the convolution into two parts: a depthwise and a pointwise stage. All this yields very reliable results in terms of accuracy and speed, but power consumption remains a central problem, since the devices rely solely on an internal battery. A good trade-off between power consumption and computational capability is therefore necessary. To meet this technological challenge, the state of the art offers different solutions, composed of clusters with optimized cores and dedicated instructions, or FPGAs. In this thesis we propose a hardware accelerator developed in PULP targeting the computation of depthwise convolution layers. Thanks to an HWC data layout in memory and to the Window Buffer, a window that slides over the image to perform the convolutions channel by channel, it was possible to develop a datapath architecture oriented toward data reuse; as a result, the accelerator achieves a maximum output throughput of 4 pixels per clock cycle. With a performance of 6 GOP/s, an energy efficiency of 101 GOP/J, and power consumption on the order of mW, figures obtained by integrating the IP inside the cluster of Darkside, a new research chip in 65 nm TSMC technology, the depthwise accelerator is an ideal candidate for this kind of application.
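The operation savings of separable convolutions mentioned above can be made concrete by counting multiply-accumulates. The layer sizes below are illustrative assumptions, not figures from the thesis; the formulas themselves are the standard ones for depthwise-separable convolution.

```python
def standard_conv_macs(h, w, c_in, c_out, k):
    # Every output pixel of every output channel sees a k*k*c_in window.
    return h * w * c_out * c_in * k * k

def separable_conv_macs(h, w, c_in, c_out, k):
    depthwise = h * w * c_in * k * k   # one k*k filter per input channel
    pointwise = h * w * c_in * c_out   # 1x1 convolution mixes the channels
    return depthwise + pointwise

# Illustrative layer: 112x112 feature map, 32 -> 64 channels, 3x3 kernel.
h = w = 112
c_in, c_out, k = 32, 64, 3
std = standard_conv_macs(h, w, c_in, c_out, k)
sep = separable_conv_macs(h, w, c_in, c_out, k)
# The reduction factor is exactly 1 / (1/c_out + 1/k**2), about 7.9x here.
```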
APA, Harvard, Vancouver, ISO, and other styles
46

Torre, Arranz Roberto [Verfasser], Frank H. P. [Gutachter] Fitzek, Marcos [Gutachter] Katz, and Qi [Gutachter] Zhang. "Agile Mobile Edge Computing and Network-coded Cooperation in 5G / Roberto Torre Arranz ; Gutachter: Frank H. P. Fitzek, Marcos Katz, Qi Zhang." Dresden : Technische Universität Dresden, 2021. http://d-nb.info/1238140572/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Subbiah, Arun. "Design and evaluation of a distributed diagnosis algorithm for arbitrary network topologies in dynamic fault environments." Thesis, Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/13273.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

VASCIAVEO, ALESSANDRO. "Computational models and algorithms to solve large-scale problems in Network Biology." Doctoral thesis, Politecnico di Torino, 2017. http://hdl.handle.net/11583/2667593.

Full text
Abstract:
The acquisition and measurement of biological data have changed dramatically in recent years. High-Throughput Sequencing (HTS) technologies can now capture almost all the "blueprints" that encode for life within an organism's cell, in a parallel and rapid manner. These new and cheaper data have enabled a whole ensemble of novel approaches to unravel the mechanisms that regulate the processes in living cells. Unfortunately, these different sources of information can easily reach terabytes of data for a single study. Consequently, in order to integrate and analyze all the different data sources systemically, novel methodologies, algorithms, and software are needed to uncover the real benefits of this biological Big Data. In my Ph.D. studies I therefore investigated mathematical and information-theoretic models well suited to representing biological phenomena with network concepts applied to large datasets. Moreover, the main idea driving my studies has been to make models understandable, so as to elucidate the mechanisms of cell regulation, rather than to use the most recent and very powerful deep learning approaches, which may solve the problem but return a black box that is hard to dissect - i.e., to make better inferences rather than robust predictions. Thus, whenever possible I chose simple models over complex ones. Ultimately, new models and software tools have resulted from these studies, and both research advances on new conceptual frameworks and their implementations have been published in several international conferences and journals. A full publication list is given in the conclusions.
APA, Harvard, Vancouver, ISO, and other styles
49

Durbeck, Lisa J. "Global Energy Conservation in Large Data Networks." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/78291.

Full text
Abstract:
Seven to ten percent of the energy used globally goes towards powering information and communications technology (ICT): the global data and telecommunications network, the private and commercial datacenters it supports, and the 19 billion electronic devices around the globe it interconnects, through which we communicate and access and produce information. As bandwidth and data rates increase, so does the volume of traffic, as well as the absolute amount of new information digitized and uploaded onto the Net and into the cloud each second. Words like gigabit and terabyte were rarely needed in the public arena fifteen years ago; now, they are common phrases. As people use their networked devices to do more, to access more, to send more, and to connect more, they use more energy--not only in their own devices, but also throughout the ICT. While there are many endeavors focused on individual low-power devices, few examine broad strategies that cross the many boundaries of separate concerns within the ICT; few, too, assess the impact of specific strategies on the global energy supply at a global scale. This work examines the energy savings of several such strategies; it also assesses their efficacy in reducing energy consumption, both within specific networks and within the larger ICT. All of these strategies save energy by reducing the work done by the system as a whole on behalf of a single user, often by exploiting commonalities among what many users around the globe are also doing to amortize the costs.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
50

Tsunoda, Kaoru. "A study of user level scheduling and software caching in the educational interactive system." CSUSB ScholarWorks, 1997. https://scholarworks.lib.csusb.edu/etd-project/1398.

Full text
APA, Harvard, Vancouver, ISO, and other styles