To view the other types of publications on this topic, follow the link: Edge IoT.

Dissertations on the topic "Edge IoT"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the top 50 dissertations for your research on the topic "Edge IoT".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the academic publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.

1

Stiefel, Maximilian. „IOT CONNECTIVITY WITH EDGE COMPUTING“. Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-372094.

Full text of the source
Annotation:
Billions of Internet of Things (IoT) devices will be connected in the next decades. Most devices are for Massive Machine Type Communication (MMTC) applications. This requires the IoT infrastructure to be extremely efficient and scalable (like today's Internet) to support more and more devices connected to the network over time. The cost per connection needs to be very low (like today's Web services). The current network design with dedicated HW-based base stations (or IoT gateways) may be too costly. Furthermore, there is a vast number of IoT radio standards, such as Narrowband-IoT (NB-IoT), LTE-M, BLE, ZigBee, Sigfox and LoRa, to give some examples, which all need to be implemented if they are to be supported. The current approach requires deploying parallel networks with dedicated base stations for different standards in one place. This further increases network costs. Cloud Radio Access Network (RAN) (c-RAN) has been proposed to centralize and cloudify baseband processing in a cloud infrastructure based on GPPs, which can potentially increase network flexibility and reduce the network Total Cost of Ownership (TCO) significantly. It can also benefit network performance through increased coordination possibilities. Nowadays, c-RAN is still at the concept level, because it is deemed difficult to implement due to complexity and reliability issues, e.g. for 4G/5G, which requires sophisticated processing capabilities. The terminology of C-RAN today refers more to Centralized-RAN based on Digital Signal Processing (DSP) microcontrollers and ASICs, rather than c-RAN. However, MMTC technologies are usually narrowband and designed with low complexity (considering the cost of User Equipment (UE), power consumption, battery lifetime, etc.). Therefore, they are rather suitable for cloud implementation. Latency may be another issue for c-RAN. However, most MMTC applications are based on best-effort strategies and delay tolerance. Therefore, c-RAN offers a promising solution to deliver the required efficiency and scalability for MMTC services. This master thesis is part of an effort to explore the possibilities, increase the understanding and gain hands-on experience of IoT c-RAN implementation at the edge. It focuses on the NB-IoT downlink (DL) Physical (PHY) implementation as one example. However, IEEE 802.15.4 (the PHY layer of e.g. ZigBee) has been integrated into the system within a collaboration between Ericsson and RISE SICS. This also shows that c-RAN technology is able to unite multiple radio interfaces in one system by leveraging Software (SW). In this study, we built a Software Defined Radio (SDR) testbed based on GNU Radio. The USRP B210 is the Hardware (HW) platform used to test the implementation. Key components of the NB-IoT DL have been implemented. The Orthogonal Frequency-Division Multiplexing (OFDM) transmitter and receiver follow the NB-IoT numerology and implement algorithms for signal generation, time and frequency synchronization, as well as equalization and demodulation. The convolutional code of the Voyager missions with a coding rate R = 1/2 is used for performance evaluation. Different baseband modules have been tested and verified. Investigations have been carried out on the topic of latency. The measurements reveal a latency which is higher than expected. Most likely, this is due to the large buffers underlying the GNU Radio scheduler in combination with the low speed of NB-IoT.
The end-to-end system has been evaluated by field measurements (Signal-to-Noise Ratio (SNR), Bit Error Rate (BER), Packet Error Rate (PER)) conducted in an Ericsson office environment. With no Line-Of-Sight (LOS), the implemented system has a reach of >= 65 m (from the office lab on floor 4 to the other end of the corridor where GFTB ER NAP NIT Fronthaul Technologies is located) with only 0.5 % PER and an SNR of 15.9 dB. In this work, the system and SW design of the testbed and its implementation are presented, as well as the hands-on experiences. The testbed is ready for human interaction with a fascinating Telegram bot live demo.
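
As a generic illustration of the channel code named above, the sketch below encodes a bit stream with the constraint-length K = 7, rate R = 1/2 convolutional code commonly associated with the Voyager missions, assuming the standard generator polynomials 171 and 133 (octal); this is a reference sketch, not code from the thesis.

```python
# Rate-1/2 convolutional encoder, K = 7, generators 171/133 (octal).
# Illustrative only; the thesis's GNU Radio implementation is not shown here.
G1, G2 = 0o171, 0o133  # generator polynomial taps

def conv_encode(bits):
    """Return two parity bits per input bit (coding rate R = 1/2)."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0x7F           # 7-bit shift register
        out.append(bin(state & G1).count("1") & 1)  # parity over G1 taps
        out.append(bin(state & G2).count("1") & 1)  # parity over G2 taps
    return out

print(conv_encode([1, 0, 1, 1, 0, 0]))  # 6 input bits -> 12 coded bits
```

Each input bit yields two coded bits, which is what the rate R = 1/2 refers to.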
APA, Harvard, Vancouver, ISO, and other citation styles
2

Huang, Zhenqiu. „Progression and Edge Intelligence Framework for IoT Systems“. Thesis, University of California, Irvine, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10168486.

Full text of the source
Annotation:

This thesis studies the issues of building and managing future Internet of Things (IoT) systems. IoT systems consist of distributed components with services for sensing, processing, and controlling through devices deployed in our living environment as part of the global cyber-physical ecosystem.

Systems with perpetually running IoT devices may use a lot of energy. One challenge is implementing good management policies for energy saving. In addition, a large number of devices may be deployed over wide geographical areas through low-bandwidth wireless communication networks. This brings the challenge of configuring a large number of duplicated applications with low latency in a scalable manner. Finally, intelligent IoT applications, such as occupancy prediction and activity recognition, depend on analyzing user and event patterns from historical data. In order to achieve real-time interaction between humans and things, reliable yet real-time analytic support should be included to leverage the interplay and complementary roles of edge and cloud computing.

In this dissertation, I address the above issues from the service oriented point of view. Service oriented architecture (SOA) provides the integration and management flexibility using the abstraction of services deployed on devices. We have designed the WuKong IoT middleware to facilitate connectivity, deployment, and run-time management of IoT applications.

For energy-efficient mapping, this thesis presents an energy-saving methodology for co-locating several services on the same physical device in order to reduce the computing and communication energy. In a multi-hop network, the service co-location problem is formulated as a quadratic programming problem. I propose a reduction method that reduces it to an integer programming problem. In a single-hop network, the service co-location problem can be modeled as the Maximum Weighted Independent Set (MWIS) problem. I design an algorithm to transform a service flow into a co-location graph. Then, known heuristic algorithms to find the maximum independent set, which is the basis for making service co-location decisions, are applied to the co-location graph.
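
A loose sketch of the single-hop case just described: a classic greedy heuristic for the Maximum Weighted Independent Set applied to a toy co-location conflict graph. The service names, weights, and conflicts are invented for illustration, and this is not the thesis's algorithm.

```python
# Greedy MWIS heuristic on a conflict graph: services joined by an edge
# cannot be co-located on the same device.
def greedy_mwis(weights, conflicts):
    """Pick services by descending weight, skipping conflicts of chosen ones."""
    chosen, blocked = set(), set()
    for s in sorted(weights, key=weights.get, reverse=True):
        if s not in blocked:
            chosen.add(s)
            blocked |= conflicts.get(s, set())
    return chosen

weights = {"s1": 5, "s2": 3, "s3": 4, "s4": 2}      # co-location utilities
conflicts = {"s1": {"s2"}, "s2": {"s1", "s3"},      # pairs that conflict
             "s3": {"s2", "s4"}, "s4": {"s3"}}
print(greedy_mwis(weights, conflicts))              # {'s1', 's3'}
```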

For low-latency scalable deployment, I propose a region-based hierarchical management structure. A congestion zone that covers multiple regions is identified. The problem of deploying a large number of copies of a flow-based program (FBP) in a congestion zone is modeled as a network traffic congestion problem. Then, the problem of mapping in a congestion zone is modeled as an Integer Quadratic Constrained Programming (IQCP) problem, which is proved to be an NP-hard problem. Given that, an approximation algorithm based on LP relaxation and an efficient service-relocating heuristic algorithm are designed to reduce the computational complexity. For each congestion zone, the algorithm performs globally optimized mapping for multiple regions, and then requests multiple deployment delegators to reprogram individual devices.

Finally, with the growing adoption of IoT applications, dedicated and single-purpose devices are giving way to smart, adaptive devices with rich capabilities using a platform or API, collecting and analyzing data, and making their own decisions. To facilitate building intelligent applications in IoT, I have implemented the edge framework for supporting reliable streaming analytics on edge devices. In addition, a progression framework is built to achieve the self-management capability of applications in IoT. A progressive architecture and a programming paradigm for bridging the service oriented application with the power of big data on the cloud are designed in the framework. In this thesis, I present the detailed design of the progression framework, which incorporates the above features for building scalable management of IoT systems through a flexible middleware.

APA, Harvard, Vancouver, ISO, and other citation styles
3

Marchioni, Alex <1989>. „Algorithms and Systems for IoT and Edge Computing“. Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10084/1/marchioni_alex_tesi.pdf.

Full text of the source
Annotation:
The idea of distributing signal processing along the path that starts with acquisition and ends with the final application has given rise to the Internet of Things and Edge Computing, which have demonstrated several advantages in terms of scalability, costs, and reliability. In this dissertation, we focus on designing and implementing algorithms and systems that allow complex tasks to be performed on devices with limited resources. Firstly, we assess the trade-off between compression and anomaly detection from both a theoretical and a practical point of view. Information theory provides the rate-distortion analysis, which is extended to consider how information content is processed for detection purposes. Considering an actual Structural Health Monitoring application, two corner cases are analysed: detection under high distortion based on a feature-extraction method, and detection under low distortion based on Principal Component Analysis. Secondly, we focus on streaming methods for Subspace Analysis. In this context, we revise and study state-of-the-art methods to target devices with limited computational resources. We also consider a real case of deployment of an algorithm for streaming Principal Component Analysis for signal compression in a Structural Health Monitoring application, discussing the trade-off between the possible implementation strategies. Finally, we focus on an alternative compression framework suited for low-end devices, namely Compressed Sensing. We propose a different decoding approach that splits the recovery problem into two stages and effectively adopts a deep neural network and basic linear algebra to reconstruct biomedical signals. This novel approach outperforms the state of the art in terms of reconstruction quality and requires lower computational resources.
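
To make the two-stage idea concrete, here is a toy decoder under stated assumptions: stage one estimates the signal support (a crude correlation proxy standing in for the deep network used in the thesis), and stage two solves a small least-squares problem on that support; all dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                        # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x                                   # compressed measurements

# Stage 1: support estimation (proxy for the learned first stage)
support = np.argsort(np.abs(A.T @ y))[-k:]

# Stage 2: basic linear algebra restricted to the estimated support
sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
x_hat = np.zeros(n)
x_hat[support] = sol
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```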
APA, Harvard, Vancouver, ISO, and other citation styles
4

Antonini, Mattia. „From Edge Computing to Edge Intelligence: exploring novel design approaches to intelligent IoT applications“. Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/308630.

Full text of the source
Annotation:
The Internet of Things (IoT) has deeply changed how we interact with our world. Today, smart homes, self-driving cars, connected industries, and wearables are just a few mainstream applications where IoT plays the role of enabling technology. When IoT became popular, Cloud Computing was already a mature technology able to deliver the computing resources necessary to execute heavy tasks (e.g., data analytics, storage, AI tasks, etc.) on data coming from IoT devices, so practitioners started to design and implement their applications around this approach. However, after a hype that lasted a few years, cloud-centric approaches have started to show some of their main limitations when dealing with the connectivity of many devices to remote endpoints: high latency, bandwidth usage, big data volumes, reliability, privacy, and so on. At the same time, a few new distributed computing paradigms have emerged and gained attention. Among them, Edge Computing makes it possible to shift the execution of applications to the edge of the network (a partition of the network physically close to the data sources) and provides improvements over the Cloud Computing paradigm. Its success has been fostered by new powerful embedded computing devices able to satisfy the ever-increasing computing requirements of many IoT applications. Given this context, how can next-generation IoT applications take advantage of the opportunity offered by Edge Computing to shift processing from the cloud toward the data sources and exploit ever-more-powerful devices? This thesis provides the ingredients and guidelines for practitioners to foster the migration from cloud-centric to novel distributed design approaches for IoT applications at the edge of the network, addressing the issues of the original approach. This requires designing the processing pipeline of applications with the system requirements and the constraints imposed by embedded devices in mind. To make this process smoother, the transition is split into different steps, starting with the off-loading of processing (including the Artificial Intelligence algorithms) to the edge of the network, then the distribution of computation across multiple edge devices and even closer to the data sources based on system constraints, and, finally, the optimization of the processing pipeline and AI models to run efficiently on target IoT edge devices. Each step has been validated by delivering a real-world IoT application that fully exploits the novel approach. This paradigm shift leads the way toward the design of Edge Intelligence IoT applications that efficiently and reliably execute Artificial Intelligence models at the edge of the network.
APA, Harvard, Vancouver, ISO, and other citation styles
5

Piscaglia, Daniele. „Supporto e Infrastrutture DevOps per Microservizi IoT su Edge Gateway“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find the full text of the source
Annotation:
A project carried out during an internship at the company Bonfiglioli Riduttori, describing the process of reworking an existing predictive maintenance solution. The solution, which involves IoT sensors for data collection and edge gateways for analysis and processing, was revised around key administration and maintenance mechanisms. The main focus of the thesis is on how these distributed systems require a different approach to microservice distribution and device management, and how the use of the various types of IoT platforms can reduce the burden on developers' shoulders.
APA, Harvard, Vancouver, ISO, and other citation styles
6

Broumas, Ioannis. „Design of Cellular and GNSS Antenna for IoT Edge Device“. Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-39239.

Full text of the source
Annotation:
Antennas are among the most sensitive elements in any wireless communication equipment. Designing small-profile, multiband and wideband internal antennas with a simple structure has become a necessary challenge. In this thesis, two planar antennas are designed, simulated and implemented in an effort to cover the LTE-M1 and NB-IoT radio frequencies. The cellular antenna is designed to receive and transmit data over the eight-band LTE700/GSM/UMTS, and the GNSS antenna is designed to receive signals from the global navigation satellite systems GPS (USA) and GLONASS. The antennas are suitable for direct printing on the system circuit board of a device. Related theory and research work are discussed and referenced, providing a strong configuration for future use. Recommendations and suggestions for future work are also discussed. The proposed antenna system is more than promising and, with further adjustments and refinement, can lead to a fully working solution.
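
For a sense of why covering these bands with one compact printed element is challenging, a back-of-the-envelope calculation (not from the thesis) of the free-space quarter-wavelength at a few of the named bands:

```python
# Free-space quarter-wavelengths; printed elements on FR-4 come out shorter
# because of the substrate's dielectric constant.
c = 299_792_458  # speed of light, m/s
for name, f_hz in [("LTE700", 700e6), ("GSM900", 900e6),
                   ("GSM1800", 1800e6), ("UMTS2100", 2100e6)]:
    print(f"{name}: quarter-wave ≈ {c / f_hz / 4 * 1000:.1f} mm")
```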
APA, Harvard, Vancouver, ISO, and other citation styles
7

Ashouri, Majid. „Towards Supporting IoT System Designers in Edge Computing Deployment Decisions“. Licentiate thesis, Malmö universitet, Malmö högskola, Institutionen för datavetenskap och medieteknik (DVMT), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-37068.

Full text of the source
Annotation:
Rapidly evolving Internet of Things (IoT) systems demand that new requirements be addressed. In particular, IoT systems need to be deployed efficiently to meet quality requirements such as latency, energy consumption, privacy, and bandwidth utilization. The increasing availability of computational resources close to the edge has prompted the idea of using them for distributed computing and storage, known as edge computing. Edge computing may help and complement cloud computing to facilitate the deployment of IoT systems and improve their quality. However, deciding where to deploy the various application components is not a straightforward task, and IoT system designers should be supported in making this decision. To support designers, this thesis focuses on system qualities and aims at three main contributions. First, by reviewing the literature, we identified the relevant and most used qualities and metrics. Moreover, to analyse how computer simulation can be used as a supporting tool, we investigated edge computing simulators, and in particular the metrics they provide for modeling and analyzing IoT systems in edge computing. Finally, we introduced a method to represent how multiple qualities can be considered in the decision. In particular, we considered distributing Deep Neural Network layers as a use case and ranked the deployment options by measuring the relevant metrics via simulation.
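
A minimal sketch of the kind of multi-quality ranking the last contribution describes: per-metric simulation results are min-max normalized and the deployment options ranked by a weighted sum. The options, metric values, and weights below are invented.

```python
options = {  # simulated metrics per deployment option (lower is better)
    "all-cloud": {"latency_ms": 120, "energy_mJ": 40, "bandwidth_MB": 9.0},
    "all-edge":  {"latency_ms": 35,  "energy_mJ": 90, "bandwidth_MB": 1.5},
    "split-dnn": {"latency_ms": 50,  "energy_mJ": 60, "bandwidth_MB": 3.0},
}
weights = {"latency_ms": 0.5, "energy_mJ": 0.3, "bandwidth_MB": 0.2}

def score(name):
    total = 0.0
    for metric, w in weights.items():
        vals = [o[metric] for o in options.values()]
        lo, hi = min(vals), max(vals)
        total += w * (options[name][metric] - lo) / (hi - lo)  # min-max normalize
    return total

for name in sorted(options, key=score):
    print(f"{name}: {score(name):.3f}")  # lower score = better option
```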
APA, Harvard, Vancouver, ISO, and other citation styles
8

Rajakaruna, A. (Archana). „Lightweight edge-based networking architecture for low-power IoT devices“. Master's thesis, University of Oulu, 2019. http://jultika.oulu.fi/Record/nbnfioulu-201906072483.

Full text of the source
Annotation:
The involvement of low-power Internet of Things (IoT) devices in Wireless Sensor Networks (WSN) allows enhanced autonomous monitoring capability in many application areas. Recently, the principles of the edge computing paradigm have been used to cater for on-site processing and management actions in WSNs. However, WSNs deployed at remote sites require human involvement in the data collection process, since internet accessibility is still limited to densely populated areas. Nowadays, researchers propose UAVs for monitoring applications where human involvement is required frequently. In this thesis work, we introduce an edge-based architecture that creates end-to-end secure communication between IoT sensors in a remote WSN and the central cloud via a UAV, which assists the data collection, processing and management procedures of the remote WSN. Since power is a limited resource, we propose Bluetooth Low Energy (BLE) as the communication medium between the UAV and the sensors in the WSN, BLE being considered an ultra-low-power radio access technology. To examine the performance of the system model, we present a simulation analysis considering three sensor-node array types that can be realized in a practical environment. The impact of the BLE data rate, the speed of the UAV, the distance between adjacent sensors, and the data generation rate of the sensor nodes have been analysed to examine the performance of the system. Moreover, to observe the practical functionality of the proposed architecture, a prototype implementation is presented using commercially available off-the-shelf devices. The prototype of the system is implemented assuming an ideal environment.
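
As a rough illustration of the kind of analysis mentioned (UAV speed, radio range, data rate), the sketch below estimates how much data one sensor can offload during a single fly-by. Every number is an assumption for illustration, not a result from the thesis.

```python
import math

radio_range_m = 50.0     # assumed usable BLE range
uav_speed_ms = 10.0      # UAV ground speed
lateral_offset_m = 30.0  # closest horizontal distance to the sensor
goodput_bps = 250_000    # assumed effective BLE throughput

# Length of the straight flight path inside the circular radio footprint
chord_m = 2 * math.sqrt(radio_range_m**2 - lateral_offset_m**2)
contact_s = chord_m / uav_speed_ms
kib = goodput_bps * contact_s / 8 / 1024
print(f"contact window {contact_s:.1f} s -> about {kib:.0f} KiB per pass")
```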
APA, Harvard, Vancouver, ISO, and other citation styles
9

KOBEISSI, AHMAD. „VERSO IL CONCETTO DI SMART CITY: SOLUZIONI IOT EDGE-CLOUD“. Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/996248.

Full text of the source
Annotation:
Since the term was coined by Kevin Ashton in 1999, the Internet of Things (IoT) did not gain considerable popularity until 2010, when it became a strategic priority for governments, companies, and research centers. Despite this large-scale interest, IoT only reached mass markets in 2014, in the form of wearable devices and fitness trackers, home automation, industrial asset monitoring, and smart energy meters. The 'things' refer to sensors and other smart devices with the ability to monitor an object's state, or even control it using actuators. Ashton envisaged that when such sensors and smart devices were on a ubiquitous network – the Internet – they would have far more value. Trending data-centric technologies in the IoT involve security and data governance, infrastructure (edge and cloud analytics), data processing, advanced analytics, and data integration and messaging. These technologies are supported by cloud computing service models that include three major layers – Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Of the three, IaaS is the foundation, while SaaS is the top layer, functioning off both PaaS and IaaS. Interestingly enough, although SaaS is normally represented in graphics as the smallest layer of cloud infrastructure, it is anything but. The IaaS layer of cloud computing comprises all the hardware needed to make cloud computing possible. The PaaS layer of the cloud is a framework that developers can build upon and use to create customized applications. Built on top of both IaaS and PaaS, Software as a Service provides applications, programs, software, and web tools to the public, for free or for a price. By the year 2020, trillions of gigabytes of data will be generated through the Internet of Things. This is no doubt difficult to comprehend. However, with the growing number of connected devices, it is not surprising that by 2020 more than ten billion sensors and devices will be connected to the internet. Furthermore, all of these devices will gather, analyze, share, and transmit data in real time. Hence, without the data, IoT devices would not hold the functionalities and capabilities which have earned them so much worldwide attention. If organizations are not in a position to somehow ingest, process and analyze these data, then the data become worthless, and the IoT project will be considered a failure. Unlike a traditional IT system, IoT systems are cyber-physical systems involving both humans and machines as end-users. Their interaction forms a complex web of M2M (Machine-to-Machine) and H2M (Human-to-Machine) transactions. From device firmware to network interfaces, extending all the way to the business logic defined in cloud applications and user apps, software remains the most critical driver in IoT. Similarly, edge computing presents great opportunities to achieve ubiquitous computation in the Internet ecosystem. It is proposed to overcome the intrinsic challenges of computing on the cloud side. Edge computing makes it possible to gather more sensory data, reduce the response time, free up network bandwidth, and ultimately reduce the workload on the cloud. In the effort to elevate support for technologies directed toward IoT in the smart-city concept, support for developers and service providers is critical, especially regarding fast and feasible deployment of IoT solutions and assets.
To that end, I focused during my research on ways and methods to exploit generic IoT solutions: Application Programming Interfaces (APIs) and edge engines. In this book, I present Atmosphere, a novel edge-to-cloud solution for supporting development and deployment by IoT developers and service providers. Atmosphere cloud is a SaaS deployment-ready model, while Atmosphere edge is a lightweight edge engine for IoT device management. Needless to say, testing the various software components is essential to ensure a safe and reliable IoT system. The solutions I contributed to were tested in multiple projects of varying volumes and challenges. In some projects, using the generic concept was straightforward, while in others, where the structure of the IoT data was complicated and restrictions were established by the partners, the integration was challenging.
APA, Harvard, Vancouver, ISO, and other citation styles
10

Ghaffar, Talha. „Empirical Evaluation of Edge Computing for Smart Building Streaming IoT Applications“. Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/88438.

Full text of the source
Annotation:
Smart buildings are one of the most important emerging applications of the Internet of Things (IoT). The astronomical growth in IoT devices, the data generated by these devices, and ubiquitous connectivity have given rise to a new computing paradigm, referred to as "Edge computing", which argues for data analysis to be performed at the "edge" of the IoT infrastructure, near the data source. The development of efficient Edge computing systems must be based on an advanced understanding of the performance benefits that Edge computing can offer. The goal of this work is to develop this understanding by examining the end-to-end latency and throughput characteristics of smart-building streaming IoT applications when deployed at the resource-constrained infrastructure Edge, and to compare them against the performance that can be achieved by utilizing the Cloud's data-center resources. This work also presents a real-time streaming application to detect and localize the footstep impacts generated by a building's occupant while walking. We characterize this application's performance for Edge and Cloud computing and utilize a hybrid scheme that (1) offers a maximum of around 60% and 65% reduced latency compared to Edge and Cloud, respectively, for similar throughput performance, and (2) enables processing of higher ingestion rates by eliminating the network bottleneck.
Master of Science
Among the various emerging applications of the Internet of Things (IoT) are smart buildings, which allow us to monitor and manipulate various operating parameters of a building by instrumenting it with sensor and actuator devices (things). These devices operate continuously and generate unbounded streams of data that need to be processed at low latency. This data, until recently, has been processed by IoT applications deployed in the Cloud, at the cost of the high network latency of accessing the Cloud's resources. However, the increasing availability of IoT devices, ubiquitous connectivity, and the exponential growth in the volume of IoT data have given rise to a new computing paradigm, referred to as "Edge computing". Edge computing argues that IoT data should be analyzed near its source (at the network's edge) in order to eliminate the high latency of accessing the Cloud for data processing. In order to develop efficient Edge computing systems, an in-depth understanding of the trade-offs involved in the Edge and Cloud computing paradigms is required. In this work, we seek to understand these trade-offs and the potential benefits of Edge computing. We examine the end-to-end latency and throughput characteristics of smart-building streaming IoT applications by deploying them at the resource-constrained Edge and compare the results against the performance that can be achieved by Cloud deployment. We also present a real-time streaming application to detect and localize the footstep impacts generated by a building's occupant while walking. We characterize this application's performance for Edge and Cloud computing and utilize a hybrid scheme that (1) offers a maximum of around 60% and 65% reduced latency compared to Edge and Cloud, respectively, for similar throughput performance, and (2) enables processing of higher ingestion rates by eliminating the network bottleneck.
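
A deliberately simple sketch of what a hybrid dispatch rule of this kind can look like: keep events on the resource-constrained Edge up to a sustainable rate and spill the excess to the Cloud. The capacity figure is invented, and this is not the scheme evaluated in the thesis.

```python
EDGE_CAPACITY_EPS = 400.0  # assumed sustainable edge rate, events/s

def route(ingestion_rate_eps):
    """Split an incoming event rate between edge and cloud processing."""
    to_edge = min(ingestion_rate_eps, EDGE_CAPACITY_EPS)
    return {"edge_eps": to_edge, "cloud_eps": ingestion_rate_eps - to_edge}

for rate in (150.0, 400.0, 900.0):
    print(rate, route(rate))  # higher rates overflow to the cloud
```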
APA, Harvard, Vancouver, ISO, and other citation styles
11

Eriksson, Fredrik, and Sebastian Grunditz. „Containerizing WebAssembly : Considering WebAssembly Containers on IoT Devices as Edge Solution“. Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177581.

Full text of the source
Annotation:
This paper explores the speed of execution, memory footprint and maturity of WebAssembly runtimes (WasmRT). For this study, the WasmRTs are Wasmer and Wasmtime. Initially, benchmarks were run on a Raspberry Pi 3 Model B to simulate a more hardware-capable IoT device. Tests performed on a Raspberry Pi show that there are many instances where a WasmRT outperforms a similar Docker+C solution. WasmRT has a very clear use case for IoT devices, specifically short jobs; the results from our research show that a WasmRT can be up to almost 70 times as fast as a similar Docker solution. WasmRT has a very strong use case that other container solutions cannot contend with. This paper shows how effective a lightweight, portable, and fast WasmerRT can be, but also highlights its pain points and when other container solutions may make more sense.
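
A minimal timing harness in the spirit of these benchmarks might look as follows; the module name job.wasm and the image tag job:latest are placeholders, and this is not the authors' actual setup.

```python
import subprocess, time

def best_of(cmd, runs=5):
    """Best wall-clock time over several runs of a short job."""
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        best = min(best, time.perf_counter() - t0)
    return best

wasm = best_of(["wasmer", "run", "job.wasm"])            # placeholder module
dock = best_of(["docker", "run", "--rm", "job:latest"])  # placeholder image
print(f"wasmer {wasm:.3f}s  docker {dock:.3f}s  ratio {dock / wasm:.1f}x")
```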
APA, Harvard, Vancouver, ISO, and other citation styles
12

Liu, Sige. „Bandit Learning Enabled Task Offloading and Resource Allocation in Mobile Edge Computing“. Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29719.

Full text of the source
Annotation:
The Internet of Things (IoT) is envisioned as a promising paradigm for carrying the interconnections of massive numbers of devices through various communication protocols. With the rapid development of fifth-generation (5G) networks, IoT has incentivized a large number of new computation-intensive applications and bridges diverse technologies to provide ubiquitous services with intelligence. However, with billions of devices anticipated to be connected in IoT systems in the coming years, IoT devices face a series of challenges arising from their inherent features. For instance, IoT devices are usually densely deployed, and the vast data exchange among numerous devices causes large overheads and communication/computing resource limitations. Integrated with mobile edge computing (MEC), which pushes computation and storage resources to the edge of the network, much closer to the local devices, IoT systems benefit from low propagation delay and privacy/security enhancement. Hence, merging MEC and IoT is a promising new paradigm for task offloading and resource allocation in future wireless communications in mobile networks. In this thesis, we introduce different task offloading and resource allocation strategies for IoT devices to efficiently utilize limited resources, e.g., spectrum, computation, and budget. Bandit learning (BL), a typical online learning approach, offers a promising solution to deal with communication/computing resource limitations. The inherent idea behind MEC is to design policies that make better selections for devices or MEC servers. This coincides with the design purpose of BL. This matching mechanism provides selection policies for better performance, such as lower latency, lower energy consumption, and a higher task completion ratio.
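
As an illustration of how bandit learning maps onto MEC server selection, the sketch below runs the classic UCB1 policy over three hypothetical edge servers whose mean latencies are unknown to the learner; the reward model is synthetic and not taken from the thesis.

```python
import math, random

true_latency = [0.3, 0.5, 0.2]   # hidden mean latency (s) of each MEC server
counts = [0] * len(true_latency)
rewards = [0.0] * len(true_latency)

def offload(i):
    """Observe a noisy latency and turn it into a reward (lower is better)."""
    return 1.0 - max(0.0, random.gauss(true_latency[i], 0.05))

for t in range(1, 2001):
    if 0 in counts:                  # play every arm once first
        arm = counts.index(0)
    else:                            # UCB1: empirical mean + exploration bonus
        arm = max(range(len(true_latency)),
                  key=lambda i: rewards[i] / counts[i]
                  + math.sqrt(2 * math.log(t) / counts[i]))
    counts[arm] += 1
    rewards[arm] += offload(arm)

print("offloading decisions per server:", counts)  # server 2 should dominate
```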
APA, Harvard, Vancouver, ISO, and other citation styles
13

Laroui, Mohammed. „Distributed edge computing for enhanced IoT devices and new generation network efficiency“. Electronic Thesis or Diss., Université Paris Cité, 2022. http://www.theses.fr/2022UNIP7078.

Full text of the source
Annotation:
Traditional cloud infrastructure will face a series of challenges due to the centralization of computing, storage, and networking in a small number of data centers, and the long distance between connected devices and remote data centers. To meet this challenge, edge computing seems to be a promising possibility that provides resources closer to IoT devices. In the cloud computing model, compute resources and services are often centralized in large data centers that end-users access over the network. This model has important economic value and more efficient resource-sharing capabilities. New forms of end-user experience, such as the Internet of Things, require computing resources close to end-user devices at the network edge. To meet this need, edge computing relies on a model in which computing resources are distributed to the edge of the network as needed, while decentralizing data processing from the cloud to the edge as far as possible. Thus, it is possible to quickly obtain actionable information based on data that varies over time. In this thesis, we propose novel optimization models to optimize resource utilization at the network edge for two edge computing research directions: service offloading and vehicular edge computing. We study different use cases in each research direction. For the optimal solutions, we first propose, for service offloading, optimal algorithms for service placement at the network edge (Tasks, Virtual Network Functions (VNF), Service Function Chains (SFC)), taking the computing resource constraints into account. Moreover, for vehicular edge computing, we propose exact models for maximizing the coverage of vehicles by both taxis and Unmanned Aerial Vehicles (UAV) for online video streaming applications. In addition, we propose optimal edge-autopilot VNF offloading at the network edge for autonomous driving. The evaluation results show the efficiency of the proposed algorithms in small-scale networks in terms of time, cost, and resource utilization. To deal with dense networks with a high number of devices and with scalability issues, we propose large-scale algorithms that support a huge number of devices, data, and user requests. Heuristic algorithms are proposed for SFC orchestration and maximum coverage by mobile edge servers (vehicles). Moreover, artificial intelligence algorithms (machine learning, deep learning, and deep reinforcement learning) are used for 5G VNF slice placement, edge-autopilot VNF placement, and autonomous UAV navigation. The numerical results are close to those of the exact algorithms while being highly time-efficient.
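
As a sketch of the heuristic family used for the coverage problems above, the snippet below runs the classic greedy algorithm for maximum coverage on an invented instance (candidate taxi/UAV positions and the vehicles each would reach). Greedy selection is the textbook choice here because it carries a (1 - 1/e) approximation guarantee for maximum coverage.

```python
def greedy_max_coverage(candidates, k):
    """Pick up to k candidates, each time the one covering most new vehicles."""
    covered, picked = set(), []
    for _ in range(k):
        best = max(candidates, key=lambda c: len(candidates[c] - covered))
        if not candidates[best] - covered:
            break                     # nothing new can be covered
        picked.append(best)
        covered |= candidates[best]
    return picked, covered

candidates = {                        # candidate -> vehicles within its range
    "taxi_A": {1, 2, 3}, "taxi_B": {3, 4},
    "uav_1": {4, 5, 6, 7}, "uav_2": {2, 7},
}
print(greedy_max_coverage(candidates, k=2))  # (['uav_1', 'taxi_A'], {1..7})
```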
APA, Harvard, Vancouver, ISO, and other citation styles
14

Samikwa, Eric. „Flood Prediction System Using IoT and Artificial Neural Networks with Edge Computing“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280299.

Full text of the source
Annotation:
Flood disasters affect millions of people across the world by causing severe loss of life and colossal damage to property. The Internet of Things (IoT) has been applied in areas such as flood prediction, flood monitoring, and flood detection. Although IoT technologies cannot stop flood disasters from occurring, they are exceptionally valuable tools for conveying disaster-preparedness and prevention information. Advances have been made in flood prediction using artificial neural networks (ANN). Despite the various advancements in flood prediction systems through the use of ANNs, there has been less focus on utilising edge computing for improved efficiency and reliability of such systems. In this thesis, a system for short-term flood prediction using IoT and ANNs is proposed, where the prediction computation is carried out on a low-power edge device. The system monitors real-time rainfall and water-level sensor data and predicts flood water levels ahead of time using long short-term memory. The system can be deployed on battery power, as it uses low-power IoT devices and communication technology. The results of evaluating a prototype of the system indicate good performance in terms of flood prediction accuracy and response time. The application of ANNs with edge computing will help improve the efficiency of real-time flood early-warning systems by bringing the prediction computation close to where the data is collected.
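
A minimal sketch (assumed window sizes and layer widths, not the thesis model) of an LSTM that maps a window of recent rainfall and water-level readings to a future water level:

```python
import numpy as np
import tensorflow as tf

T, F = 24, 2  # window length, features (rainfall, water level)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(T, F)),
    tf.keras.layers.LSTM(32),      # long short-term memory layer
    tf.keras.layers.Dense(1),      # predicted future water level
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(256, T, F).astype("float32")  # stand-in sensor windows
y = np.random.rand(256, 1).astype("float32")     # stand-in future levels
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(x[:1], verbose=0))           # one short-term forecast
```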
APA, Harvard, Vancouver, ISO, and other citation styles
15

Garofalo, Angelo <1993&gt. „Flexible Computing Systems For AI Acceleration At The Extreme Edge Of The IoT“. Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10288/1/PhD_Thesis_Angelo_Garofalo_ETIT_34.pdf.

Full text of the source
Annotation:
Embedding intelligence in extreme edge devices allows raw data acquired from sensors to be distilled into actionable information directly on IoT end-nodes. This computing paradigm, in which end-nodes no longer depend entirely on the Cloud, offers undeniable benefits and drives a large research area (TinyML) aiming to deploy leading Machine Learning (ML) algorithms on microcontroller-class devices. To fit the limited memory storage capability of these tiny platforms, full-precision Deep Neural Networks (DNNs) are compressed by representing their data down to byte and sub-byte formats in the integer domain, yielding Quantized Neural Networks (QNNs). However, the current generation of microcontroller systems can barely cope with the computing requirements of QNNs. This thesis tackles the challenge from many perspectives, presenting solutions at both the software and hardware levels, exploiting parallelism, heterogeneity and software programmability to guarantee high flexibility and high energy-performance proportionality. The first contribution, PULP-NN, is an optimized software computing library for QNN inference on parallel ultra-low-power (PULP) clusters of RISC-V processors, showing one order of magnitude improvement in performance and energy efficiency compared to current state-of-the-art (SoA) STM32 microcontroller systems (MCUs) based on ARM Cortex-M cores. The second contribution is XpulpNN, a set of RISC-V domain-specific instruction set architecture (ISA) extensions for sub-byte integer arithmetic computation. The solution, including the ISA extensions and the micro-architecture to support them, achieves energy efficiency comparable with dedicated DNN accelerators and surpasses the efficiency of SoA ARM Cortex-M based MCUs, such as the low-end STM32M4 and the high-end STM32H7 devices, by up to three orders of magnitude. To overcome the Von Neumann bottleneck while guaranteeing the highest flexibility, the final contribution integrates an Analog In-Memory Computing accelerator into the PULP cluster, creating a fully programmable heterogeneous fabric that demonstrates end-to-end inference capabilities on SoA MobileNetV2 models, showing two orders of magnitude performance improvement over current SoA analog/digital solutions.
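
To make the QNN arithmetic concrete, here is a toy illustration (in no way the PULP-NN code itself) of the int8 multiply with 32-bit accumulation and fixed-point requantization that such libraries optimize; shapes and scaling parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.integers(-128, 128, size=(16, 64), dtype=np.int8)  # int8 weights
a = rng.integers(-128, 128, size=(64, 8), dtype=np.int8)   # int8 activations

acc = w.astype(np.int32) @ a.astype(np.int32)  # accumulate in int32
scale, shift = 3, 10                           # fixed-point requant parameters
out = np.clip((acc * scale) >> shift, -128, 127).astype(np.int8)
print(out.shape, out.dtype)                    # (16, 8) int8
```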
APA, Harvard, Vancouver, ISO, and other citation styles
16

Bassi, Lorenzo. „Orchestration of a MEC-based multi-protocol IoT environment“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24114/.

Full text of the source
Annotation:
Nowadays we are witnessing a continuous increase in the number of IoT devices that must be configured and supported by modern networks. In an industrial environment, a huge number of these devices need to coexist at the same time. Each one uses its own communication/transport protocol, and a huge effort needs to be made during the setup of the system. In addition, different kinds of architectures can be used. That is why network setup is not so easy in this kind of heterogeneous environment. The answer to all these problems can be found in the emerging cloud and edge computing architectures, which open up new opportunities and challenges. They are capable of enabling on-demand deployment of all IoT services. In this thesis, a Multi-access Edge Computing (MEC) approach is proposed to handle all the possible multi-protocol scenarios. All services are transformed into MEC-based services, even if they run over multiple technological domains. As a result, it was shown that this kind of solution is effective and can simplify the deployment of IoT services by using the APIs defined by the MEC standard. As mentioned above, one of the most important tasks of these new-generation networks is to be self-configurable in a very short amount of time, and this is the scope of my research. The aim of this thesis is to reduce as much as possible the time that a network requires to configure itself automatically, considering an Industrial IoT as a Service (IIoTaaS) scenario.
APA, Harvard, Vancouver, ISO, and other citation styles
17

Raffa, Viviana. „Edge/cloud virtualization techniques and resources allocation algorithms for IoT-based smart energy applications“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22864/.

Full text of the source
Annotation:
Nowadays, the installation of residential battery energy storage (BES) has increased as a consequence of the decrease in the cost of batteries. The coupling of small-scale energy generation (residential PV) and residential BES promotes the integration of microgrids (MG), i.e., clusters of local energy sources, energy storage, and customers that are represented as a single controllable entity. The operations between multiple grid-connected MGs and the distribution network can be coordinated by controlling the power exchange; however, in order to achieve this level of coordination, a control and communication MG interface should be developed as an add-on DMS (Distribution Management System) functionality to integrate MG energy scheduling with the network's optimal power flow. This thesis proposes an edge-cloud architecture that is able to integrate the microgrid energy scheduling method with the grid-constrained power flow, as well as providing tools for controlling and monitoring edge devices. As a specific case study, we consider the problem of determining the energy schedule (the amount extracted from or stored in the batteries) for each prosumer in a microgrid with a certain global objective (e.g., to make as few energy exchanges as possible with the main grid). The results show that, in order to better optimize the BES schedule, it is necessary to evaluate the composition of a microgrid in such a way as to have balanced deficits and surpluses, which can be done with Machine Learning (ML) techniques based on past production and consumption data for each prosumer.
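
A simplified sketch of the scheduling problem just described (not the thesis's method): at each hour the battery absorbs PV surplus and covers deficits so that the exchange with the main grid is minimized; all figures are invented.

```python
def schedule(pv, load, capacity, soc=0.0):
    """Greedy hourly BES schedule; returns grid exchange (+import/-export)."""
    grid = []
    for p, l in zip(pv, load):
        net = p - l                            # surplus (+) or deficit (-)
        if net >= 0:
            charge = min(net, capacity - soc)  # store the surplus in the BES
            soc += charge
            grid.append(-(net - charge))       # export whatever does not fit
        else:
            discharge = min(-net, soc)         # cover the deficit from the BES
            soc -= discharge
            grid.append(-net - discharge)      # import the remainder
    return grid, soc

pv = [0, 2, 5, 6, 3, 0]                        # kWh per hour (made up)
load = [1, 1, 2, 2, 3, 4]
print(schedule(pv, load, capacity=5.0))        # grid ~ [1, 0, 0, -3, 0, 0]
```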
APA, Harvard, Vancouver, ISO, and other citation styles
18

AGUIARI, DAVIDE. „Exploring Computing Continuum in IoT Systems: Sensing, Communicating and Processing at the Network Edge“. Doctoral thesis, Università degli Studi di Cagliari, 2021. http://hdl.handle.net/11584/311478.

Full text of the source
Annotation:
As the Internet of Things (IoT), originally comprising only a few simple sensing devices, reaches 34 billion units by the end of 2020, its devices can no longer be defined as merely monitoring sensors. IoT capabilities have improved in recent years as relatively large internal computation and storage capacity have become a commodity. In the early days of IoT, processing and storage were typically performed in the cloud. New IoT architectures are able to perform complex tasks directly on-device, thus enabling the concept of an extended computational continuum. Real-time critical scenarios, e.g. autonomous vehicle sensing, area surveying or disaster rescue and recovery, require all the actors involved to be coordinated and to collaborate without human interaction toward a common goal, sharing data and resources, even in areas covered only by intermittent networks. This poses new problems in distributed systems, resource management, device orchestration, and data processing. This work proposes a new orchestration and communication framework, namely C-Continuum, designed to manage resources in heterogeneous IoT architectures across multiple application scenarios. This work focuses on two key sustainability macro-scenarios: (a) environmental sensing and awareness, and (b) electric mobility support. In the first case, a mechanism to measure air quality over a long period of time for different applications at global scale (3 continents, 4 countries) is introduced. The system has been developed in-house, from the sensor design to the mist-computing operations performed by the nodes. In the second scenario, a technique to transmit large amounts of fine-time-granularity battery data from a moving vehicle to a control center is proposed, jointly with the ability to allocate tasks on demand within the computing continuum.
APA, Harvard, Vancouver, ISO, and other citation styles
19

Aguiari, Davide. „Exploring Computing Continuum in IoT Systems : sensing, communicating and processing at the Network Edge“. Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS131.

Full text of the source
Annotation:
As the Internet of Things (IoT), originally comprising only a few simple sensing devices, reaches 34 billion units by the end of 2020, its devices can no longer be defined as merely monitoring sensors. IoT capabilities have improved in recent years as relatively large internal computation and storage capacity have become a commodity. In the early days of IoT, processing and storage were typically performed in the cloud. New IoT architectures are able to perform complex tasks directly on-device, thus enabling the concept of an extended computational continuum. Real-time critical scenarios, e.g. autonomous vehicle sensing, area surveying or disaster rescue and recovery, require all the actors involved to be coordinated and to collaborate without human interaction toward a common goal, sharing data and resources, even in areas covered only by intermittent networks. This poses new problems in distributed systems, resource management, device orchestration, and data processing. This work proposes a new orchestration and communication framework, namely C-Continuum, designed to manage resources in heterogeneous IoT architectures across multiple application scenarios. This work focuses on two key sustainability macro-scenarios: (a) environmental sensing and awareness, and (b) electric mobility support. In the first case, a mechanism to measure air quality over a long period of time for different applications at global scale (3 continents, 4 countries) is introduced. The system has been developed in-house, from the sensor design to the mist-computing operations performed by the nodes. In the second scenario, a technique to transmit large amounts of fine-time-granularity battery data from a moving vehicle to a control center is proposed, jointly with the ability to allocate tasks on demand within the computing continuum.
APA, Harvard, Vancouver, ISO, and other citation styles
20

Xia, Chunqiu. „Energy Demand Response Management in Smart Home Environments“. Thesis, The University of Sydney, 2018. http://hdl.handle.net/2123/20182.

Der volle Inhalt der Quelle
Annotation:
With the penetration of the Internet of Things (IoT) paradigm into the household, an increasing number of smart appliances have been deployed to improve the comfort of living in the home. At present, most smart home devices adopt a Cloud-based paradigm. The increasing electricity overhead from these smart appliances, however, has become an issue, as existing home energy management systems are unable to reduce electricity consumption effectively. To address this, we propose an Edge-based computing platform built from lightweight computing devices. In our experiments, this Edge-based platform proved more energy efficient than the traditional Cloud-based platform. To further reduce energy tariffs for households, we propose an energy management framework, namely the Edge-based Energy Management System (EEMS), to be used with the Edge-based system designed in the first stage of our research. The EEMS requires low infrastructure investment, and a small-scale solar energy harvesting system has been integrated into it. A non-intrusive load monitoring (NILM) algorithm is used for appliance monitoring, and the scheduling strategy conforms to user preferences. We conducted a realistic experiment with several smart appliances and a Raspberry Pi, in which the electricity tariff was reduced by 82.3%. The last part of the research addresses demand response (DR) technology: with the development of DR, energy management systems such as the EEMS can be deployed more effectively. We propose an electricity business trading model integrated with user-side demand response resources, which can be adopted to manage risks, increase profit and improve user satisfaction. Users also benefit from tariff reductions under this model.
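To make the scheduling idea concrete, the following is a minimal sketch of the kind of tariff-aware appliance scheduling an EEMS-style system performs; the hourly tariff, appliance duration and function names are invented for illustration and do not come from the thesis.

```python
# Hypothetical tariff-aware scheduling sketch (not the EEMS implementation):
# pick the start hour that minimizes the cost of running one appliance.
def cheapest_start(tariff, duration_h):
    """Start hour minimizing total cost for a job running `duration_h` hours."""
    costs = [sum(tariff[t:t + duration_h])
             for t in range(len(tariff) - duration_h + 1)]
    return min(range(len(costs)), key=costs.__getitem__)

hourly_tariff = [0.30] * 7 + [0.45] * 10 + [0.30] * 7   # invented day tariff, $/kWh
print(cheapest_start(hourly_tariff, duration_h=2))      # e.g. run the washer at hour 0
```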
APA, Harvard, Vancouver, ISO und andere Zitierweisen
22

Shirin, Abkenar Forough. „Towards Hyper-efficient IoT Networks Using Fog Paradigm“. Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/28951.

Der volle Inhalt der Quelle
Annotation:
Fog computing emerged as a valuable paradigm for improving the efficiency of the typical cloud-of-things (CoT) architecture of Internet of Things (IoT) networks. In contrast to the CoT, in which resource-rich high-performance data centers (DCs) are located far from the energy-constrained terminal nodes (TNs), fog nodes (FNs) in a fog-enabled architecture provide computing resources in the proximity of the TNs. Therefore, the TNs consume less energy offloading their generated tasks to the FNs than to the cloud DCs. Moreover, shortening the distance between the TNs and the FNs alleviates the transmission latency of the delay-sensitive tasks generated by the TNs. This is most significant for applications such as smart healthcare, search and rescue, and disaster management, where making a prompt decision is vital to save lives. However, Fog-IoT networks still face challenges regarding energy efficiency and the provisioning of quality-of-service (QoS) requirements, especially in terms of delay and throughput. The motivation behind this thesis is to tackle these challenges and improve the performance of Fog-IoT networks. To this end, novel optimization problems, models, methods, and algorithms are proposed that focus mainly on energy-efficiency improvement and QoS provisioning in Fog-IoT networks. Moreover, given the importance of the mobility of FNs, the contributions of the thesis encompass improving the performance of Fog-IoT networks with both fixed and mobile FNs.
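As a back-of-the-envelope illustration of why proximity helps, the toy calculation below compares offloading one task to a nearby fog node versus a distant cloud DC; all link parameters are invented, and the model (energy = radio power × transmit time) is a deliberate simplification, not the thesis's formulation.

```python
# Toy fog-vs-cloud offloading comparison (invented parameters, simplified model).
def offload_cost(bits, rate_bps, tx_power_w, prop_delay_s):
    """Return (energy in J, latency in s) for sending `bits` over one link."""
    tx_time = bits / rate_bps
    return tx_power_w * tx_time, tx_time + prop_delay_s

task_bits = 5e6                                     # a 5 Mbit sensing task
fog = offload_cost(task_bits, 50e6, 0.10, 0.002)    # short, fast hop to a fog node
cloud = offload_cost(task_bits, 10e6, 0.10, 0.080)  # longer path to a cloud DC
print(f"fog:   {fog[0]*1e3:.1f} mJ, {fog[1]*1e3:.1f} ms")
print(f"cloud: {cloud[0]*1e3:.1f} mJ, {cloud[1]*1e3:.1f} ms")
```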
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Ozeer, Umar Ibn Zaid. „Autonomic resilience of distributed IoT applications in the Fog“. Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM054.

Der volle Inhalt der Quelle
Annotation:
Recent computing trends advocate for more distributed paradigms, namely Fog computing, which extends the capacities of the Cloud to the edge of the network, close to end devices and end users in the physical world. The Fog is a key enabler of Internet of Things (IoT) applications, as it resolves some of the needs that the Cloud fails to provide, such as low network latencies, privacy, QoS, and geographical requirements. For this reason, the Fog has become increasingly popular and finds application in many fields such as smart homes and cities, agriculture, healthcare, transportation, etc. The Fog, however, is unstable, because it is constituted of billions of heterogeneous devices in a dynamic ecosystem. IoT devices may fail regularly because of bulk production and cheap design. Moreover, the Fog-IoT ecosystem is cyber-physical, and devices are thus subjected to external physical-world conditions, which increases the occurrence of failures. When failures occur in such an ecosystem, the resulting inconsistencies in the application affect the physical world by inducing hazardous and costly situations. In this Thesis, we propose an end-to-end autonomic failure management approach for IoT applications deployed in the Fog. The approach is composed of four functional steps: (i) state saving, (ii) monitoring, (iii) failure notification, and (iv) recovery. Each step is a collection of similar roles and is implemented taking into account the specificities of the ecosystem (e.g., heterogeneity, resource limitations). State saving aims at saving data concerning the state of the managed application: runtime parameters and the data in volatile memory, as well as the messages exchanged and functions executed by the application. Monitoring aims at observing and reporting information on the lifecycle of the application. When a failure is detected, failure notifications are propagated to the part of the application affected by that failure; this propagation aims at limiting the impact of the failure and providing a partial service. In order to recover from a failure, the application is reconfigured and the data saved during the state-saving step are used to restore a cyber-physically consistent state of the application. Cyber-physical consistency aims at maintaining a consistent behaviour of the application with respect to the physical world, as well as avoiding dangerous and costly circumstances. The approach was validated using model-checking techniques to verify important correctness properties. It was then implemented as a framework called F3ARIoT, which was evaluated on a smart home application. The results showed the feasibility of deploying F3ARIoT on real Fog-IoT applications as well as its good performance with regard to end-user experience.
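A minimal sketch of the checkpointing idea behind steps (i) and (iv) follows; the file name and state fields are invented, and this generic save/restore loop is only an illustration, not the F3ARIoT implementation.

```python
# Generic state-saving / recovery sketch (illustration only, not F3ARIoT).
import json
import pathlib

CHECKPOINT = pathlib.Path("app_state.json")

def save_state(state: dict) -> None:
    CHECKPOINT.write_text(json.dumps(state))       # (i) state saving

def recover() -> dict:
    # (iv) recovery: reload the last saved state after a failure is notified
    return json.loads(CHECKPOINT.read_text()) if CHECKPOINT.exists() else {}

state = {"lamp": "on", "last_msg_id": 42}
save_state(state)
assert recover() == state                          # restored state is consistent
```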
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Samie, Farzad [Verfasser], und J. [Akademischer Betreuer] Henkel. „Resource Management for Edge Computing in Internet of Things (IoT) / Farzad Samie ; Betreuer: J. Henkel“. Karlsruhe : KIT-Bibliothek, 2018. http://d-nb.info/1154856690/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Mestoukirdi, Mohamad. „Reliable and Communication-Efficient Federated Learning for Future Intelligent Edge Networks“. Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS432.

Der volle Inhalt der Quelle
Annotation:
In the realm of future 6G wireless networks, integrating the intelligent edge through the advent of AI signifies a momentous leap forward, promising revolutionary advancements in wireless communication. This integration fosters a harmonious synergy, capitalizing on the collective potential of these transformative technologies. Central to it is the role of federated learning, a decentralized learning paradigm that upholds data privacy while harnessing the collective intelligence of interconnected devices. By embracing federated learning, 6G networks can unlock a myriad of benefits for both wireless networks and edge devices. On the one hand, wireless networks gain the ability to exploit data-driven solutions, surpassing the limitations of traditional model-driven approaches; in particular, leveraging real-time data insights will empower 6G networks to adapt, optimize performance, and enhance network efficiency dynamically. On the other hand, edge devices benefit from personalized experiences and tailored solutions catering to their specific requirements, with improved performance and reduced latency through localized decision-making, real-time processing, and reduced reliance on centralized infrastructure. In the first part of the thesis, we tackle the predicament of statistical heterogeneity in federated learning, stemming from divergent data distributions among devices' datasets. Rather than training a conventional one-model-fits-all, which often performs poorly on non-IID data, we propose a user-centric set of rules that produce personalized models tailored to each user's objectives. To mitigate the prohibitive communication overhead associated with training a distinct personalized model for each user, users are partitioned into clusters based on the similarity of their objectives. This enables collective training of cohort-specific personalized models and reduces the total number of personalized models trained, lessening the consumption of wireless resources required to transmit model updates across bandwidth-limited wireless channels. In the second part, our focus shifts towards integrating remote IoT devices into the intelligent edge by leveraging unmanned aerial vehicles (UAVs) as federated learning orchestrators. While previous studies have extensively explored the potential of UAVs as flying base stations or relays in wireless networks, their utilization in facilitating model training is still a relatively new area of research. In this context, we leverage UAV mobility to bypass the unfavorable channel conditions in rural areas and establish learning grounds for remote IoT devices. However, UAV deployments pose challenges in terms of scheduling and trajectory design. To this end, a joint optimization of UAV trajectory, device scheduling, and learning performance is formulated and solved using convex optimization techniques and graph theory. In the third and final part of this thesis, we take a critical look at the communication overhead imposed by federated learning on wireless networks. While compression techniques such as quantization and sparsification of model updates are widely used, they often achieve communication efficiency at the cost of reduced model performance. To overcome this limitation, we employ over-parameterized random networks to approximate target networks through parameter pruning rather than direct optimization; this approach has been demonstrated to require transmitting no more than a single bit of information per model parameter. We show that state-of-the-art methods fail to capitalize on the full attainable communication-efficiency advantages of this approach. Accordingly, we propose a regularized loss function that considers the entropy of the transmitted updates, resulting in notable improvements to communication and memory efficiency during federated training on edge devices without sacrificing accuracy.
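The cohort idea in the first part lends itself to a compact sketch: aggregate model updates per cluster rather than per user or globally. The snippet below assumes users are already grouped by objective similarity; the cohort assignments, dataset sizes and parameter vectors are invented, and this is not the thesis code.

```python
# Cohort-based personalized federated averaging, in sketch form (invented data).
import numpy as np

def fedavg(updates, sizes):
    """Dataset-size-weighted average of parameter vectors from one cohort."""
    return np.average(np.stack(updates), axis=0, weights=np.asarray(sizes, float))

cohort_updates = {0: [np.ones(4), 2 * np.ones(4)], 1: [-np.ones(4)]}
cohort_sizes   = {0: [100, 300],                   1: [50]}
models = {c: fedavg(u, cohort_sizes[c]) for c, u in cohort_updates.items()}
print(models[0])   # cohort-0 model: one personalized model shared by its users
```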
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Perala, Sai Saketh Nandan. „Efficient Resource Management for Video Applications in the Era of Internet-of-Things (IoT)“. OpenSIUC, 2018. https://opensiuc.lib.siu.edu/theses/2311.

Der volle Inhalt der Quelle
Annotation:
The Internet-of-Things (IoT) is a network of interconnected devices with sensing, monitoring and processing functionalities that work in a cooperative way to offer services. Smart buildings, self-driving cars, house monitoring and management, and city electricity and pollution monitoring are some examples where IoT systems have already been deployed. Among the different kinds of devices in the IoT, cameras play a vital role, since they can capture rich and resourceful content. However, since multiple IoT devices share the same gateway, the data produced by high-definition cameras congests the network and depletes the available computational resources, degrading the Quality of Service of the visual content. In this thesis, we present an edge-based resource management framework for serving video processing applications in an Internet-of-Things (IoT) environment. In order to support the computational demands of latency-sensitive video applications and utilize the available network resources effectively, we employ an edge-based resource management policy. We evaluate our proposed framework with a face recognition use case.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Tania, Zannatun Nayem. „Machine Learning with Reconfigurable Privacy on Resource-Limited Edge Computing Devices“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-292105.

Der volle Inhalt der Quelle
Annotation:
Distributed computing allows effective data storage, processing and retrieval, but it poses security and privacy issues. Sensors are the cornerstone of IoT-based pipelines, since they constantly capture data until it can be analyzed at the central cloud resources. However, these sensor nodes are often constrained by limited resources. Ideally, it is desirable to make all the collected data features private, but due to resource limitations this may not always be possible; making all the features private may cause overutilization of resources, which would in turn affect the performance of the whole system. In this thesis, we design and implement a system that is capable of finding the optimal set of data features to make private, given the device's maximum resource constraints and the desired performance or accuracy of the system. Using generalization techniques for data anonymization, we create user-defined injective privacy encoder functions to make each feature of the dataset private. Regardless of resource availability, some data features are defined by the user as essential features to make private; all other data features that may pose a privacy threat are termed non-essential features. We propose Dynamic Iterative Greedy Search (DIGS), a greedy search algorithm that takes the resource consumption of each non-essential feature as input and returns the most optimal set of non-essential features that can be made private given the available resources, namely the features that consume the least resources. We evaluate our system on a Fitbit dataset containing 17 data features, 4 of which are essential private features for a given classification application. Our results show that we can provide 9 additional private features apart from the 4 essential features of the Fitbit dataset containing 1663 records, while saving 26.21% memory compared to making all the features private. We also test our method on a larger dataset generated with a Generative Adversarial Network (GAN). However, the chosen edge device, a Raspberry Pi, is unable to cater to the scale of the large dataset due to insufficient resources; our evaluations using 1/8th of the GAN dataset result in 3 extra private features with up to 62.74% memory savings compared to making all data features private. Maintaining privacy not only requires additional resources, but also has consequences for the performance of the designed applications. However, we discover that privacy encoding has a positive impact on the accuracy of the classification model for our chosen classification application.
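The greedy selection at the heart of DIGS can be illustrated in a few lines; the feature names and per-feature memory costs below are invented, and the snippet shows only the cheapest-first selection idea, not the actual algorithm from the thesis.

```python
# Cheapest-first greedy choice of non-essential features to privatize under a
# resource budget (illustrative sketch in the spirit of DIGS, invented costs).
def greedy_private_features(costs: dict, budget_mb: float) -> list:
    chosen, used = [], 0.0
    for feature, cost in sorted(costs.items(), key=lambda kv: kv[1]):
        if used + cost <= budget_mb:
            chosen.append(feature)
            used += cost
    return chosen

costs = {"steps": 1.0, "heart_rate": 2.5, "sleep": 0.8, "calories": 1.7}  # MB
print(greedy_private_features(costs, budget_mb=3.0))  # -> ['sleep', 'steps']
```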
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Sigwele, Tshiamo, Yim Fun Hu, M. Ali, Jiachen Hou, M. Susanto und H. Fitriawan. „An intelligent edge computing based semantic gateway for healthcare systems interoperability and collaboration“. IEEE, 2018. http://hdl.handle.net/10454/17552.

Der volle Inhalt der Quelle
Annotation:
The use of Information and Communications Technology (ICT) in healthcare has the potential to minimize medical errors, reduce healthcare costs and improve collaboration between healthcare systems, which can dramatically improve healthcare service quality. However, interoperability between different healthcare systems (clinics/hospitals/pharmacies) remains an open research issue due to a lack of collaboration and exchange of healthcare information. To solve this problem, cross-healthcare-system collaboration is required. This paper proposes a conceptual semantic-based healthcare collaboration framework based on Internet of Things (IoT) infrastructure that is able to offer secure cross-system information and knowledge exchange between different healthcare systems seamlessly, in a form readable by both machines and humans. In the proposed framework, an intelligent semantic gateway is introduced, where a web application with a RESTful Application Programming Interface (API) is used to expose the healthcare information of each system for collaboration. A case study that exposed patient data between two different healthcare systems was demonstrated in practice, in which a pharmacist can access a patient's electronic prescription from the clinic.
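As an illustration of how the gateway could expose healthcare records over a RESTful API, here is a minimal endpoint sketch; the route, data fields and patient identifier are hypothetical, and a real deployment would add authentication and semantic mapping.

```python
# Hypothetical RESTful endpoint sketch for cross-system data exposure
# (illustration only; route and fields are invented).
from flask import Flask, jsonify

app = Flask(__name__)
PRESCRIPTIONS = {"patient-001": {"drug": "amoxicillin", "dose_mg": 500}}

@app.route("/api/patients/<pid>/prescription")
def prescription(pid):
    # A real gateway would authenticate callers and map terms to a shared ontology.
    return jsonify(PRESCRIPTIONS.get(pid, {}))

if __name__ == "__main__":
    app.run(port=8080)
```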
British Council Institutional Links grant under the BEIS-managed Newton Fund.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Monducci, Francesca. „Infrastruttura Edge-based per Sistemi Predittivi in Ambito Industriale“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24054/.

Der volle Inhalt der Quelle
Annotation:
Throughout history, the continuous evolution of technology has led to several industrial revolutions, the latest of which is so-called Industry 4.0. Industry 4.0 integrates a number of new technologies aimed at improving working conditions, creating new business models, and generally increasing productivity and quality. At the base of this industrial revolution lies the IoT, which allows the collection of large amounts of data, such as the behaviour of industrial machines. By processing these data it is possible to create predictive systems, i.e. systems able to predict future behaviour from historical and current data. Building such systems requires considerable computing power, offered by the Cloud. Typically, however, the Cloud is not located close to the data source, so running these predictive systems in the Cloud would lead to high latencies and costs, reducing the efficiency of the process. This is where the Edge comes in: a node with less computing power than the Cloud, but able to execute the predictions and located close to the data source, thus reducing latency and cost. Several technologies enable the implementation of such processes, among them ioFog. This thesis presents an assessment of ioFog for this purpose, along with the exploration and use of many other well-known technologies such as Docker, TensorFlow, Spring Boot and MongoDB.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Vargas, Vargas Fernando. „Cloudlet for the Internet-of- Things“. Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191433.

Der volle Inhalt der Quelle
Annotation:
With an increasing number of people living in urban areas, many cities around the globe are faced with issues such as increased pollution and traffic congestion. In an effort to tackle such challenges, governments and city councils are formulating new and innovative strategies. The integration of ICT with these strategies creates the concept of smart cities. The Internet of Things (IoT) is a key driver for smart city initiatives, making it necessary to have an IT infrastructure that can take advantage of the many benefits that IoT can provide. The Cloudlet is a new infrastructure model that offers cloud-computing capabilities at the edge of the mobile network. This environment is characterized by low latency and high bandwidth, constituting a novel ecosystem in which network operators can open their network edge to third parties, allowing them to flexibly and rapidly deploy innovative applications and services towards mobile subscribers. In this thesis, we present a cloudlet architecture that leverages edge computing to provide a platform for IoT devices on top of which many smart city applications can be deployed. We first provide an overview of existing challenges and requirements in IoT systems development. Next, we analyse existing cloudlet solutions. Finally, we present our cloudlet architecture for IoT, including its design and a prototype solution. For our cloudlet prototype, we focused on a micro-scale emission model to calculate the CO2 emissions of each individual vehicle trip, and implemented the functionality that allows us to read data from CO2 sensors. The location data is obtained from an Android smartphone and processed in the cloudlet. We conclude with a performance evaluation.
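As a toy illustration of the cloudlet-side computation, the snippet below estimates per-trip CO2 as distance times an emission factor; this is a deliberate simplification of the micro-scale emission model used in the thesis, and all numbers are invented.

```python
# Deliberately simplified per-trip CO2 estimate (the thesis uses a richer
# micro-scale emission model; numbers here are invented).
def trip_co2_g(distance_km: float, factor_g_per_km: float = 120.0) -> float:
    return distance_km * factor_g_per_km

gps_segments_km = [0.8, 1.2, 0.5]          # segment lengths from smartphone GPS
print(trip_co2_g(sum(gps_segments_km)))    # grams of CO2 for the whole trip
```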
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Vadivelu, Somasundaram. „Sensor data computation in a heavy vehicle environment : An Edge computation approach“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235486.

Der volle Inhalt der Quelle
Annotation:
In a heavy vehicle, the internet connection is not reliable, primarily because the truck often travels to remote locations where no network may be available. The data generated by the sensors in a vehicle might not be deliverable to the internet when the connection is poor, so it is appropriate to store and perform basic computation on those data in the heavy vehicle itself and send them to the cloud when a good network connection is available. The process of computing near the place where data is generated is called Edge computing. Scania has its own Edge computing solution, which it uses for computations such as preprocessing of sensor data, storing data, etc. Scania's solution is compared with a commercial edge computing platform, AWS (Amazon Web Services) Greengrass, in terms of data efficiency, CPU load, and memory footprint. The conclusion shows that the Greengrass solution works better than the current Scania solution in terms of CPU load and memory footprint; in data efficiency the Scania solution is more efficient, but as the data volume generated by the truck grows, the Greengrass solution may become competitive with the Scania solution. A further topic explored in this thesis is the digital twin: the virtual form of a physical entity, formed by obtaining real-time values from the sensors attached to the physical device. With the help of these sensor values, a system with an approximate state of the device can be framed, which can then act as the digital twin. The digital twin can be considered an important use case of edge computing and is realized here with the help of AWS Device Shadow.
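The device-shadow principle behind the digital twin can be sketched generically: the edge keeps a reported state and reconciles it with a desired state. The snippet below illustrates only this idea with invented fields; it is not the AWS Device Shadow API.

```python
# Generic shadow-style digital twin sketch (not the AWS IoT SDK).
shadow = {"reported": {}, "desired": {}}

def report(sensor_values: dict) -> None:
    shadow["reported"].update(sensor_values)       # twin mirrors the vehicle

def delta() -> dict:
    """Fields where the desired state differs from what the device reports."""
    return {k: v for k, v in shadow["desired"].items()
            if shadow["reported"].get(k) != v}

report({"engine_temp_c": 88, "speed_kmh": 72})
shadow["desired"]["speed_kmh"] = 80
print(delta())   # {'speed_kmh': 80} -> the twin requests a state change
```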
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Longo, Eugenio. „AI e IoT: contesto e stato dell’arte“. Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Den vollen Inhalt der Quelle finden
Annotation:
Artificial intelligence is a branch of computer science that makes it possible to program and design systems capable of endowing machines with characteristics typically considered human. The Internet of Things is a network of physical objects able to connect and exchange data with other devices over the internet. Although the Internet of Things and artificial intelligence are two distinct concepts, they can be integrated to create new solutions with high potential. The combined use of these two technologies increases the value of both, as it enables the collection of data and the building of predictive models. The objective of this thesis is to analyse the evolution of artificial intelligence and of technological innovation in everyday life. Specifically, artificial intelligence applied to the Internet of Things has taken hold in the management of large-scale settings such as smart cities and smart mobility, and of small-scale ones such as smart homes, putting a large amount of private data on the network. However, problems remain: to date, the level of security required to use these technologies in more critical applications has not yet been reached. The greatest challenge in the workplace will be to understand and exploit the potential that the new paradigm in the use of artificial intelligence will bring.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Sirigu, Giovanni. „Progettazione di Gateway Edge per Smart Factory“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17589/.

Der volle Inhalt der Quelle
Annotation:
This thesis concerns the design and implementation of a programmable Edge Gateway for the Smart Factory that enables the monitoring of industrial machines, with the aim of integrating them with company information systems. The core of the developed system is a Raspberry Pi model 3 B+, around which a set of wireless and wired communication interfaces and both analogue and digital I/O interfaces are integrated. The work began with a study of the IoT and Smart Factory domains, which led to the hardware design of the Edge Gateway. A printed circuit board to be integrated with the Raspberry Pi was then developed, and Python libraries were written for interfacing with the peripherals.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Rahman, Hasibur. „Distributed Intelligence-Assisted Autonomic Context-Information Management : A context-based approach to handling vast amounts of heterogeneous IoT data“. Doctoral thesis, Stockholms universitet, Institutionen för data- och systemvetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-149513.

Der volle Inhalt der Quelle
Annotation:
As an implication of the rapid growth in Internet-of-Things (IoT) data, the current focus has shifted towards utilizing and analysing the data in order to make sense of it. The aim is to make instantaneous, automated, and informed decisions that will drive the future IoT. This corresponds to extracting and applying knowledge from IoT data, which brings both a substantial challenge and high value. Context plays an important role in reaping value from data and is capable of countering the IoT data challenges. The management of heterogeneous contextualized data is infeasible and insufficient with existing solutions, which mandates new ones. Research until now has mostly concentrated on providing cloud-based IoT solutions; among other issues, this hampers real-time and faster decision-making. In view of this, this dissertation undertakes a study of a context-based approach entitled Distributed intelligence-assisted Autonomic Context Information Management (DACIM), the purpose of which is to efficiently (i) utilize and (ii) analyse IoT data. To address the challenges and solutions with respect to enabling DACIM, the dissertation starts by proposing a logical-clustering approach for proper IoT data utilization. The environment in which the Things are immersed changes rapidly and becomes dynamic; to this end, self-organization has been supported by proposing self-* algorithms, which achieved 10 organized Things per second and a high accuracy rate for Things joining. IoT contextualized data further requires scalable dissemination, which has been addressed by a Publish/Subscribe model; it has been shown that a high publication rate and faster subscription matching are realisable. The dissertation ends with the proposal of a new approach that assists the distribution of intelligence for analysing context information, in order to enhance the intelligence of Things. The approach allows some of the knowledge applications to be brought from the cloud to the edge, where the edge-based solution is equipped with intelligence that enables faster responses and reduced dependency on rules by leveraging artificial intelligence techniques. To infer knowledge for different IoT applications closer to the Things, a multi-modal reasoner has been proposed which demonstrates faster response. The evaluations of the designed and developed DACIM give promising results, distributed over seven publications; from this, it can be concluded that it is feasible to realize a distributed intelligence-assisted context-based approach that contributes towards autonomic context information management in the ever-expanding IoT realm.

At the time of the doctoral defense, the following paper was unpublished and had a status as follows: Paper 7: Submitted.

APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Passeri, Luca. „Pervasive Jarvis: Evoluzione di un Sistema IoT per le Smart Home“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Den vollen Inhalt der Quelle finden
Annotation:
The enormous growth of the Internet of Things has transformed the environments we live in through the spread of increasingly intelligent devices capable of interacting with each other and with the outside world. In this scenario Jarvis is introduced, a virtual assistant for homes and offices. Jarvis integrates various products and services, offering the user numerous functionalities to automate a home and control it remotely. This project aims to evolve Jarvis to bring it in line with the current standard of IoT systems for smart homes, taking into account in particular concepts such as pervasive computing and the Web of Things. The main purpose is to transform Jarvis so as to significantly increase its capabilities: to make the assistant more pervasive, so that it can be interacted with in any environment, and to open it further to the outside world, allowing interaction with the enormous number of devices and services on the Web.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Harmassi, Mariem. „Thing-to-thing context-awareness at the edge“. Thesis, La Rochelle, 2019. http://www.theses.fr/2019LAROS037.

Der volle Inhalt der Quelle
Annotation:
The Internet of Things (IoT) today comprises a plethora of different sensors and diverse connected objects, constantly collecting and sharing heterogeneous sensory data from their environment. This enables the emergence of new applications exploiting the collected data to facilitate citizens' lifestyles. These IoT applications are made context-aware thanks to data collected about the user's context, so that they can adapt their behavior autonomously without human intervention. In this Thesis, we propose a novel paradigm that concerns Machine-to-Machine (M2M)/Thing-to-Thing (T2T) interactions being aware of each other's context, named "T2T context-awareness at the edge"; it brings conventional context-awareness from the application front end to the application back end. More precisely, we propose to empower IoT devices with intelligence, allowing them to understand their environment and adapt their behaviors based on, and even act upon, the information captured by the neighboring devices around them, thus creating a collective intelligence. The first challenge we face in order to make IoT devices context-aware is: (i) how can we extract such information without deploying any dedicated resources for this task? To do so, we propose in our first work a context reasoner [1] based on cooperation among IoT devices located in the same surroundings. Such cooperation aims at mutually exchanging data about each other's context. To enable IoT devices to see, hear, and smell the physical world for themselves, we first need to make them connected so they can share their observations. For a mobile and energy-constrained device, the second challenge is: (ii) how to discover as many neighbors as possible in its vicinity while preserving its energy resources? We propose WELCOME [2], a low-latency and energy-efficient neighbor discovery scheme based on a single-delegate election method. Finally, a Publish-Subscribe that takes into account the context at the edge of IoT devices can greatly reduce the overhead and save energy by avoiding unnecessary transmission of data that does not match application requirements. However, if not designed properly, building such T2T context-awareness could imply an overload of subscriptions to meet context-estimation needs. So our third contribution addresses: (iii) how to make IoT devices context-aware while saving energy? To answer this, we propose an energy-efficient and context-aware Publish-Subscribe [3] that strikes a balance between the energy consumed by context estimation and the energy saved by context-based filtering close to the data sources.
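The context-based filtering idea in contribution (iii) can be sketched as filtering at the source: the publisher drops samples that no current subscription needs. The topic name and predicate below are invented, and this is an illustration, not the protocol from the thesis.

```python
# Context-based filtering at the data source (illustrative sketch, invented
# subscription predicate; not the Publish-Subscribe protocol from the thesis).
subscriptions = [{"topic": "air/co2", "min_ppm": 800}]   # alert-level CO2 only

def should_publish(topic: str, value: float) -> bool:
    return any(s["topic"] == topic and value >= s["min_ppm"]
               for s in subscriptions)

for reading in (412.0, 950.0):
    if should_publish("air/co2", reading):
        print("publish", reading)   # 950.0 is sent; 412.0 never leaves the node
```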
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Le, Xuan Sang. „Co-conception Logiciel/FPGA pour Edge-computing : promotion de la conception orientée objet“. Thesis, Brest, 2017. http://www.theses.fr/2017BRES0041/document.

Der volle Inhalt der Quelle
Annotation:
Cloud computing is often the most referenced computational model for the Internet of Things. This model adopts a centralized architecture where all sensor data is stored and processed in a single location. Despite many advantages, this architecture suffers from low scalability while the data available on the network is continuously increasing. It is worth noting that, already today, more than 50% of internet connections are between things. This can lead to reliability problems in real-time and latency-sensitive applications. Edge computing, which is based on a decentralized architecture, is known as a solution to this emerging problem by (1) reinforcing the equipment at the edge (things) of the network and (2) pushing the data processing to the edge. Edge-centric computing requires sensor nodes with more software capability and processing power while, like any embedded system, being constrained in energy consumption. Hybrid hardware systems consisting of an FPGA and a processor offer a good trade-off for this requirement: FPGAs are known to enable parallel and fast computation within a low energy budget, and the coupled processor provides a flexible software environment for edge-centric nodes. Application design for such hybrid network/software/hardware (SW/HW) systems remains a challenging task. It covers a large domain of system-level design, from high-level software down to low-level hardware (FPGA). This results in a complex system design flow and involves tools from different engineering domains. A common solution is to propose a heterogeneous design environment that combines and integrates these tools. However, the heterogeneous nature of this approach can pose reliability problems when it comes to data exchanges between tools. Our motivation is to propose a homogeneous design methodology and environment for such systems. We study the application of a modern design methodology, in particular object-oriented design (OOD), to the field of embedded systems. Our choice of OOD is motivated by the proven productivity of this methodology for the development of software systems. In the context of this thesis, we aim at using OOD to develop a homogeneous design environment for edge-centric systems. Our approach addresses three design concerns: (1) hardware design, where object-oriented principles and design patterns are used to improve the reusability, adaptability, and extensibility of the hardware system; (2) hardware/software co-design, for which we propose to use OOD to abstract the SW/HW integration and communication, encouraging system modularity and flexibility; (3) middleware design for Edge Computing, relying on a centralized development environment for distributed applications, while the middleware facilitates the integration of the peripheral nodes into the network and allows automatic remote reconfiguration. Ultimately, our solution offers software flexibility for the implementation of complex distributed algorithms, complemented by the full exploitation of FPGA performance. The FPGAs are placed in the nodes, as close as possible to the data acquisition by the sensors, in order to deploy an effective first stage of intensive processing.
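The SW/HW abstraction in concern (2) is essentially classic polymorphism: application code programs against a device interface and stays unchanged whether the back end is a CPU routine or an FPGA accelerator. The sketch below illustrates the pattern with invented class names; it is not the thesis framework.

```python
# Object-oriented SW/HW abstraction sketch (invented names, pattern only).
from abc import ABC, abstractmethod

class Device(ABC):
    @abstractmethod
    def process(self, samples: list) -> float: ...

class CpuFilter(Device):
    def process(self, samples):                 # pure-software back end
        return sum(samples) / len(samples)

class FpgaFilter(Device):
    def process(self, samples):                 # would delegate to FPGA fabric
        return sum(samples) / len(samples)      # placeholder for a hardware call

def pipeline(dev: Device, samples):             # identical code for either back end
    return dev.process(samples)

print(pipeline(CpuFilter(), [1.0, 2.0, 3.0]))   # 2.0
```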
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Miccoli, Roberta. „Implementation of a complete sensor data collection and edge-cloud communication workflow within the WeLight project“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22563/.

Der volle Inhalt der Quelle
Annotation:
This thesis develops the full workflow of data collection from a laser sensor connected to a mobile application, acting as the edge device, which subsequently transmits the data to a cloud platform for analysis and processing. The work is part of the We Light (WErable LIGHTing for smart apparels) project, in collaboration with the TTLab of the INFN (National Institute of Nuclear Physics). The goal of We Light is to create an intelligent sports shirt, equipped with sensors that gather information from the external environment and send it to a mobile device; the latter then forwards the data via an application to an open-source cloud platform, creating a true IoT system. The smart T-shirt can emit different levels of light depending on the perceived ambient light, with the aim of improving the safety of people doing sports on the road. The thesis objective is to employ a prototype board provided by the CNR-IMAMOTER to collect data and send it to the purpose-built application over a Bluetooth Low Energy connection. The connection between the edge device and the ThingsBoard (TB) IoT platform uses the MQTT protocol. Several device authentication techniques are implemented on TB, and a dedicated dashboard is created to display data from the IoT device; the user can also view the data in numerical and graphical form directly in the application, without necessarily having to access TB. The resulting app is useful and versatile, and can be adapted for other IoT purposes beyond the We Light project.
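For readers unfamiliar with the edge-to-cloud leg, below is a minimal sketch of publishing telemetry to a ThingsBoard MQTT endpoint with the paho-mqtt client; the host, token, and payload are placeholders, and the exact setup in the thesis may differ.

```python
# Minimal sketch: publish one telemetry sample to ThingsBoard over MQTT.
import json
import paho.mqtt.client as mqtt

TB_HOST = "thingsboard.example.org"    # placeholder
ACCESS_TOKEN = "DEVICE_ACCESS_TOKEN"   # per-device token, placeholder

client = mqtt.Client()                 # paho-mqtt 1.x API (2.x adds a callback_api_version argument)
client.username_pw_set(ACCESS_TOKEN)   # ThingsBoard uses the token as the MQTT username
client.connect(TB_HOST, 1883, keepalive=60)
client.loop_start()                    # network loop so the QoS 1 publish completes

sample = {"lux": 412, "led_level": 3}  # illustrative sensor reading
info = client.publish("v1/devices/me/telemetry", json.dumps(sample), qos=1)
info.wait_for_publish()

client.loop_stop()
client.disconnect()
```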
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Hvizdák, Lukáš. „Systém sběru dat v průmyslu“. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-413269.

Der volle Inhalt der Quelle
Annotation:
The master thesis focuses on the design and implementation of data collection from production, using a PLC, into an SQL database located in the cloud, with subsequent visualization. The work describes the applicable communication protocols, MQTT and OPC UA; the MQTT protocol was ultimately selected. It deals with securing the data transfer from the line to the cloud using the TLS protocol. The individual cloud services and their data-collection capabilities are described, as are the options for data visualization using existing open-source solutions, the differences between them, and ways of customizing the open-source Grafana environment. Real dashboards from production are presented. The data collection system was deployed in two plants for testing.
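A hedged sketch of such a TLS-secured MQTT publish with paho-mqtt follows; the broker address, CA file, credentials, and topic are placeholders, not the deployment described in the thesis.

```python
# Sketch: MQTT over TLS from a gateway to a cloud broker (port 8883).
import json
import paho.mqtt.client as mqtt

client = mqtt.Client()                            # paho-mqtt 1.x API
client.tls_set(ca_certs="ca.crt")                 # CA that signed the broker certificate
client.username_pw_set("plc-gateway", "secret")   # placeholder credentials
client.connect("broker.example.com", 8883)        # 8883 = MQTT over TLS
client.loop_start()

reading = {"line": 1, "cycle_time_s": 12.4}       # illustrative PLC datapoint
info = client.publish("factory/line1/telemetry", json.dumps(reading), qos=1)
info.wait_for_publish()

client.loop_stop()
client.disconnect()
```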
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Cantamaglia, Carlo. „Progettazione e sviluppo di un sistema di aggregazione dati per applicazioni WoT in uno scenario di monitoraggio strutturale“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22866/.

Der volle Inhalt der Quelle
Annotation:
Structural health monitoring (SHM) is becoming a crucial research topic for improving human safety and reducing maintenance costs. Despite the strong growth of Web of Things applications in this area, problems such as data pre-processing and service migration between Cloud and Edge remain unsolved. This work offers a solution to these problems by designing and developing a WoT aggregation system able to compute metrics and apply different migration policies depending on the specific case. Finally, the system was integrated into a larger project (Mac4Pro), within which its operation and results were evaluated.
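As a toy illustration of a metric-driven migration policy (not the Mac4Pro code), the sketch below keeps a service at the edge while it meets an assumed latency budget; all names and thresholds are invented.

```python
# Illustrative placement policy: stay at the edge while it is fast and not
# overloaded, otherwise migrate the aggregation service to the cloud.
from statistics import mean

LATENCY_BUDGET_MS = 50.0   # assumed SHM requirement, placeholder value

def choose_placement(recent_edge_latencies_ms, edge_cpu_load):
    """Return 'edge' or 'cloud' for the aggregation service."""
    if not recent_edge_latencies_ms:
        return "edge"
    overloaded = edge_cpu_load > 0.85                          # placeholder threshold
    too_slow = mean(recent_edge_latencies_ms) > LATENCY_BUDGET_MS
    return "cloud" if (overloaded or too_slow) else "edge"

print(choose_placement([18.2, 22.5, 19.9], edge_cpu_load=0.4))  # -> 'edge'
```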
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Li, Hengsha. „Real-time Cloudlet PaaS for GreenIoT : Design of a scalable server PaaS and a GreenIoT application“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-239004.

Der volle Inhalt der Quelle
Annotation:
Cloudlet is a recent topic that has attracted much interest in network systems research. It can be characterized as a PaaS (Platform as a Service) layer that allows mobile clients to execute their code in the cloud, and can be seen as a layer at the edge of the communication network. In this thesis, we present a cloudlet architecture design in which the cloudlet code is part of the client application itself. We first provide an overview of related work and describe existing challenges that need to be addressed. Next, we present an overall design for a cloudlet-based implementation, followed by the cloudlet architecture itself, including a prototype of both the client application and the cloudlet server. For the prototype, a CO2 data visualization application, we focus on how to structure the functions on the client side, how to schedule the cloudlet PaaS in the server, and how to make the server scalable. Finally, we conclude with a performance evaluation. Cloudlet technology is likely to be widely applied in IoT projects, such as data visualization of air and water quality, fan control, and traffic steering. Compared to the traditional centralized cloud architecture, a cloudlet offers high responsiveness, flexibility, and scalability.
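To make the offloading pattern concrete, here is a hypothetical Python sketch in which the client application carries the offloadable function and falls back to local execution when no cloudlet answers; the URL and request format are assumptions, not the thesis protocol.

```python
# Sketch: client-side offload with graceful local fallback.
import requests

def visualize_co2(samples):            # the function the client may offload
    return {"avg_ppm": sum(samples) / len(samples), "n": len(samples)}

def run_task(samples, cloudlet_url="http://cloudlet.local:8080/run"):
    try:
        resp = requests.post(cloudlet_url,
                             json={"task": "visualize_co2", "args": samples},
                             timeout=0.5)   # tight deadline: cloudlets are one hop away
        resp.raise_for_status()
        return resp.json()
    except requests.RequestException:
        return visualize_co2(samples)       # no cloudlet reachable: run locally

print(run_task([412.0, 415.3, 409.8]))
```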
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Jouni, Zalfa. „Analog spike-based neuromorphic computing for low-power smart IoT applications“. Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPAST114.

Der volle Inhalt der Quelle
Annotation:
As the Internet of Things (IoT) expands with more connected devices and complex communications, the demand for precise, energy-efficient localization technologies has intensified. Traditional machine learning and artificial intelligence (AI) techniques provide high accuracy in radio-frequency (RF) localization, but often at the cost of greater complexity and power usage. To address these challenges, this thesis explores the potential of neuromorphic computing, inspired by brain functionality, to enable energy-efficient AI-based RF localization. It introduces an end-to-end analog spike-based neuromorphic system (RF NeuroAS), with a simplified version fully implemented in 55 nm BiCMOS technology. RF NeuroAS identifies source positions within a 360-degree range on a two-dimensional plane, maintaining high resolution (10 or 1 degree) even in noisy conditions. The core of this system, an analog-based spiking neural network (A-SNN), was trained and tested on a simulated dataset (SimLocRF) from MATLAB and an experimental dataset (MeasLocRF) from anechoic-chamber measurements, both developed in this thesis. The learning algorithms for the A-SNN were developed through two approaches: software-based deep learning (DL) and bio-plausible spike-timing-dependent plasticity (STDP). RF NeuroAS achieves a localization accuracy of 97.1% with SimLocRF and 90.7% with MeasLocRF at a 10-degree resolution, maintaining high performance with power consumption in the nanowatt range. The simplified RF NeuroAS consumes only about 1.1 nW and operates within a 30 dB dynamic range. A-SNN learning, via DL and STDP, was demonstrated on the XOR and MNIST problems. DL depends on the non-linearity of the post-layout transfer functions of the A-SNN's neurons and synapses, while STDP depends on the random noise in the analog neuron circuits. These findings mark advances in energy-efficient IoT through neuromorphic computing, promising low-power smart edge-IoT breakthroughs inspired by brain mechanisms.
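As background for the STDP approach mentioned above, a generic pair-based STDP weight update is sketched below with textbook constants; it is not the analog-circuit rule implemented in the thesis.

```python
# Pair-based STDP: potentiate when the presynaptic spike precedes the
# postsynaptic one, depress otherwise.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012   # learning amplitudes (generic values)
TAU_PLUS = TAU_MINUS = 20e-3    # plasticity time constants (s)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt >= 0:                                    # pre before post -> potentiation
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)       # post before pre -> depression

w = 0.5
for t_pre, t_post in [(0.010, 0.015), (0.040, 0.032)]:  # toy spike pairs
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)   # keep weight bounded
print(f"final weight: {w:.4f}")
```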
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Feraudo, Angelo. „Distributed Federated Learning in Manufacturer Usage Description (MUD) Deployment Environments“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Der volle Inhalt der Quelle
Annotation:
The steady spread of Internet of Things (IoT) devices across different environments has created the need for new security and monitoring mechanisms within a network. Such devices are often considered sources of vulnerabilities that attackers can exploit to gain access to the network or carry out other attacks. This is due to the very nature of these devices: they offer services dealing with sensitive data (e.g., cameras) while having very limited resources. One step in this direction is the Manufacturer Usage Description (MUD) specification, which requires device manufacturers to provide files describing the communication pattern that their devices must follow. However, this specification only partially mitigates the above vulnerabilities: defining a communication pattern becomes unrealistic for IoT devices with very generic network traffic (e.g., Alexa). It is therefore of great interest to study an anomaly detection system based on machine learning techniques that can close this gap. In this work, three prototype implementations of the MUD specification are explored, and one of them is selected. A Proof-of-Concept conforming to the specification is then produced, containing an additional entity that gives the network administrator more authority in this environment. In a second phase, a distributed architecture is analyzed that can learn anomalies directly on the devices by exploiting Federated Learning, thereby preserving data privacy. The fundamental idea of this work is thus to propose an architecture based on these two new technologies, able to minimize the vulnerabilities of IoT devices in a distributed environment while preserving data privacy as much as possible.
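To make the Federated Learning idea concrete, a toy FedAvg round on a linear anomaly scorer is sketched below; the model, data, and hyperparameters are invented for illustration, and only model weights leave each node, which is the privacy argument made above.

```python
# Toy FedAvg: local gradient steps on each node, size-weighted averaging.
import numpy as np

def local_update(weights, data, lr=0.1):
    """One gradient step of a linear anomaly scorer on local data."""
    X, y = data                          # features, 0/1 anomaly labels
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fed_avg(global_w, node_datasets):
    """One federated round: broadcast, local training, weighted average."""
    updates, sizes = [], []
    for data in node_datasets:
        updates.append(local_update(global_w.copy(), data))
        sizes.append(len(data[1]))
    return np.average(updates, axis=0, weights=np.asarray(sizes, dtype=float))

rng = np.random.default_rng(0)
nodes = [(rng.normal(size=(32, 4)), rng.integers(0, 2, 32).astype(float))
         for _ in range(3)]              # three simulated IoT nodes
w = np.zeros(4)
for _ in range(5):                       # a few federated rounds
    w = fed_avg(w, nodes)
print(w)
```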
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Maltoni, Pietro. „Progetto di un acceleratore hardware per layer di convoluzioni depthwise in applicazioni di Deep Neural Network“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24205/.

Der volle Inhalt der Quelle
Annotation:
Progressive technological development and the constant monitoring, control, and analysis of the surrounding environment have led to increasingly capable IoT devices, giving rise to Edge Computing: these devices have the resources to process sensor data directly on-site. This technology suits CNNs, the neural networks used for image analysis and recognition. Separable convolutions represent a new frontier because they massively reduce the number of operations to be performed on data tensors by splitting the convolution into two parts: a depthwise and a pointwise stage. This yields very reliable results in terms of accuracy and speed, but power consumption remains the central problem, since the devices rely solely on an internal battery; a good trade-off between consumption and computational capability is therefore necessary. To answer this technological challenge, the state of the art proposes various solutions, based on clusters with optimized cores and dedicated instructions, or on FPGAs. In this thesis we propose a hardware accelerator, developed within PULP, aimed at computing depthwise convolution layers. Thanks to an HWC data layout in memory and to a Window Buffer, a window that slides over the image to perform the convolutions channel by channel, it was possible to develop a datapath architecture oriented towards data reuse; as a result, the accelerator delivers a maximum throughput of 4 pixels per clock cycle. With a performance of 6 GOP/s, an energy efficiency of 101 GOP/J, and a power consumption in the mW range (figures obtained by integrating the IP into the cluster of Darkside, a new research chip in 65 nm TSMC technology), the depthwise accelerator is an ideal candidate for this type of application.
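A quick numerical check of the operation-count argument, with an illustrative layer shape (not a figure from the thesis):

```python
# Multiply counts: standard convolution vs depthwise + pointwise split.
def conv_mults(h, w, c_in, c_out, k):
    standard = h * w * c_out * c_in * k * k
    depthwise = h * w * c_in * k * k          # one k x k filter per channel
    pointwise = h * w * c_out * c_in          # 1 x 1 mixing across channels
    return standard, depthwise + pointwise

std, sep = conv_mults(h=56, w=56, c_in=64, c_out=128, k=3)
print(f"standard: {std:,}  separable: {sep:,}  ratio: {std / sep:.1f}x")
# -> roughly an 8x reduction for this layer shape
```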
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Gandolfi, Riccardo. „Design of a memory-to-memory tensor reshuffle unit for ultra-low-power deep learning accelerators“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23706/.

Der volle Inhalt der Quelle
Annotation:
In the context of IoT edge processing, deep learning applications, and near-sensor analytics, the constraints on area occupation and power consumption in MCUs (Microcontroller Units) performing computationally intensive tasks are more stringent than ever. A promising direction is to develop HWPEs (Hardware Processing Engines) that support the end-node in executing these tasks. This work concerns the design and testing of the Datamover, a small and easily configurable HWPE for tensor shuffling and data marshaling operations. The accelerator is to be integrated within the Darkside PULP chip and can perform reordering operations and transpositions on data with different sub-byte widths. The focus is on the design of the internal buffering and transposition mechanism and on its performance compared to software execution on the platform. Synthesis results are also reported in terms of area occupation and timing.
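As a functional illustration of what such a reshuffle unit computes (in NumPy, so not cycle-accurate and not the Datamover's implementation), the sketch below unpacks 4-bit data, transposes a CHW tile to HWC, and repacks it:

```python
# Sub-byte tensor reshuffle: unpack int4, transpose CHW -> HWC, repack.
import numpy as np

def unpack_int4(packed):                 # two 4-bit values per byte
    lo = packed & 0x0F
    hi = (packed >> 4) & 0x0F
    return np.stack([lo, hi], axis=-1).reshape(*packed.shape[:-1], -1)

def pack_int4(vals):
    lo, hi = vals[..., 0::2], vals[..., 1::2]
    return (lo | (hi << 4)).astype(np.uint8)

c, h, w = 4, 2, 6                        # toy tile; w must be even for packing
packed_chw = np.arange(c * h * w // 2, dtype=np.uint8).reshape(c, h, w // 2)
tile = unpack_int4(packed_chw)           # (C, H, W) of int4 values
hwc = np.transpose(tile, (1, 2, 0))      # CHW -> HWC reordering
repacked = pack_int4(hwc)                # back to a sub-byte memory layout
print(repacked.shape)                    # (H, W, C//2)
```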
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Rahafrouz, Amir. „Distributed Orchestration Framework for Fog Computing“. Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-77118.

Der volle Inhalt der Quelle
Annotation:
The rise of IoT-based systems is making an impact on our daily lives and environment. Fog computing is a paradigm that processes IoT data at the first hop of the access network instead of in distant clouds, and it promises a range of new applications. A mature framework for fog computing is, however, still lacking. In this study, we propose an approach for monitoring fog nodes in a distributed system using the FogFlow framework. We extend the functionality of FogFlow by adding monitoring of Docker containers using cAdvisor, and we use Prometheus to collect and aggregate the distributed data. The monitoring data of the entire distributed system of fog nodes is accessed via an API from Prometheus. Furthermore, the monitoring data is used to rank fog nodes and thereby choose where to place serverless functions (Fog Functions). The ranking mechanism uses the Analytic Hierarchy Process (AHP) to place a fog function according to the resource utilization and saturation of the fog nodes' hardware. Finally, an experimental test-bed is set up with an image-processing application that detects faces, and the effect of our ranking approach on the Quality of Service is measured and compared to stock FogFlow.
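For the AHP step, a generic sketch of deriving criterion weights from a pairwise-comparison matrix (principal eigenvector) and ranking two nodes follows; the comparison values and metrics are invented, not those used in the thesis.

```python
# AHP: criterion weights from a pairwise-comparison matrix, then node ranking.
import numpy as np

# Relative importance of criteria: [CPU utilization, memory, I/O saturation]
pairwise = np.array([[1.0, 3.0, 5.0],
                     [1/3, 1.0, 3.0],
                     [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(pairwise)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()          # normalized AHP weights

# Lower utilization/saturation is better, so score by (1 - metric).
nodes = {"fog-1": [0.30, 0.50, 0.10],          # metrics as Prometheus might report
         "fog-2": [0.80, 0.40, 0.60]}
scores = {n: float(np.dot(weights, 1 - np.array(m))) for n, m in nodes.items()}
print(max(scores, key=scores.get))             # node chosen for the fog function
```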
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Sinigaglia, Mattia. „Progettazione ed implementazione di un Sistema On Chip per applicazioni audio“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23790/.

Der volle Inhalt der Quelle
Annotation:
The aim of the project was to contribute to the realization of a microcontroller designed for very low-power audio applications. The microcontroller integrates an FFT accelerator that performs the Fourier transform on several audio signals acquired by the I2S peripheral, which is dedicated to communicating with digital audio interfaces. Specifically, the DSP protocol with TDM was implemented in the I2S peripheral to allow multiple devices to share the same data line. The result is the ability to communicate simultaneously with 16 input and output devices while providing the processing performed by the FFT accelerator on the acquired data. The microcontroller, based on PULP, is named Echoes as a tribute to Pink Floyd, since it targets audio applications, and it features a large set of peripherals for communicating with the outside world. The thesis is divided into two parts: the first introduces edge processing and digital audio protocols; the second describes the integration of the new DSP protocol into the I2S peripheral in PULP, and the design and physical implementation of the Echoes chip.
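As a functional sketch of the pipeline (TDM de-interleaving followed by a per-channel FFT), here is a NumPy version; the 16-slot frame layout follows the common DSP/TDM convention, and exact slot widths in Echoes may differ.

```python
# De-interleave a 16-slot TDM stream into channels and FFT each channel.
import numpy as np

N_SLOTS, N_FRAMES = 16, 256
# Simulated TDM stream: N_FRAMES frames of N_SLOTS interleaved samples.
stream = np.random.randn(N_FRAMES * N_SLOTS).astype(np.float32)

channels = stream.reshape(N_FRAMES, N_SLOTS).T   # (16, 256): slot i -> channel i
spectra = np.fft.rfft(channels, axis=1)          # per-channel FFT
print(spectra.shape)                             # (16, 129) frequency bins
```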
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Busacca, Fabio Antonino. „AI for Resource Allocation and Resource Allocation for AI: a two-fold paradigm at the network edge“. Doctoral thesis, Università degli Studi di Palermo, 2022. https://hdl.handle.net/10447/573371.

Der volle Inhalt der Quelle
Annotation:
5G-and-beyond and Internet of Things (IoT) technologies are pushing a shift from the classic cloud-centric view of the network to a new edge-centric vision. In this perspective, computation, communication, and storage resources are moved closer to the user, to the benefit of network responsiveness/latency and of improved context-awareness, that is, the ability to tailor network services to the live user experience. However, these improvements do not come for free: edge networks are highly constrained and do not match the resource abundance of their cloud counterparts. The proper management of the few available resources is therefore of crucial importance to improve network performance in terms of responsiveness, throughput, and power consumption. Networks in the so-called Age of Big Data, however, result from the dynamic interactions of massive numbers of heterogeneous devices. As a consequence, traditional model-based resource allocation algorithms fail to cope with these dynamic and complex networks and are being replaced by more flexible AI-based techniques. In this way, it is possible to design intelligent resource allocation frameworks, able to quickly adapt to the ever-changing dynamics of the network edge and to best exploit the few available resources. Hence, Artificial Intelligence (AI), and more specifically Machine Learning (ML) techniques, can clearly play a fundamental role in boosting and supporting resource allocation at the edge. But can AI/ML benefit from optimal resource allocation? Recently, the evolution towards Distributed and Federated Learning approaches, where the learning process takes place in parallel at several devices, has brought important advantages in terms of the computational load of ML algorithms, the amount of information transmitted by network nodes, and privacy. However, the scarcity of energy, processing, and possibly communication resources at the edge, especially in the IoT case, calls for proper resource management frameworks: the available resources should be assigned so as to reduce the learning time while also keeping an eye on the energy consumption of the network nodes. From this perspective, a two-fold paradigm emerges at the network edge, where AI can boost the performance of resource allocation and, vice versa, optimal resource allocation techniques can speed up the learning process of AI algorithms. Part I of this thesis explores the first topic, the use of AI to support resource allocation at the edge, with a specific focus on two use cases: UAV-assisted cellular networks and vehicular networks. Part II deals with resource allocation for AI, specifically the integration of Federated Learning techniques with the LoRa LPWAN protocol. The designed integration framework has been validated both in simulation and, most importantly, on the Colosseum platform, the world's largest channel emulator.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Kheffache, Mansour. „Energy-Efficient Detection of Atrial Fibrillation in the Context of Resource-Restrained Devices“. Thesis, Luleå tekniska universitet, Datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-76394.

Der volle Inhalt der Quelle
Annotation:
eHealth is an emerging practice at the intersection of the ICT and healthcare fields, in which computing and communication technology is used to improve traditional healthcare processes or to create new opportunities for better health services; eHealth can be considered under the umbrella of the Internet of Things. A common practice in eHealth is the use of machine learning for computer-aided diagnosis, where an algorithm is fed a biomedical signal and provides a diagnosis, in the same way a trained radiologist would. This work considers the task of Atrial Fibrillation detection and proposes a range of algorithms designed for energy efficiency. Based on our working hypothesis that computationally simple operations and low-precision data types are key to energy efficiency, we evaluate various algorithms in the context of resource-restrained, health-monitoring wearable devices. Finally, we assess the sustainability dimension of the proposed solution.
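A toy illustration of the low-precision hypothesis, using generic symmetric int8 quantization of a heart-beat interval series (not the thesis's algorithms; the variability score is a simplistic stand-in for a real AF feature):

```python
# Quantize RR intervals to int8 and compute an integer-only variability score.
import numpy as np

def quantize_int8(x):
    scale = float(np.max(np.abs(x))) / 127.0
    scale = scale if scale > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rr_intervals = np.array([0.81, 0.79, 1.12, 0.64, 0.98])  # toy RR series (s)
q, scale = quantize_int8(rr_intervals)
# High beat-to-beat variability is one hint of AF; diff in int16 avoids overflow.
score = int(np.sum(np.abs(np.diff(q.astype(np.int16)))))
print(q, scale, score)
```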
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Djemai, Ibrahim. „Joint offloading-scheduling policies for future generation wireless networks“. Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAS007.

Der volle Inhalt der Quelle
Annotation:
The challenges posed by the increasing number of connected devices, high energy consumption, and environmental impact in today's and future wireless networks are gaining more attention. New technologies like Mobile Edge Computing (MEC) have emerged to bring cloud services closer to devices and address their computation limitations. Equipping these devices and the network nodes with Energy Harvesting (EH) capabilities is also a promising way to draw energy from sustainable and environmentally friendly sources. In addition, Non-Orthogonal Multiple Access (NOMA) is a pivotal technique for achieving enhanced mobile broadband. Aided by advances in Artificial Intelligence, especially Reinforcement Learning (RL) models, the thesis work revolves around devising policies that jointly optimize scheduling and computational offloading for devices with EH capabilities, NOMA-enabled communications, and MEC access. Moreover, as the number of devices and the system complexity grow, NOMA clustering is performed and Federated Learning is used to produce RL policies in a distributed way. The thesis results validate the performance of the proposed RL-based policies, as well as the benefit of using the NOMA technique.
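As a toy illustration of such a joint policy (not the thesis's model), the tabular Q-learning agent below chooses between idling, local processing, and offloading under a harvested-energy budget; the states, rewards, and dynamics are invented simplifications.

```python
# Tabular Q-learning over (battery, queue) states for an offloading decision.
import random

ACTIONS = ["idle", "local", "offload"]
Q = {}                                   # (battery, queue) -> action values
EPS, ALPHA, GAMMA = 0.1, 0.3, 0.9

def q(s):
    return Q.setdefault(s, [0.0, 0.0, 0.0])

def step(s, a):
    battery, queue = s
    if a == "local" and battery >= 2:
        battery, queue, reward = battery - 2, max(queue - 1, 0), 1.0
    elif a == "offload" and battery >= 1:
        battery, queue, reward = battery - 1, max(queue - 1, 0), 0.8  # tx cheaper than cpu
    else:
        reward = -0.1 * queue                              # holding cost while idle
    battery = min(battery + random.randint(0, 1), 5)       # energy harvesting
    queue = min(queue + random.randint(0, 1), 5)           # task arrivals
    return (battery, queue), reward

s = (3, 0)
for _ in range(20000):
    a = random.randrange(3) if random.random() < EPS \
        else max(range(3), key=lambda i: q(s)[i])          # epsilon-greedy
    s2, r = step(s, ACTIONS[a])
    q(s)[a] += ALPHA * (r + GAMMA * max(q(s2)) - q(s)[a])  # Q-learning update
    s = s2
print(ACTIONS[max(range(3), key=lambda i: q((5, 3))[i])])  # greedy action in one state
```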
APA, Harvard, Vancouver, ISO und andere Zitierweisen