Theses on the topic "Edge artificial intelligence"

Follow this link to see other types of publications on the topic: Edge artificial intelligence.

Consult the 29 best theses for your research on the topic "Edge artificial intelligence".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Antonini, Mattia. "From Edge Computing to Edge Intelligence: exploring novel design approaches to intelligent IoT applications". Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/308630.

Full text
Abstract
The Internet of Things (IoT) has deeply changed how we interact with our world. Today, smart homes, self-driving cars, connected industries, and wearables are just a few mainstream applications where IoT plays the role of enabling technology. When IoT became popular, Cloud Computing was already a mature technology able to deliver the computing resources necessary to execute heavy tasks (e.g., data analytics, storage, AI tasks, etc.) on data coming from IoT devices, so practitioners started to design and implement their applications around this approach. However, after a hype that lasted for a few years, cloud-centric approaches have started showing some of their main limitations when dealing with the connectivity of many devices to remote endpoints, such as high latency, bandwidth usage, big data volumes, reliability, privacy, and so on. At the same time, a few new distributed computing paradigms emerged and gained attention. Among them, Edge Computing shifts the execution of applications to the edge of the network (a partition of the network physically close to data sources) and provides improvements over the Cloud Computing paradigm. Its success has been fostered by new powerful embedded computing devices able to satisfy the ever-increasing computing requirements of many IoT applications. Given this context, how can next-generation IoT applications take advantage of the opportunity offered by Edge Computing to shift the processing from the cloud toward the data sources and exploit ever-more-powerful devices? This thesis provides the ingredients and the guidelines for practitioners to foster the migration from cloud-centric to novel distributed design approaches for IoT applications at the edge of the network, addressing the issues of the original approach. This requires designing the processing pipeline of applications by considering the system requirements and the constraints imposed by embedded devices. To make this process smoother, the transition is split into different steps, starting with the offloading of the processing (including the Artificial Intelligence algorithms) to the edge of the network, then the distribution of computation across multiple edge devices and even closer to data sources based on system constraints, and, finally, the optimization of the processing pipeline and AI models to run efficiently on target IoT edge devices. Each step has been validated by delivering a real-world IoT application that fully exploits the novel approach. This paradigm shift leads the way toward the design of Edge Intelligence IoT applications that efficiently and reliably execute Artificial Intelligence models at the edge of the network.
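The first migration step described above, offloading processing from the cloud to the edge, is commonly gated on the edge model's prediction confidence. A minimal Python sketch of that decision; the threshold and the stand-in models are illustrative, not taken from the thesis:

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.80  # illustrative value, tuned per application

def softmax(logits):
    """Numerically stable softmax over a 1-D array of logits."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def classify_with_offload(sample, edge_model, cloud_model):
    """Run the lightweight edge model first; fall back to the remote
    cloud model only when the edge prediction is not confident enough."""
    probs = softmax(edge_model(sample))
    if probs.max() >= CONFIDENCE_THRESHOLD:
        return int(probs.argmax()), "edge"
    return int(cloud_model(sample).argmax()), "cloud"

# Toy usage: stand-in "models" that return logits for 3 classes.
edge = lambda x: np.array([2.5, 0.1, 0.2])
cloud = lambda x: np.array([0.1, 4.0, 0.3])
print(classify_with_offload(None, edge, cloud))  # -> (0, 'edge')
```
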
2

Abernot, Madeleine. "Digital oscillatory neural network implementation on FPGA for edge artificial intelligence applications and learning". Electronic Thesis or Diss., Université de Montpellier (2022-....), 2023. http://www.theses.fr/2023UMONS074.

Full text
Abstract
In the last decades, the multiplication of edge devices in many industry domains has drastically increased the amount of data to treat and the complexity of tasks to solve, motivating the emergence of probabilistic machine learning algorithms with artificial intelligence (AI) and artificial neural networks (ANNs). However, classical edge hardware systems based on the von Neumann architecture cannot efficiently handle this large amount of data. Thus, novel neuromorphic computing paradigms with distributed memory are explored, mimicking the structure and data representation of biological neural networks. Lately, most neuromorphic research has focused on spiking neural networks (SNNs), taking inspiration from signal transmission through spikes in biological networks. In SNNs, information is transmitted through spikes using the time domain to provide natural and low-energy continuous data computation. Recently, oscillatory neural networks (ONNs) have appeared as an alternative neuromorphic paradigm for low-power, fast, and efficient time-domain computation. ONNs are networks of coupled oscillators emulating the collective computational properties of brain areas through oscillations. Recent ONN implementations, combined with the emergence of low-power compact devices for ONNs, encourage renewed attention to ONNs for edge computing. The state-of-the-art ONN is configured as an oscillatory Hopfield network (OHN) with fully coupled recurrent connections to perform pattern recognition with limited accuracy. However, the large number of OHN synapses limits the scalability of ONN implementations and the ONN application scope. The focus of this thesis is to study if and how ONNs can solve meaningful edge AI applications, using a proof of concept of the ONN paradigm with a digital implementation on FPGA. First, it explores novel learning algorithms for OHNs, unsupervised and supervised, to improve accuracy and to provide continual on-chip learning. Then, it studies novel ONN architectures, taking inspiration from state-of-the-art layered ANN models, to create cascaded OHNs and multi-layer ONNs. The novel learning algorithms and architectures are demonstrated with the digital design performing edge AI applications, from image processing with pattern recognition, image edge detection, feature extraction, and image classification, to robotics applications with obstacle avoidance.
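In its digital, binary-phase form, an oscillatory Hopfield network computes much like a classic Hopfield associative memory with phases 0/π encoded as ±1. A minimal numpy sketch of Hebbian learning and recall under that simplification; the thesis's FPGA design, phase dynamics, and newer learning rules are not reproduced:

```python
import numpy as np

def hebbian_weights(patterns):
    """Hopfield/Hebbian learning: sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W / n

def recall(W, state, steps=10):
    """Synchronous sign updates; phases 0/pi are encoded as +1/-1."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = hebbian_weights(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])   # last element flipped
print(recall(W, noisy))                   # converges back to the first pattern
```
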
3

Hasanaj, Enis, Albert Aveler, and William Söder. "Cooperative edge deepfake detection". Thesis, Jönköping University, JTH, Avdelningen för datateknik och informatik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-53790.

Full text
Abstract
Deepfakes are an emerging problem in social media, and for celebrities and political figures it can be devastating to their reputation if the technology ends up in the wrong hands. Creating deepfakes is becoming increasingly easy. Attempts have been made at detecting whether a face in an image is real or not, but training these machine learning models can be a very time-consuming process. This research proposes a solution for training deepfake detection models cooperatively on the edge. This is done in order to evaluate whether the training process, among other things, can be made more efficient with this approach. The feasibility of edge training is evaluated by training machine learning models on several different types of iPhone devices. The models are trained using the YOLOv2 object detection system. To test whether the YOLOv2 object detection system is able to distinguish between real and fake human faces in images, several models are trained on a computer. Each model is trained with either a different number of iterations or a different subset of the data, since these factors have been identified as important to the performance of the models. The performance of the models is evaluated by measuring the accuracy in detecting deepfakes. Additionally, the deepfake detection models trained on a computer are ensembled using the bagging ensemble method. This is done in order to evaluate the feasibility of cooperatively training a deepfake detection model by combining several models. Results show that the proposed solution is not feasible due to the time the training process takes on each mobile device. Additionally, each trained model is about 200 MB, and the size of the ensemble model grows linearly with each model added to the ensemble. This can cause the ensemble model to grow to several hundred gigabytes in size.
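The bagging step is standard: train members on bootstrap resamples and combine their votes. A small numpy sketch assuming binary real/fake predictions (all names are illustrative):

```python
import numpy as np

def bagging_train(train_fn, X, y, n_models=5, seed=0):
    """Train n models on bootstrap resamples of the data (bagging)."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
        models.append(train_fn(X[idx], y[idx]))
    return models

def bagging_predict(models, x):
    """Majority vote over the binary real/fake predictions of all members."""
    votes = np.array([m(x) for m in models])        # each m returns 0 or 1
    return int(votes.sum() * 2 >= len(votes))

# Toy members: each "model" just predicts the majority label of its resample.
X = np.arange(100).reshape(-1, 1)
y = (np.arange(100) % 3 == 0).astype(int)
train_fn = lambda Xb, yb: (lambda x, m=int(round(yb.mean())): m)
models = bagging_train(train_fn, X, y)
print(bagging_predict(models, X[0]))
```

Since each member weighs about 200 MB per the abstract, the ensemble grows linearly with the number of members, which is the infeasibility the results point to.
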
4

WoldeMichael, Helina Getachew. "Deployment of AI Model inside Docker on ARM-Cortex-based Single-Board Computer : Technologies, Capabilities, and Performance". Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17267.

Full text
Abstract
IoT has become tremendously popular. It provides information access, processing, and connectivity for a huge number of devices or sensors. IoT systems, however, often do not process the information locally, but rather send it to remote locations in the Cloud. As a result, they add a huge amount of data traffic to the network and additional delay to data processing. The latter can have a significant impact on applications that require fast response times, such as sophisticated artificial intelligence (AI) applications including augmented reality, face recognition, and object detection. Consequently, the edge computing paradigm, which enables computation of data near its source, has gained significant importance for achieving fast response times in recent years. IoT devices can be employed to provide computational resources at the edge of the network, near the sensors and actuators. The aim of this thesis work is to design and implement an edge computing concept that brings AI models to a small embedded IoT device through virtualization. The use of virtualization technology enables the easy packing and shipping of applications to different hardware platforms. Additionally, it enables the mobility of AI models between edge devices and the Cloud. We implement an AI model inside a Docker container, which is deployed on a Firefly-RK3399 single-board computer (SBC). Furthermore, we conduct CPU and memory performance evaluations of Docker on the Firefly-RK3399. The methodology adopted to reach our goal is experimental research. First, the relevant literature was studied, and the feasibility of our concept was demonstrated by implementation. We then set up an experiment that covers the measurement of performance metrics by applying a synthetic load in multiple scenarios. Results are validated by repeating the experiment and by statistical analysis. The results of this study show that an AI model can successfully be deployed and executed inside a Docker container on an ARM-Cortex-based single-board computer. A Docker image of the OpenFace face recognition model was built for the ARM architecture of the Firefly SBC. On the other hand, the performance evaluation reveals that the performance overhead of Docker in terms of CPU and memory is negligible. The research work describes the mechanisms by which an AI application can be containerized on the ARM architecture. We conclude that the methods can be applied to containerize software applications on ARM-based IoT devices. Furthermore, the insignificant overhead introduced by Docker facilitates the deployment of applications inside containers with little performance overhead. The functionality of the IoT device, i.e., the Firefly-RK3399, is exploited in this thesis. It is shown that the device is capable and powerful, and this gives an insight for further studies.
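A sketch of the kind of synthetic-load measurement described: run the same CPU-bound script natively and inside the container, then compare the reported times. The workload is our stand-in, not the thesis's benchmark suite:

```python
import time
import numpy as np

def synthetic_load(size=512, repeats=20):
    """CPU-bound matrix multiplications as a stand-in synthetic load."""
    rng = np.random.default_rng(42)
    a = rng.standard_normal((size, size))
    b = rng.standard_normal((size, size))
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    return time.perf_counter() - start

if __name__ == "__main__":
    # Run once on the host and once inside a container, e.g.:
    #   docker run --rm -v "$PWD":/app python:3 python /app/bench.py
    # then compare the two times to estimate Docker's CPU overhead.
    print(f"elapsed: {synthetic_load():.3f} s")
```
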
5

PELUSO, VALENTINO. "Optimization Tools for ConvNets on the Edge". Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2845792.

Full text
6

Laroui, Mohammed. "Distributed edge computing for enhanced IoT devices and new generation network efficiency". Electronic Thesis or Diss., Université Paris Cité, 2022. http://www.theses.fr/2022UNIP7078.

Full text
Abstract
Traditional cloud infrastructure will face a series of challenges due to the centralization of computing, storage, and networking in a small number of data centers, and the long distance between connected devices and remote data centers. To meet this challenge, edge computing seems to be a promising possibility that provides resources closer to IoT devices. In the cloud computing model, compute resources and services are often centralized in large data centers that end users access over the network. This model has important economic value and more efficient resource-sharing capabilities. New forms of end-user experience such as the Internet of Things require computing resources near the end-user devices at the network edge. To meet this need, edge computing relies on a model in which computing resources are distributed to the edge of a network as needed, while decentralizing data processing from the cloud to the edge as far as possible. Thus, it is possible to quickly obtain actionable information based on data that varies over time. In this thesis, we propose novel optimization models to optimize resource utilization at the network edge for two edge computing research directions: service offloading and vehicular edge computing. We study different use cases in each research direction. For the optimal solutions, first, for service offloading, we propose optimal algorithms for service placement at the network edge (tasks, Virtual Network Functions (VNFs), Service Function Chains (SFCs)), taking the computing resource constraints into account. Moreover, for vehicular edge computing, we propose exact models for maximizing the coverage of vehicles by both taxis and Unmanned Aerial Vehicles (UAVs) for online video streaming applications. In addition, we propose optimal edge-autopilot VNF offloading at the network edge for autonomous driving. The evaluation results show the efficiency of the proposed algorithms in small-scale networks in terms of time, cost, and resource utilization. To deal with dense networks with a high number of devices and scalability issues, we propose large-scale algorithms that support a huge number of devices, data, and user requests. Heuristic algorithms are proposed for SFC orchestration and for maximum coverage by mobile edge servers (vehicles). Moreover, artificial intelligence algorithms (machine learning, deep learning, and deep reinforcement learning) are used for 5G VNF slice placement, edge-autopilot VNF placement, and autonomous UAV navigation. The numerical results are good compared with the exact algorithms, with high time efficiency.
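For orientation, here is a toy greedy heuristic for the service-placement problem formulated above; the thesis's exact and heuristic models are more elaborate, and every name and capacity below is illustrative:

```python
def greedy_placement(services, servers):
    """Greedily place each service on the feasible edge server with the
    most remaining CPU capacity (a heuristic, not the thesis's exact model).

    services: list of (name, cpu_demand); servers: dict name -> capacity.
    """
    remaining = dict(servers)
    placement = {}
    # Place the most demanding services first.
    for name, demand in sorted(services, key=lambda s: -s[1]):
        best = max(remaining, key=remaining.get)
        if remaining[best] < demand:
            placement[name] = "cloud"       # no edge capacity left: offload
        else:
            remaining[best] -= demand
            placement[name] = best
    return placement

print(greedy_placement([("vnf-a", 4), ("sfc-b", 2), ("task-c", 3)],
                       {"edge-1": 5, "edge-2": 4}))
# -> {'vnf-a': 'edge-1', 'task-c': 'edge-2', 'sfc-b': 'cloud'}
```
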
7

MAZZIA, VITTORIO. "Machine Learning Algorithms and their Embedded Implementation for Service Robotics Applications". Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2968456.

Full text
8

Labouré, Iooss Marie-José. "Faisabilité d'une carte électronique d'opérateurs de seuillage : déformation d'objets plans lors de transformations de type morphologique". Saint-Etienne, 1987. http://www.theses.fr/1987STET4014.

Full text
Abstract
A study of image segmentation, and more particularly of image thresholding, together with the classification of planar shapes. Numerous algorithms are presented, most of them based on knowledge of the grey-level histogram. An electronic thresholding board has been developed. Original edge-detection methods are also described. In a second part, a study of the deformation of planar objects under successive dilations is presented.
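As a pointer to the histogram-based thresholding family this thesis works in, here is Otsu's classic between-class-variance criterion; it is a representative method of that family, not necessarily one developed in the thesis:

```python
import numpy as np

def otsu_threshold(image):
    """Pick the grey level that maximises between-class variance,
    computed from the grey-level histogram (Otsu's method)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Toy bimodal image: dark background plus bright objects.
img = np.concatenate([np.random.randint(0, 60, 500),
                      np.random.randint(180, 255, 500)]).reshape(20, 50)
t = otsu_threshold(img)
binary = img >= t   # segmented image
print(t)
```
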
9

Busacca, Fabio Antonino. "AI for Resource Allocation and Resource Allocation for AI: a two-fold paradigm at the network edge". Doctoral thesis, Università degli Studi di Palermo, 2022. https://hdl.handle.net/10447/573371.

Full text
Abstract
5G-and-beyond and Internet of Things (IoT) technologies are pushing a shift from the classic cloud-centric view of the network to a new edge-centric vision. In such a perspective, the computation, communication, and storage resources are moved closer to the user, to the benefit of network responsiveness/latency and of improved context-awareness, that is, the ability to tailor the network services to the live user experience. However, these improvements do not come for free: edge networks are highly constrained and do not match the resource abundance of their cloud counterparts. In such a perspective, proper management of the few available resources is of crucial importance to improve the network performance in terms of responsiveness, throughput, and power consumption. However, networks in the so-called Age of Big Data result from the dynamic interactions of massive amounts of heterogeneous devices. As a consequence, traditional model-based resource allocation algorithms fail to cope with these dynamic and complex networks and are being replaced by more flexible AI-based techniques. In such a way, it is possible to design intelligent resource allocation frameworks, able to quickly adapt to the ever-changing dynamics of the network edge and to best exploit the few available resources. Hence, Artificial Intelligence (AI), and more specifically Machine Learning (ML) techniques, can clearly play a fundamental role in boosting and supporting resource allocation techniques at the edge. But can AI/ML benefit from optimal resource allocation? Recently, the evolution towards Distributed and Federated Learning approaches, i.e., where the learning process takes place in parallel at several devices, has brought important advantages in terms of reducing the computational load of ML algorithms, the amount of information transmitted by the network nodes, and privacy exposure. However, the scarcity of energy, processing, and possibly communication resources at the edge, especially in the IoT case, calls for proper resource management frameworks. In such a view, the available resources should be assigned to reduce the learning time, while also keeping an eye on the energy consumption of the network nodes. According to this perspective, a two-fold paradigm emerges at the network edge, where AI can boost the performance of resource allocation and, vice versa, optimal resource allocation techniques can speed up the learning process of AI algorithms. Part I of this thesis explores the first topic, i.e., the usage of AI to support resource allocation at the edge, with a specific focus on two use cases, namely UAV-assisted cellular networks and vehicular networks. Part II deals instead with the topic of resource allocation for AI and, specifically, with the integration of Federated Learning techniques with the LoRa LPWAN protocol. The designed integration framework has been validated both in simulation environments and, most importantly, on the Colosseum platform, the world's largest channel emulator.
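Part II's Federated Learning integration rests on a server-side aggregation step such as federated averaging. A minimal numpy sketch of standard FedAvg; the thesis's LoRa-specific framework is not shown:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: aggregate per-layer client weights, weighting
    each client by its number of local training samples."""
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Toy round: three clients, each model being a list of numpy arrays.
clients = [[np.full((2, 2), 1.0)], [np.full((2, 2), 2.0)], [np.full((2, 2), 4.0)]]
sizes = [10, 30, 60]
print(fedavg(clients, sizes)[0])  # weighted mean: 0.1*1 + 0.3*2 + 0.6*4 = 3.1
```
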
10

Hachouf, Fella. "Télédétection des contours linéaires". Rouen, 1988. http://www.theses.fr/1988ROUES027.

Full text
Abstract
A study of the edge-detection problem as it arises, for example, in the automatic detection of road or river networks in images derived from aerial photographs. The basic principles of image processing are reviewed, and the different filtering methods are then discussed. The proposed solution consists in the homological skeletonization of the gradient-magnitude image, which allows the structures to be recognized to be extracted by transforming the skeleton into a graph.
11

Delort, François. "Système d'approximation polygonale des contours pour application en vision industrielle". Clermont-Ferrand 2, 1988. http://www.theses.fr/1988CLF2D209.

Full text
Abstract
Two algorithms are presented. One of them has been studied in detail, and the expected performance led to the construction of an evaluation machine. Building this machine required a deeper study of the associated system aspects; in particular, the hardware/software interactions and the real-time aspects are detailed.
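For orientation, the textbook polygonal-approximation recursion is Douglas-Peucker, sketched below; we do not claim it is one of the two algorithms this thesis presents:

```python
import numpy as np

def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    if np.allclose(a, b):
        return float(np.linalg.norm(p - a))
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    return abs(cross) / float(np.linalg.norm(b - a))

def douglas_peucker(points, eps):
    """Recursive polygonal approximation: keep the farthest point if it
    deviates more than eps from the chord, then recurse on both halves."""
    pts = [np.asarray(p, dtype=float) for p in points]
    dists = [point_line_dist(p, pts[0], pts[-1]) for p in pts[1:-1]]
    if not dists or max(dists) <= eps:
        return [pts[0], pts[-1]]
    i = int(np.argmax(dists)) + 1
    left = douglas_peucker(pts[: i + 1], eps)
    right = douglas_peucker(pts[i:], eps)
    return left[:-1] + right           # drop the duplicated split point

contour = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print([tuple(p) for p in douglas_peucker(contour, eps=0.5)])
```
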
12

FERDINANDI, Marco. "A Learning Sensors Platform for Health and Safety Applications". Doctoral thesis, Università degli studi di Cassino, 2020. http://hdl.handle.net/11580/74754.

Full text
Abstract
Nowadays, human and environmental health and safety are increasingly threatened by human activities. Industrial processes or, more generally, air and water contamination are worsening living conditions on the planet. Increasing people's awareness and improving the regulatory system play a key role in facing and tackling this global phenomenon. To this aim, reliable and low-cost technologies for pervasive and ubiquitous environmental monitoring are really needed, especially for developing and poorer countries. This Ph.D. thesis has focused on the development of the SENSIPLUS Embedded System for health and safety applications. It has been designed with low cost, miniaturization, and low power as the main requirements throughout the development and experimental phases. More in detail, it is designed according to the Internet of Things and Edge Computing paradigms, integrating sensing, elaboration, and communication capabilities. Sensing is mainly based on the SENSIPLUS chip, a micro-analytical tool integrating heterogeneous sensor typologies. As for elaboration and communication, embedded software based on statistical and artificial intelligence solutions is adopted for data analysis, and technologies such as Wi-Fi, USB, and Bluetooth Low Energy have been integrated to transmit processing results. The embedded software has been tested on low-resource microcontroller units such as the ESP32, STM32, and CC2541, manufactured by Espressif, STMicroelectronics, and Texas Instruments, respectively. Three different applications have been addressed in this thesis: state-of-health monitoring of activated carbon filters and biofilters; contaminant detection and recognition in air; and contaminant detection and recognition in water. Both the hardware and software components have been developed and customized for these three applications, and experimental activities in real scenarios have been conducted to test and validate the proposed solutions. Positive results have been obtained, validating the developed technology for the addressed applications. The activities carried out for this thesis have different European research projects (Horizon 2020 and European Defence Agency) as background and reference. Furthermore, multiple collaborations with public and private research centers have characterized the design, development, and experimental activities.
13

Lanzarone, Lorenzo Biagio. "Manutenzione predittiva di macchinari industriali tramite tecniche di intelligenza artificiale: una valutazione sperimentale". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22853/.

Full text
Abstract
Society is undergoing a process of technological evolution that connects the physical and digital environments to exchange data and information. In the context of Industry 4.0, this thesis investigates the predictive maintenance of industrial machinery using artificial intelligence techniques, in order to predict an imminent failure in advance, identifying it before it can even occur. The thesis is divided into two complementary parts: the first covers the theoretical aspects of the context and the state of the art, while the second covers the practical and design aspects. In particular, the first part provides an overview of Industry 4.0 and one of its applications, predictive maintenance; it then addresses the topics of artificial intelligence and data science, through which predictive maintenance can be applied. The second part presents a practical project, namely the work I carried out during an internship at the software house Open Data in Funo di Argelato (Bologna). The goal of the project was to build a predictive-maintenance system for plastic injection moulding machinery using artificial intelligence techniques, with the ultimate aim of integrating this system into the Opera MES software developed by the company.
14

Longo, Eugenio. "AI e IoT: contesto e stato dell’arte". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Search for full text
Abstract
Artificial intelligence is a branch of computer science that makes it possible to program and design systems capable of endowing machines with characteristics typically considered human. The Internet of Things is a network of physical objects able to connect and exchange data with other devices via the internet. Although the Internet of Things and artificial intelligence are two different concepts, they can be integrated to create new solutions with high potential. The combined use of these two technologies increases the value of both, as it enables the collection of data and the construction of predictive models. The goal of this thesis is to analyse the evolution of artificial intelligence and of technological innovation in everyday life. Specifically, artificial intelligence applied to the Internet of Things has taken hold in the management of large settings such as smart cities and smart mobility, and of small settings such as smart homes, putting a large amount of private data on the network. However, problems remain: the level of security needed to use these technologies in more critical applications has not yet been reached. The greatest challenge in the world of work will be to understand and exploit the potential that the new paradigm in the use of artificial intelligence will suggest.
15

Desai, Ujjaval Y., Marcelo M. Mizuki, Ichiro Masaki, and Berthold K. P. Horn. "Edge and Mean Based Image Compression". 1996. http://hdl.handle.net/1721.1/5943.

Full text
Abstract
In this paper, we present a static image compression algorithm for very low bit rate applications. The algorithm reduces spatial redundancy present in images by extracting and encoding edge and mean information. Since the human visual system is highly sensitive to edges, an edge-based compression scheme can produce intelligible images at high compression ratios. We present good quality results for facial as well as textured, 256 x 256 color images at 0.1 to 0.3 bpp. The algorithm described in this paper was designed for high performance, keeping hardware implementation issues in mind. In the next phase of the project, which is currently underway, this algorithm will be implemented in hardware, and new edge-based color image sequence compression algorithms will be developed to achieve compression ratios of over 100, i.e., less than 0.12 bpp from 12 bpp. Potential applications include low power, portable video telephones.
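A sketch of the two signal components such an edge-and-mean codec extracts before encoding: per-block means plus a binary edge map. Block size and the Sobel threshold are illustrative; the paper's actual encoder and bit allocation are not reproduced:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel filters (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = img[i:h - 2 + i, j:w - 2 + j]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def encode(img, block=8, edge_thresh=100.0):
    """Keep one mean per block plus the binary edge map: the two
    components an edge-and-mean codec would go on to entropy-code."""
    h, w = img.shape
    means = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    edges = sobel_magnitude(img.astype(float)) > edge_thresh
    return means, edges

img = np.random.randint(0, 256, (64, 64))
means, edges = encode(img)
print(means.shape, edges.mean())  # (8, 8) block means; fraction of edge pixels
```
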
16

So, Austin G. (蘇偉賢). "A Hierarchical Approach for Efficient Workload Allocation for Edge Artificial Intelligence". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/5zcfg6.

Full text
Abstract
Master's thesis
National Tsing Hua University
Department of Computer Science
Academic year 107 (2018/19)
A critical constraint in edge artificial intelligence (AI) is its limited computing power. Because of this, reliance on edge AI results in an inevitable accuracy trade-off. One way to increase the overall accuracy is to introduce a workload allocation scheme that assigns input data requiring complex computations to a server AI while retaining simple ones at the edge AI. In order to achieve this, we utilize an authentic operation (AO), which assesses the prediction confidence of the edge AI. Our research builds on a previous work that uses fine-grained pair-wise thresholding. In this work, we propose coarse-grained cluster-wise hierarchical thresholding. Moreover, the mean squared error (MSE) is used to regularize the edge AI's predictions based on the obtained threshold data. We further modify the existing AO block by adding a second-level criterion, which serves as a validation layer with the aim of further reducing the transmission count. Our methodology reduces the threshold values by 90% for a 10-class dataset and reduces data transmission by 15.20% while retaining the overall accuracy.
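A minimal sketch of cluster-wise confidence thresholding as the abstract describes it: classes share a per-cluster threshold, and low-confidence inputs are transmitted to the server AI. The cluster assignments and thresholds below are invented for illustration, and the second-level validation criterion is omitted:

```python
import numpy as np

# Illustrative: 10 classes grouped into 3 clusters, each cluster sharing
# one confidence threshold (coarse-grained, instead of pair-wise values).
CLUSTER_OF_CLASS = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1, 6: 2, 7: 2, 8: 2, 9: 2}
CLUSTER_THRESHOLD = {0: 0.75, 1: 0.85, 2: 0.80}

def authentic_operation(probs):
    """Keep the edge prediction if its confidence clears the threshold of
    the predicted class's cluster; otherwise transmit to the server AI."""
    pred = int(np.argmax(probs))
    threshold = CLUSTER_THRESHOLD[CLUSTER_OF_CLASS[pred]]
    return pred if probs[pred] >= threshold else "send-to-server"

print(authentic_operation(np.array([0.05, 0.9, *([0.05 / 8] * 8)])))  # -> 1
print(authentic_operation(np.full(10, 0.1)))  # low confidence -> offload
```
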
17

Wong, Jun Hua. "Efficient Edge Intelligence in the Era of Big Data". Thesis, 2021. http://hdl.handle.net/1805/26385.

Full text
Abstract
Indiana University-Purdue University Indianapolis (IUPUI)
Smart wearables, known as emerging paradigms for vital big data capturing, have been attracting intensive attention. However, one crucial problem is their power hunger, i.e., the continuous data streaming consumes energy dramatically and requires devices to be frequently charged. Targeting this obstacle, we propose to investigate the biodynamic patterns in the data and design a data-driven approach for intelligent data compression. We leverage Deep Learning (DL), more specifically a Convolutional Autoencoder (CAE), to learn a sparse representation of the vital big data. The minimized energy need, even taking into consideration the CAE-induced overhead, is tremendously lower than the original energy need. Further, compared with a state-of-the-art wavelet compression-based method, our method can compress the data with a dramatically lower error for a similar energy budget. Our experiments and the validated approach are expected to boost the energy efficiency of wearables, and thus greatly advance ubiquitous big data applications in the era of smart health. In recent years, there has also been a growing interest in edge intelligence for emerging instantaneous big data inference. However, the inference algorithms, especially deep learning, usually have heavy computation requirements, thereby greatly limiting their deployment on the edge. We take special interest in smart-health wearable big data mining and inference. Targeting deep learning's high computational complexity and large memory and energy requirements, new approaches are needed to make deep learning algorithms ultra-efficient for wearable big data analysis. We propose to leverage knowledge distillation to achieve an ultra-efficient, edge-deployable deep learning model. More specifically, by transferring the knowledge from a teacher model to the on-edge student model, the soft target distribution of the teacher model can be effectively learned by the student model. Besides, we propose to further introduce adversarial robustness to the student model by stimulating the student model to correctly identify inputs that have adversarial perturbations. Experiments demonstrate that the knowledge distillation student model has comparable performance to the heavy teacher model but owns a substantially smaller model size. With adversarial learning, the student model has effectively preserved its robustness. In such a way, we have demonstrated that the framework with knowledge distillation and adversarial learning can not only advance ultra-efficient edge inference but also preserve robustness in the face of perturbed inputs.
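The knowledge-distillation objective described here, learning the teacher's soft target distribution, is commonly written as a blend of a temperature-softened KL divergence and the hard-label loss. A PyTorch sketch with illustrative temperature and weighting; the thesis's adversarial component is not shown:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    """Soft-target distillation: KL between temperature-softened teacher
    and student distributions, blended with the usual hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2          # classic gradient-scale correction
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(float(loss))
```
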
18

"Monocular Depth Estimation with Edge-Based Constraints and Active Learning". Master's thesis, 2019. http://hdl.handle.net/2286/R.I.54881.

Full text
Abstract
The ubiquity of single-camera systems in society has made improving monocular depth estimation a topic of increasing interest in the broader computer vision community. Inspired by recent work in sparse-to-dense depth estimation, this thesis focuses on sparse patterns generated from feature-detection-based algorithms, as opposed to the regular-grid sparse patterns used by previous work. This work focuses on using these feature-based sparse patterns to generate additional depth information by interpolating regions between clusters of samples that are in close proximity to each other. These interpolated sparse depths are used to enforce additional constraints on the network's predictions. In addition to the improved depth prediction performance observed from incorporating the sparse sample information in the network compared to pure RGB-based methods, the experiments show that actively retraining a network on a small number of samples that deviate most from the interpolated sparse depths leads to better depth prediction overall. This thesis also introduces a new metric, titled Edge, to quantify model performance in regions of an image that show the highest change in ground-truth depth values along either the x-axis or the y-axis. Existing metrics in depth estimation, like Root Mean Square Error (RMSE) and Mean Absolute Error (MAE), quantify model performance across the entire image and do not focus on specific regions of an image that are hard to predict. To this end, the proposed Edge metric focuses specifically on these hard-to-classify regions. The experiments also show that using the Edge metric as a small addition to existing loss functions, like the L1 loss in current state-of-the-art methods, leads to vastly improved performance in these hard-to-classify regions, while also improving performance across the board on every other metric.
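The proposed Edge metric restricts the error computation to regions where ground-truth depth changes most along x or y. A numpy sketch of that region-restricted idea; the percentile-based region definition below is our assumption, as the thesis defines its own:

```python
import numpy as np

def edge_metric(pred, gt, top_percent=10.0):
    """RMSE restricted to the pixels whose ground-truth depth changes most
    along x or y (a sketch of the region-focused 'Edge' idea; the exact
    region definition is the author's)."""
    dy, dx = np.gradient(gt)
    change = np.maximum(np.abs(dx), np.abs(dy))
    cutoff = np.percentile(change, 100.0 - top_percent)
    mask = change >= cutoff
    return float(np.sqrt(np.mean((pred[mask] - gt[mask]) ** 2)))

gt = np.tile(np.linspace(1.0, 10.0, 64), (64, 1))
gt[:, 32:] += 5.0                      # a sharp depth discontinuity
pred = gt + np.random.normal(0, 0.2, gt.shape)
print(edge_metric(pred, gt))           # error measured only near the jump
```
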
Dissertation/Thesis
Master's Thesis, Computer Engineering, 2019
19

(11013474), Jun Hua Wong. "Efficient Edge Intelligence In the Era of Big Data". Thesis, 2021.

Search for full text
Abstract
Smart wearables, known as emerging paradigms for vital big data capturing, have been attracting intensive attention. However, one crucial problem is their power hunger, i.e., the continuous data streaming consumes energy dramatically and requires devices to be frequently charged. Targeting this obstacle, we propose to investigate the biodynamic patterns in the data and design a data-driven approach for intelligent data compression. We leverage Deep Learning (DL), more specifically a Convolutional Autoencoder (CAE), to learn a sparse representation of the vital big data. The minimized energy need, even taking into consideration the CAE-induced overhead, is tremendously lower than the original energy need. Further, compared with a state-of-the-art wavelet compression-based method, our method can compress the data with a dramatically lower error for a similar energy budget. Our experiments and the validated approach are expected to boost the energy efficiency of wearables, and thus greatly advance ubiquitous big data applications in the era of smart health.
In recent years, there has also been a growing interest in edge intelligence for emerging instantaneous big data inference. However, the inference algorithms, especially deep learning, usually have heavy computation requirements, thereby greatly limiting their deployment on the edge. We take special interest in smart-health wearable big data mining and inference.

Targeting deep learning's high computational complexity and large memory and energy requirements, new approaches are needed to make deep learning algorithms ultra-efficient for wearable big data analysis. We propose to leverage knowledge distillation to achieve an ultra-efficient, edge-deployable deep learning model. More specifically, by transferring the knowledge from a teacher model to the on-edge student model, the soft target distribution of the teacher model can be effectively learned by the student model. Besides, we propose to further introduce adversarial robustness to the student model by stimulating the student model to correctly identify inputs that have adversarial perturbations. Experiments demonstrate that the knowledge distillation student model has comparable performance to the heavy teacher model but owns a substantially smaller model size. With adversarial learning, the student model has effectively preserved its robustness. In such a way, we have demonstrated that the framework with knowledge distillation and adversarial learning can not only advance ultra-efficient edge inference but also preserve robustness in the face of perturbed inputs.
20

Kumar, Ranjan. "Fault Diagnosis of Inclined Edge Cracked Cantilever Beam Using Vibrational Analysis and Artificial Intelligence Techniques". Thesis, 2014. http://ethesis.nitrkl.ac.in/6486/1/212ME1273-5.pdf.

Full text
Abstract
Damage is one of the vital concerns in structural analysis, both for safety and for the economic prosperity of industry. The existence of cracks influences the performance of structures as well as vibrational parameters such as modal natural frequencies, mode shapes, modal damping, and stiffness. In this research, the effects of the crack parameters (relative crack location, crack depth, and crack inclination) on the vibrational parameters of a cantilever beam with a single inclined edge crack are examined by different techniques: a numerical method, finite element analysis (FEA), and AI techniques (a fuzzy inference method and an artificial neural network). Experimental analysis is carried out to verify the results. The finite element method has been applied to derive the vibration signatures of the cracked cantilever beam. The results obtained analytically are validated against the results obtained from the FEA. The FEA simulations were carried out with ANSYS software. Different artificial intelligence techniques, based on a fuzzy controller and an artificial neural network controller, have been formulated using the computed vibrational parameters to identify inclined edge cracks in cantilever beam elements with greater precision and significantly lower computation time.
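The fuzzy/ANN controllers here solve an inverse problem: measured natural frequencies in, crack location and depth out. A sketch of that mapping with a small regression network; the forward "FEA" model below is a made-up surrogate standing in for the ANSYS computations:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

BASE = np.array([50.0, 310.0, 870.0])   # hypothetical intact-beam frequencies

def fea_surrogate(loc, depth):
    """Stand-in for the FEA/ANSYS forward model: crack parameters in,
    first three natural frequencies out (entirely made up)."""
    drop = depth * (1.2 - loc) * np.array([0.08, 0.05, 0.03])
    return BASE * (1.0 - drop)

rng = np.random.default_rng(1)
params = rng.uniform([0.1, 0.1], [0.9, 0.5], size=(500, 2))  # (location, depth)
freqs = np.array([fea_surrogate(l, d) for l, d in params])

# Inverse problem: normalised frequencies in, crack location and depth out.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=1)
net.fit(freqs / BASE, params)
print(net.predict((fea_surrogate(0.4, 0.3) / BASE).reshape(1, -1)))  # ~[0.4, 0.3]
```
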
21

Joshi, Sanket Ramesh. "HBONext: An Efficient Dnn for Light Edge Embedded Devices". Thesis, 2021. http://dx.doi.org/10.7912/C2/17.

Full text
Abstract
Indiana University-Purdue University Indianapolis (IUPUI)
Every year, the most effective deep learning models and CNN architectures are showcased based on their compatibility and performance on embedded edge hardware, especially for applications like image classification. These deep learning models necessitate a significant amount of computation and memory, so they can only be used on high-performance computing systems like CPUs or GPUs. However, they often struggle to fulfill portable specifications due to resource, energy, and real-time constraints. Hardware accelerators have recently been designed to provide the computational resources that AI and machine learning tools need. These edge accelerators have high-performance hardware which helps maintain the precision needed to accomplish this mission. Furthermore, the classification task, which investigates channel interdependencies using either depth-wise or group-wise convolutional features, has benefited from the inclusion of bottleneck modules. Because of its increasing use in portable applications, the classic inverted residual block, a well-known architectural technique, has gained more recognition. This work takes it a step further by introducing a design method for porting CNNs to low-resource embedded systems, essentially bridging the gap between deep learning models and embedded edge systems. To achieve these goals, we use closer computing strategies to reduce the computational load and memory usage while retaining excellent deployment efficiency. This thesis introduces HBONext, a mutated version of Harmonious Bottlenecks (DHbneck) combined with a flipped version of the inverted residual (FIR), which outperforms the current HBONet architecture in terms of accuracy and model size miniaturization. Unlike the current definition of the inverted residual, this FIR block performs identity mapping and spatial transformation at its higher dimensions. The HBO solution, on the other hand, focuses on two orthogonal dimensions: spatial (H/W) contraction-expansion and later channel (C) expansion-contraction, which are both organized in a bilaterally symmetric manner. HBONext is one of those versions that was designed specifically for embedded and mobile applications. In this research work, we also show how to use the NXP Bluebox 2.0 to build a real-time HBONext image classifier. The integration of the model into this hardware was a success owing to the limited model size of 3 MB. The model was trained and validated using the CIFAR10 dataset and performed exceptionally well due to its smaller size and higher accuracy. The validation accuracy of the baseline HBONet architecture is 80.97%, and the model is 22 MB in size. The proposed HBONext architecture variants, on the other hand, gave a higher validation accuracy of 89.70% and a model size of 3.00 MB, measured using the number of parameters. The performance metrics of the HBONext architecture and its variants are compared in the following chapters.
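The depthwise-separable machinery such architectures build on can be illustrated with the standard primitive below; HBONext's harmonious-bottleneck (DHbneck) and flipped-inverted-residual (FIR) blocks are the thesis's own designs and are not reproduced here:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 followed by pointwise 1x1 convolution: the factorised
    building block commonly contrasted with plain group convolutions."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU6(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv(32, 64)
print(block(torch.randn(1, 32, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```
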
22

(10716561), Sanket Ramesh Joshi. "HBONEXT: AN EFFICIENT DNN FOR LIGHT EDGE EMBEDDED DEVICES". Thesis, 2021.

Search for full text
Abstract
Every year, the most effective deep learning models and CNN architectures are showcased based on their compatibility and performance on embedded edge hardware, especially for applications like image classification. These deep learning models necessitate a significant amount of computation and memory, so they can only be used on high-performance computing systems like CPUs or GPUs. However, they often struggle to fulfill portable specifications due to resource, energy, and real-time constraints. Hardware accelerators have recently been designed to provide the computational resources that AI and machine learning tools need. These edge accelerators have high-performance hardware which helps maintain the precision needed to accomplish this mission. Furthermore, the classification task, which investigates channel interdependencies using either depth-wise or group-wise convolutional features, has benefited from the inclusion of bottleneck modules. Because of its increasing use in portable applications, the classic inverted residual block, a well-known architectural technique, has gained more recognition. This work takes it a step further by introducing a design method for porting CNNs to low-resource embedded systems, essentially bridging the gap between deep learning models and embedded edge systems. To achieve these goals, we use closer computing strategies to reduce the computational load and memory usage while retaining excellent deployment efficiency. This thesis introduces HBONext, a mutated version of Harmonious Bottlenecks (DHbneck) combined with a flipped version of the inverted residual (FIR), which outperforms the current HBONet architecture in terms of accuracy and model size miniaturization. Unlike the current definition of the inverted residual, this FIR block performs identity mapping and spatial transformation at its higher dimensions. The HBO solution, on the other hand, focuses on two orthogonal dimensions: spatial (H/W) contraction-expansion and later channel (C) expansion-contraction, which are both organized in a bilaterally symmetric manner. HBONext is one of those versions that was designed specifically for embedded and mobile applications. In this research work, we also show how to use the NXP Bluebox 2.0 to build a real-time HBONext image classifier. The integration of the model into this hardware was a success owing to the limited model size of 3 MB. The model was trained and validated using the CIFAR10 dataset and performed exceptionally well due to its smaller size and higher accuracy. The validation accuracy of the baseline HBONet architecture is 80.97%, and the model is 22 MB in size. The proposed HBONext architecture variants, on the other hand, gave a higher validation accuracy of 89.70% and a model size of 3.00 MB, measured using the number of parameters. The performance metrics of the HBONext architecture and its variants are compared in the following chapters.
23

"Study of Knowledge Transfer Techniques For Deep Learning on Edge Devices". Master's thesis, 2018. http://hdl.handle.net/2286/R.I.49325.

Full text
Abstract
With the emergence of the edge computing paradigm, many applications such as image recognition and augmented reality require machine learning (ML) and artificial intelligence (AI) tasks to be performed on edge devices. Most AI and ML models are large and computationally heavy, whereas edge devices are usually equipped with limited computational and storage resources. Such models can be compressed and reduced in order to be placed on edge devices, but they may lose their capability and may not generalize and perform as well as large models. Recent works used knowledge transfer techniques to transfer information from a large network (termed teacher) to a small one (termed student) in order to improve the performance of the latter. This approach seems promising for learning on edge devices, but a thorough investigation of its effectiveness is lacking. The purpose of this work is to provide an extensive study of the performance (both in terms of accuracy and convergence speed) of knowledge transfer, considering different student-teacher architectures, datasets, and different techniques for transferring knowledge from teacher to student. A good performance improvement is obtained by transferring knowledge from both the intermediate layers and the last layer of the teacher to a shallower student. But other architectures and transfer techniques do not fare so well, and some of them even lead to a negative performance impact. For example, a smaller and shorter network trained with knowledge transfer on Caltech 101 achieved a significant accuracy improvement of 7.36% and converged 16 times faster compared to the same network trained without knowledge transfer. On the other hand, a smaller network that is thinner than the teacher network performed worse, with an accuracy drop of 9.48% on Caltech 101, even with knowledge transfer.
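Transferring knowledge from intermediate layers, the configuration that fared best here, is commonly implemented as a FitNets-style hint loss: match the student's features to the teacher's through a learned projection. A PyTorch sketch with illustrative dimensions; the thesis's exact transfer variants may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HintLoss(nn.Module):
    """FitNets-style transfer: match a student's intermediate feature map
    to the teacher's through a learned 1x1 'regressor' projection."""
    def __init__(self, student_ch, teacher_ch):
        super().__init__()
        self.proj = nn.Conv2d(student_ch, teacher_ch, kernel_size=1)

    def forward(self, student_feat, teacher_feat):
        return F.mse_loss(self.proj(student_feat), teacher_feat.detach())

hint = HintLoss(student_ch=32, teacher_ch=128)
s = torch.randn(4, 32, 14, 14)   # student intermediate activation
t = torch.randn(4, 128, 14, 14)  # teacher activation at the matched layer
loss = hint(s, t)                # combined with the last-layer soft loss
print(float(loss))
```
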
Dissertation/Thesis
Master's Thesis, Computer Science, 2018
24

(10911822), Priyank Kalgaonkar. "AI on the Edge with CondenseNeXt: An Efficient Deep Neural Network for Devices with Constrained Computational Resources". Thesis, 2021.

Search for full text
Abstract
The research work presented within this thesis proposes a neoteric variant of deep convolutional neural network architecture, CondenseNeXt, designed specifically for ARM-based embedded computing platforms with constrained computational resources. CondenseNeXt is an improved version of CondenseNet, the baseline architecture whose roots can be traced back to ResNet. CondenseNeXt replaces group convolutions in CondenseNet with depthwise separable convolutions and introduces group-wise pruning, a model compression technique, to prune (remove) redundant and insignificant elements that either are irrelevant or do not affect the performance of the network upon disposition. Cardinality, a new dimension added to the existing spatial dimensions, and a class-balanced focal loss function, a weighting factor inversely proportional to the number of samples, have been incorporated into the design of CondenseNeXt's algorithm in order to relieve the harsh effects of pruning. Furthermore, extensive analyses of this novel CNN architecture were performed on three benchmark image datasets: CIFAR-10, CIFAR-100, and ImageNet, by deploying the trained weights onto an ARM-based embedded computing platform, the NXP BlueBox 2.0, for real-time image classification. The outputs are observed in real time in RTMaps Remote Studio's console to verify the correctness of the classes being predicted. CondenseNeXt achieves state-of-the-art image classification performance on the three benchmark datasets, including CIFAR-10 (4.79% top-1 error), CIFAR-100 (21.98% top-1 error), and ImageNet (7.91% single-model, single-crop top-5 error), and up to a 59.98% reduction in forward FLOPs compared to CondenseNet. CondenseNeXt can also achieve a final trained model size of 2.9 MB, albeit at the cost of a 2.26% accuracy loss. Thus, it performs image classification on ARM-based computing platforms without requiring CUDA-enabled GPU support, with outstanding efficiency.
26

Kalgaonkar, Priyank B. "AI on the Edge with CondenseNeXt: An Efficient Deep Neural Network for Devices with Constrained Computational Resources". Thesis, 2021. http://dx.doi.org/10.7912/C2/64.

Full text
Abstract
Indiana University-Purdue University Indianapolis (IUPUI)
The research work presented in this thesis proposes a neoteric variant of deep convolutional neural network architecture, CondenseNeXt, designed specifically for ARM-based embedded computing platforms with constrained computational resources. CondenseNeXt is an improved version of CondenseNet, the baseline architecture, whose roots can be traced back to ResNet. CondenseNeXt replaces the group convolutions in CondenseNet with depthwise separable convolutions and introduces group-wise pruning, a model compression technique, to remove redundant and insignificant elements that either are irrelevant or do not affect the performance of the network. Cardinality, a new dimension added to the existing spatial dimensions, and a class-balanced focal loss function, with a weighting factor inversely proportional to the number of samples, have been incorporated into the design of CondenseNeXt's algorithm in order to relieve the harsh effects of pruning. Furthermore, extensive analyses of this novel CNN architecture were performed on three benchmark image datasets, CIFAR-10, CIFAR-100, and ImageNet, by deploying the trained weights onto an ARM-based embedded computing platform, the NXP BlueBox 2.0, for real-time image classification. The outputs are observed in real time in the RTMaps Remote Studio console to verify the correctness of the predicted classes. CondenseNeXt achieves state-of-the-art image classification performance on the three benchmark datasets, including CIFAR-10 (4.79% top-1 error), CIFAR-100 (21.98% top-1 error), and ImageNet (7.91% single-model, single-crop top-5 error), with up to a 59.98% reduction in forward FLOPs compared to CondenseNet. CondenseNeXt can also achieve a final trained model size of 2.9 MB, albeit at the cost of a 2.26% loss in accuracy. It thus performs image classification on ARM-based computing platforms with outstanding efficiency, without requiring CUDA-enabled GPU support.
27

廖溢森. "Based on Architecture for Artificial Intelligent AIoT to Evaluate the Performance of Edge Computing Scheme—Validation with Case Study". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/fc49kk.

Full text
Abstract
PhD
Da-Yeh University
Department of Electrical Engineering
107 (ROC academic year, i.e., 2018–2019)
In the Internet of Things (IoT) era, the demand for low-latency computing for delay-sensitive applications, e.g., location-based augmented reality maps, real-time smart sensors, and real-time navigation using wearables, has been growing rapidly. Edge computing is a distributed computing paradigm in which computation is performed on distributed device nodes, known as smart devices or edge devices, as opposed to taking place primarily in a centralized cloud environment. In this thesis, a novel framework, referred to as the artificial-intelligence IoT (AIoT) edge computing scheme, is proposed to tackle these challenges. In order to develop a decentralized and dynamic software environment, realizing the vision of edge computing requires data processing, resource allocation, and latency-sensitive path selection across the edge landscape. Such a dynamic, decentralized software environment is implemented and simulated here for the edge computing framework. The AIoT edge computing scheme provides the tools to manage IoT services in the edge landscape by means of a real-world test-bed. Furthermore, the proposed framework facilitates communication between devices, coordination among edge devices, selection among different paths, and dynamic resource allocation in the edge landscape. The proposed techniques are evaluated through extensive experiments that demonstrate the effectiveness, scalability, and performance efficiency of the proposed model.
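The latency-sensitive placement decision described above can be pictured with a toy rule: estimate the end-to-end response time for each candidate node and pick the minimum. The cost model, node attributes, and numbers below are illustrative assumptions, not the thesis's actual scheme.

from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    rtt_ms: float        # network round-trip time to the device
    mips: float          # available compute capacity
    queue_ms: float      # current queueing delay

def estimated_latency(node: EdgeNode, task_mi: float, payload_kb: float,
                      bandwidth_kbps: float = 10_000) -> float:
    transfer = payload_kb / bandwidth_kbps * 1000.0   # upload time in ms
    compute = task_mi / node.mips * 1000.0            # execution time in ms
    return node.rtt_ms + node.queue_ms + transfer + compute

def select_node(nodes, task_mi, payload_kb):
    return min(nodes, key=lambda n: estimated_latency(n, task_mi, payload_kb))

nodes = [
    EdgeNode("gateway", rtt_ms=2, mips=500, queue_ms=40),
    EdgeNode("edge-server", rtt_ms=8, mips=4000, queue_ms=5),
    EdgeNode("cloud", rtt_ms=80, mips=20000, queue_ms=1),
]
best = select_node(nodes, task_mi=300, payload_kb=512)
print(best.name)   # "edge-server": lowest estimated end-to-end latency

With these illustrative numbers the edge server wins despite the cloud's far greater compute, because the round-trip and transfer costs dominate, which is the trade-off motivating edge offloading.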
28

(9777542), Mohamed Anver. "Fuzzy algorithms for image enhancement and edge detection". Thesis, 2004. https://figshare.com/articles/thesis/Fuzzy_algorithms_for_image_enhancement_and_edge_detection/13465622.

Full text
Abstract
In this thesis we investigate how artificial intelligence techniques, namely fuzzy logic and genetic/evolutionary algorithms, can be used for digital image processing applications. We demonstrate our techniques in two main research areas: removal of heavy impulse noise from corrupted gray-scale images, and edge detection in digital images. Very often, fuzzy logic systems need to deal with a large number of rules. This results in two major design issues: (i) how to formulate the fuzzy knowledge base using human expertise and experience, and (ii) how to reduce the high computational power and long processing times required. In this thesis we use evolutionary algorithms (including coevolutionary algorithms) to learn fuzzy knowledge bases, addressing design issue (i), while using multi-layered and hierarchical fuzzy logic systems to reduce the number of rules, and hence the computational overhead involved, thereby addressing issue (ii). In this research, when fuzzy rules are learnt using evolutionary algorithms, each individual in the evolutionary algorithm is encoded to uniquely represent a fuzzy knowledge base. The fitness of each individual is calculated with respect to a predefined reference; in the case of an algorithm learning to enhance a digital image, this reference is often the uncorrupted, perfect image. Designing multi-layered and hierarchical fuzzy structures involves breaking down the total set of rules to be fed into the system's multiple fuzzy layers. This process needs careful consideration in forming the appropriate fuzzy layers, as well as in deciding the parameters input to different layers, so that the desired result is obtained with the highest precision in the least computation time. Coevolutionary algorithms are powerful tools that can be used when several factors contributing to system performance need to be learnt simultaneously: multiple populations of candidate solutions are evolved in parallel, and the fitness of individuals in each population is evaluated by forming a vector of candidate solutions selected from each population. These artificial intelligence techniques are applied in this thesis to enhancement and edge detection in digital images.
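The learning loop the abstract describes, where each individual encodes a fuzzy knowledge base and fitness is measured against the uncorrupted reference image, might look like the following toy 1-D sketch. The membership function, filter, and evolutionary operators are deliberately simplified stand-ins, not the thesis's actual rule bases.

import random

def fuzzy_filter(pixels, thresholds):
    """Blend a pixel toward the window median according to its membership in
    the 'impulse' fuzzy set (deviation from the median), with learned cuts."""
    low, high = thresholds
    out = list(pixels)
    for i in range(1, len(pixels) - 1):
        window = sorted(pixels[i - 1:i + 2])
        med = window[1]
        dev = abs(pixels[i] - med)
        # Piecewise-linear membership: 0 below `low`, 1 above `high`.
        mu = min(1.0, max(0.0, (dev - low) / max(high - low, 1e-6)))
        out[i] = round((1 - mu) * pixels[i] + mu * med)
    return out

def fitness(thresholds, noisy, clean):
    restored = fuzzy_filter(noisy, thresholds)
    return -sum((r - c) ** 2 for r, c in zip(restored, clean))  # negative squared error

# 1-D toy signal with impulse noise; a real run would use 2-D image windows.
clean = [10, 12, 11, 13, 12, 11, 10, 12]
noisy = [10, 12, 255, 13, 0, 11, 10, 12]

population = [(random.uniform(0, 50), random.uniform(50, 200)) for _ in range(20)]
for _ in range(30):  # crude truncation-selection evolution with Gaussian mutation
    population.sort(key=lambda t: fitness(t, noisy, clean), reverse=True)
    parents = population[:5]
    population = parents + [(max(0.0, p[0] + random.gauss(0, 5)),
                             max(1.0, p[1] + random.gauss(0, 5)))
                            for p in parents for _ in range(3)]
best = max(population, key=lambda t: fitness(t, noisy, clean))
print(best, fuzzy_filter(noisy, best))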
29

Ganin, Iaroslav. "Natural image processing and synthesis using deep learning". Thesis, 2019. http://hdl.handle.net/1866/23437.

Full text
Abstract
In the present thesis, we study how deep neural networks can be applied to various tasks in computer vision. Computer vision is an interdisciplinary field that deals with the understanding of digital images and video. Traditionally, the problems arising in this domain were tackled using heavily hand-engineered, ad hoc methods; until recently, a typical computer vision system consisted of a sequence of independent modules that barely talked to each other. Such an approach is quite reasonable in the case of limited data, as it takes full advantage of the researcher's domain expertise, but this strength turns into a weakness if some input scenarios are overlooked in the algorithm design process. With rapidly increasing volumes and varieties of data and the advent of cheaper and faster computational resources, end-to-end deep neural networks have become an appealing alternative to traditional computer vision pipelines. We demonstrate this in a series of research articles, each of which considers a particular task of either image analysis or synthesis and presents a solution based on a "deep" backbone. In the first article, we deal with a classic low-level vision problem: edge detection. Inspired by a top-performing non-neural approach, we take a step towards building an end-to-end system by combining feature extraction and description in a single convolutional network. The resulting fully data-driven method matches or surpasses the detection quality of existing conventional approaches in the settings for which they were designed, while being significantly more usable in out-of-domain situations. In our second article, we introduce a custom architecture for image manipulation based on the idea that most of the pixels in the output image can be directly copied from the input. This technique bears several significant advantages over the naive black-box neural approach: it retains the level of detail of the original images, does not introduce artifacts due to insufficient capacity of the underlying network, and simplifies the training process, to name a few. We demonstrate the efficiency of the proposed architecture on the challenging gaze correction task, where our system achieves excellent results. In the third article, we diverge slightly from pure computer vision and study the more general problem of domain adaptation. There, we introduce a novel training-time algorithm (i.e., adaptation is attained by using an auxiliary objective in addition to the main one). We seek to extract features that maximally confuse a dedicated network, called the domain classifier, while remaining useful for the task at hand. The domain classifier is learned simultaneously with the features and attempts to tell whether those features come from the source or the target domain. The proposed technique is easy to implement, yet results in superior performance on all the standard benchmarks. Finally, the fourth article presents a new kind of generative model for image data. Unlike conventional neural-network-based approaches, our system, dubbed SPIRAL, describes images in terms of concise low-level programs executed by the off-the-shelf rendering software humans use to create visual content. Among other things, this allows SPIRAL not to waste its capacity on the minutiae of datasets and to focus more on global structure. The latent space of our model is easily interpretable by design and provides means for predictable image manipulation. We test our approach on several popular datasets and demonstrate its power and flexibility.
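One common realization of the third article's auxiliary-objective adaptation is a gradient reversal layer: the domain classifier trains normally, while the feature extractor receives negated gradients and so learns domain-confusing features. The sketch below uses PyTorch with placeholder layer sizes and random stand-in batches.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)          # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None   # negate gradients flowing back

features = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
label_head = nn.Linear(128, 10)     # main task: class prediction
domain_head = nn.Linear(128, 2)     # auxiliary task: source vs. target

params = list(features.parameters()) + list(label_head.parameters()) + list(domain_head.parameters())
opt = torch.optim.SGD(params, lr=0.01)

x_src = torch.randn(16, 1, 28, 28); y_src = torch.randint(0, 10, (16,))
x_tgt = torch.randn(16, 1, 28, 28)  # unlabeled target-domain batch

f_src, f_tgt = features(x_src), features(x_tgt)
task_loss = nn.functional.cross_entropy(label_head(f_src), y_src)
f_all = torch.cat([f_src, f_tgt])
d_labels = torch.cat([torch.zeros(16, dtype=torch.long), torch.ones(16, dtype=torch.long)])
domain_loss = nn.functional.cross_entropy(
    domain_head(GradReverse.apply(f_all, 1.0)), d_labels)
opt.zero_grad()
(task_loss + domain_loss).backward()
opt.step()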
