Dissertations on the topic "Edge artificial intelligence"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 29 dissertations for research on the topic "Edge artificial intelligence".
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication in PDF format and read its online abstract whenever the relevant parameters are available in the metadata.
Browse dissertations from a wide range of disciplines and compile your bibliography correctly.
Antonini, Mattia. „From Edge Computing to Edge Intelligence: exploring novel design approaches to intelligent IoT applications“. Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/308630.
Abernot, Madeleine. „Digital oscillatory neural network implementation on FPGA for edge artificial intelligence applications and learning“. Electronic Thesis or Diss., Université de Montpellier (2022-....), 2023. http://www.theses.fr/2023UMONS074.
In the last decades, the multiplication of edge devices in many industry domains has drastically increased the amount of data to treat and the complexity of tasks to solve, motivating the emergence of probabilistic machine learning algorithms with artificial intelligence (AI) and artificial neural networks (ANNs). However, classical edge hardware systems based on the von Neumann architecture cannot efficiently handle this large amount of data. Thus, novel neuromorphic computing paradigms with distributed memory are explored, mimicking the structure and data representation of biological neural networks. Lately, most of the neuromorphic paradigm research has focused on spiking neural networks (SNNs), taking inspiration from signal transmission through spikes in biological networks. In SNNs, information is transmitted through spikes using the time domain to provide natural and low-energy continuous data computation. Recently, oscillatory neural networks (ONNs) appeared as an alternative neuromorphic paradigm for low-power, fast, and efficient time-domain computation. ONNs are networks of coupled oscillators emulating the collective computational properties of brain areas through oscillations. Recent ONN implementations, combined with the emergence of low-power compact devices for ONN, encourage renewed attention to ONN for edge computing. The state-of-the-art ONN is configured as an oscillatory Hopfield network (OHN) with fully coupled recurrent connections to perform pattern recognition with limited accuracy. However, the large number of OHN synapses limits the scalability of ONN implementations and the ONN application scope. The focus of this thesis is to study if and how ONN can solve meaningful edge AI applications using a proof-of-concept of the ONN paradigm with a digital implementation on FPGA. First, it explores novel learning algorithms for OHN, unsupervised and supervised, to improve accuracy and to provide continual on-chip learning. Then, it studies novel ONN architectures, taking inspiration from state-of-the-art layered ANN models, to create cascaded OHNs and multi-layer ONNs. The novel learning algorithms and architectures are demonstrated with the digital design performing edge AI applications, from image processing with pattern recognition, image edge detection, feature extraction, and image classification, to robotics applications with obstacle avoidance.
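The abstract above centers on oscillatory Hopfield networks (OHNs), in which coupled oscillators relax into phase patterns that encode stored memories. Purely as an illustrative sketch of that computing principle, and not as code from the thesis, the following Python snippet simulates a tiny phase-domain OHN with Hebbian-style coupling; the network size, step count, and coupling constant are assumptions chosen for demonstration.

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian-style coupling weights for an oscillatory Hopfield network.
    patterns: array of shape (P, N) with entries in {-1, +1}."""
    P, N = patterns.shape
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)          # no self-coupling
    return W

def run_ohn(W, phi0, steps=200, dt=0.1, coupling=1.0):
    """Relax oscillator phases with Kuramoto-like dynamics:
    dphi_i/dt = -K * sum_j W_ij * sin(phi_i - phi_j)."""
    phi = phi0.copy()
    for _ in range(steps):
        diff = phi[:, None] - phi[None, :]                    # pairwise phase differences
        phi -= dt * coupling * np.sum(W * np.sin(diff), axis=1)
    return phi

# Store one pattern and retrieve it from a corrupted version.
rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=16)
W = hebbian_weights(pattern[None, :])
noisy = pattern * rng.choice([1, 1, 1, -1], size=16)          # flip roughly a quarter of the bits
phi0 = np.where(noisy > 0, 0.0, np.pi)                        # encode bits as phases 0 / pi
phi = run_ohn(W, phi0)
retrieved = np.where(np.cos(phi - phi[0]) > 0, 1, -1)         # read out relative to oscillator 0
print("recovered:", np.array_equal(retrieved, pattern) or np.array_equal(retrieved, -pattern))
```

The readout is relative to oscillator 0 because, as in a classical Hopfield network, a stored pattern and its global inversion are equally valid attractors.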
Hasanaj, Enis, Albert Aveler and William Söder. „Cooperative edge deepfake detection“. Thesis, Jönköping University, JTH, Avdelningen för datateknik och informatik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-53790.
WoldeMichael, Helina Getachew. „Deployment of AI Model inside Docker on ARM-Cortex-based Single-Board Computer : Technologies, Capabilities, and Performance“. Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17267.
Der volle Inhalt der QuellePELUSO, VALENTINO. „Optimization Tools for ConvNets on the Edge“. Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2845792.
Laroui, Mohammed. „Distributed edge computing for enhanced IoT devices and new generation network efficiency“. Electronic Thesis or Diss., Université Paris Cité, 2022. http://www.theses.fr/2022UNIP7078.
Traditional cloud infrastructure will face a series of challenges due to the centralization of computing, storage, and networking in a small number of data centers, and the long distance between connected devices and remote data centers. To meet this challenge, edge computing seems to be a promising possibility that provides resources closer to IoT devices. In the cloud computing model, compute resources and services are often centralized in large data centers that end-users access over the network. This model has an important economic value and more efficient resource-sharing capabilities. New forms of end-user experience, such as the Internet of Things, require computing resources close to the end-user devices at the network edge. To meet this need, edge computing relies on a model in which computing resources are distributed to the edge of a network as needed, while decentralizing data processing from the cloud to the edge as far as possible. Thus, it is possible to quickly obtain actionable information based on data that varies over time. In this thesis, we propose novel optimization models to optimize resource utilization at the network edge for two edge computing research directions, service offloading and vehicular edge computing, and we study different use cases in each research direction. For the optimal solutions, we first propose, for service offloading, optimal algorithms for service placement at the network edge (tasks, Virtual Network Functions (VNFs), Service Function Chains (SFCs)), taking the computing resource constraints into account. Moreover, for vehicular edge computing, we propose exact models for maximizing the coverage of vehicles by both taxis and Unmanned Aerial Vehicles (UAVs) for online video streaming applications. In addition, we propose optimal edge-autopilot VNF offloading at the network edge for autonomous driving. The evaluation results show the efficiency of the proposed algorithms in small-scale networks in terms of time, cost, and resource utilization. To deal with dense networks with a high number of devices and with scalability issues, we propose large-scale algorithms that support a huge number of devices, data, and user requests. Heuristic algorithms are proposed for SFC orchestration and for maximizing the coverage of mobile edge servers (vehicles). Moreover, artificial intelligence algorithms (machine learning, deep learning, and deep reinforcement learning) are used for 5G VNF slice placement, edge-autopilot VNF placement, and autonomous UAV navigation. The numerical results are close to those of the exact algorithms while being highly efficient in terms of time.
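The abstract above revolves around placing services (tasks, VNFs, SFCs) on edge nodes under computing-resource constraints. As a hedged illustration of that problem setting, and not of the thesis's own optimization models, the following Python sketch shows a simple greedy placement heuristic; the node names, capacities, and service demands are invented.

```python
from dataclasses import dataclass, field

@dataclass
class EdgeNode:
    name: str
    cpu_capacity: float                      # remaining CPU units
    hosted: list = field(default_factory=list)

def greedy_place(services, nodes):
    """Assign each (service_name, cpu_demand) pair to the feasible node
    with the largest remaining capacity; return the placement map."""
    placement = {}
    for name, demand in sorted(services, key=lambda s: -s[1]):   # largest demands first
        feasible = [n for n in nodes if n.cpu_capacity >= demand]
        if not feasible:
            placement[name] = None                               # would be offloaded to the cloud
            continue
        target = max(feasible, key=lambda n: n.cpu_capacity)
        target.cpu_capacity -= demand
        target.hosted.append(name)
        placement[name] = target.name
    return placement

# Hypothetical example: three edge nodes, five VNF-like services.
nodes = [EdgeNode("edge-1", 8.0), EdgeNode("edge-2", 4.0), EdgeNode("edge-3", 2.0)]
services = [("firewall", 3.0), ("transcoder", 5.0), ("ids", 2.0), ("cache", 1.0), ("lb", 4.0)]
print(greedy_place(services, nodes))
```

Exact formulations (e.g., integer programs) give better placements on small instances; greedy heuristics like this one trade optimality for scalability, which mirrors the small-scale/large-scale split described in the abstract.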
MAZZIA, VITTORIO. „Machine Learning Algorithms and their Embedded Implementation for Service Robotics Applications“. Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2968456.
Labouré, Iooss Marie-José. „Faisabilité d'une carte électronique d'opérateurs de seuillage : déformation d'objets plans lors de transformations de type morphologique“. Saint-Etienne, 1987. http://www.theses.fr/1987STET4014.
Busacca, Fabio Antonino. „AI for Resource Allocation and Resource Allocation for AI: a two-fold paradigm at the network edge“. Doctoral thesis, Università degli Studi di Palermo, 2022. https://hdl.handle.net/10447/573371.
Hachouf, Fella. „Télédétection des contours linéaires“. Rouen, 1988. http://www.theses.fr/1988ROUES027.
Delort, François. „Systeme d'approximation polygonale des contours pour application en vision industrielle“. Clermont-Ferrand 2, 1988. http://www.theses.fr/1988CLF2D209.
FERDINANDI, Marco. „A Learning Sensors Platform for Health and Safety Applications“. Doctoral thesis, Università degli studi di Cassino, 2020. http://hdl.handle.net/11580/74754.
Lanzarone, Lorenzo Biagio. „Manutenzione predittiva di macchinari industriali tramite tecniche di intelligenza artificiale: una valutazione sperimentale“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22853/.
Longo, Eugenio. „AI e IoT: contesto e stato dell’arte“. Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2022.
Desai, Ujjaval Y., Marcelo M. Mizuki, Ichiro Masaki and Berthold K. P. Horn. „Edge and Mean Based Image Compression“. 1996. http://hdl.handle.net/1721.1/5943.
So, Austin G., and 蘇偉賢. „A Hierarchical Approach for Efficient Workload Allocation for Edge Artificial Intelligence“. Thesis, 2018. http://ndltd.ncl.edu.tw/handle/5zcfg6.
Der volle Inhalt der Quelle國立清華大學
資訊工程學系所
107
A critical constraint in edge artificial intelligence (AI) is its limited computing power. Because of this, reliance on edge AI results in an inevitable accuracy trade-off. One way to increase the overall accuracy is to introduce a workload allocation scheme that assigns input data requiring complex computations to a server AI while retaining simple ones at the edge AI. In order to achieve this, we utilize an authentic operation (AO) which assesses the prediction confidence of the edge AI. We base our research on previous work that uses fine-grained pair-wise thresholding. In this work, we propose a coarse-grained cluster-wise hierarchical thresholding. Moreover, the mean squared error (MSE) is used to regularize the edge AI's prediction based on the obtained threshold data. We further modify the existing AO block by adding a second-level criterion which serves as a validation layer, with the aim of further reducing the transmission count. Our methodology reduces the threshold values by 90% for a 10-class dataset and reduces data transmission by 15.20% while retaining overall accuracy.
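The abstract above keeps high-confidence predictions on the edge AI and offloads low-confidence inputs to a server AI. The following Python sketch illustrates that general routing idea with a plain softmax-confidence threshold; it is not the thesis's AO block or its cluster-wise hierarchical thresholds, and the threshold value and example logits are assumptions.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)     # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def route(edge_logits, threshold=0.8):
    """Return 'edge' if the edge model is confident enough, else 'server'.
    edge_logits: 1-D array of class scores produced by the edge model."""
    confidence = softmax(edge_logits).max()
    return "edge" if confidence >= threshold else "server"

# Hypothetical edge outputs for two inputs of a 10-class problem.
confident = np.array([8.0, 0.5, 0.1, 0.0, 0.2, 0.1, 0.3, 0.0, 0.1, 0.2])
uncertain = np.array([1.1, 1.0, 0.9, 1.0, 0.8, 1.0, 0.9, 1.1, 1.0, 0.9])
print(route(confident))   # expected: 'edge'   (prediction kept locally)
print(route(uncertain))   # expected: 'server' (input offloaded for re-inference)
```

Lowering the threshold keeps more inputs on the edge (less transmission, possibly lower accuracy); raising it offloads more aggressively, which is the trade-off the allocation scheme tunes.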
Wong, Jun Hua. „Efficient Edge Intelligence in the Era of Big Data“. Thesis, 2021. http://hdl.handle.net/1805/26385.
Smart wearables, known as emerging paradigms for capturing vital big data, have been attracting intensive attention. However, one crucial problem is their power-hungriness, i.e., continuous data streaming consumes energy dramatically and requires devices to be frequently charged. Targeting this obstacle, we propose to investigate the biodynamic patterns in the data and design a data-driven approach for intelligent data compression. We leverage Deep Learning (DL), more specifically a Convolutional Autoencoder (CAE), to learn a sparse representation of the vital big data. The minimized energy need, even taking into consideration the CAE-induced overhead, is tremendously lower than the original energy need. Further, compared with a state-of-the-art wavelet compression-based method, our method can compress the data with a dramatically lower error for a similar energy budget. Our experiments and the validated approach are expected to boost the energy efficiency of wearables, and thus greatly advance ubiquitous big data applications in the era of smart health. In recent years, there has also been a growing interest in edge intelligence for emerging instantaneous big data inference. However, the inference algorithms, especially deep learning, usually have heavy computation requirements, thereby greatly limiting their deployment on the edge. We take special interest in smart health wearable big data mining and inference. Targeting deep learning's high computational complexity and large memory and energy requirements, new approaches are needed to make deep learning algorithms ultra-efficient for wearable big data analysis. We propose to leverage knowledge distillation to achieve an ultra-efficient, edge-deployable deep learning model. More specifically, through transferring the knowledge from a teacher model to the on-edge student model, the soft target distribution of the teacher model can be effectively learned by the student model. Besides, we propose to further introduce adversarial robustness to the student model by stimulating the student model to correctly identify inputs that have adversarial perturbations. Experiments demonstrate that the knowledge distillation student model has comparable performance to the heavy teacher model but a substantially smaller model size. With adversarial learning, the student model has effectively preserved its robustness. In such a way, we have demonstrated that the framework with knowledge distillation and adversarial learning can not only advance ultra-efficient edge inference, but also preserve robustness in the face of perturbed inputs.
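The abstract above relies on knowledge distillation, in which a student model learns the teacher's softened output distribution. The snippet below is a hedged NumPy sketch of that standard loss (a temperature-softened KL term plus a hard-label cross-entropy term), not the thesis's exact training setup; the temperature, mixing weight, and logits are assumptions.

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, hard_label, T=4.0, alpha=0.7):
    """Weighted sum of (i) KL divergence between temperature-softened teacher and
    student distributions and (ii) standard cross-entropy on the hard label."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)))
    soft_term = (T ** 2) * kl                      # T^2 keeps the gradient scale comparable
    hard_term = -np.log(softmax(student_logits)[hard_label])
    return alpha * soft_term + (1.0 - alpha) * hard_term

# Hypothetical logits for a 5-class problem.
teacher = np.array([4.0, 1.0, 0.5, 0.2, 0.1])
student = np.array([2.5, 1.2, 0.8, 0.3, 0.2])
print(distillation_loss(student, teacher, hard_label=0))
```

The soft term carries the teacher's "dark knowledge" about relative class similarities, which is what lets the much smaller student approach the teacher's accuracy.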
„Monocular Depth Estimation with Edge-Based Constraints and Active Learning“. Master's thesis, 2019. http://hdl.handle.net/2286/R.I.54881.
Master's thesis, Computer Engineering, 2019.
(11013474), Jun Hua Wong. „Efficient Edge Intelligence In the Era of Big Data“. Thesis, 2021.
Kumar, Ranjan. „Fault Diagnosis of Inclined Edge Cracked Cantilever Beam Using Vibrational Analysis and Artificial Intelligence Techniques“. Thesis, 2014. http://ethesis.nitrkl.ac.in/6486/1/212ME1273-5.pdf.
Joshi, Sanket Ramesh. „HBONext: An Efficient Dnn for Light Edge Embedded Devices“. Thesis, 2021. http://dx.doi.org/10.7912/C2/17.
Every year, the most effective deep learning models and CNN architectures are showcased based on their compatibility and performance on embedded edge hardware, especially for applications like image classification. These deep learning models necessitate a significant amount of computation and memory, so they can only be used on high-performance computing systems like CPUs or GPUs. However, they often struggle to fulfill portable specifications due to resource, energy, and real-time constraints. Hardware accelerators have recently been designed to provide the computational resources that AI and machine learning tools need. These edge accelerators have high-performance hardware which helps maintain the precision needed to accomplish this mission. Furthermore, this classification problem, which investigates channel interdependencies using either depth-wise or group-wise convolutional features, has benefited from the inclusion of bottleneck modules. Because of its increasing use in portable applications, the classic inverted residual block, a well-known architectural technique, has gained more recognition. This work takes it a step further by introducing a design method for porting CNNs to low-resource embedded systems, essentially bridging the gap between deep learning models and embedded edge systems. To achieve these goals, we use closer computing strategies to reduce the computational load and memory usage while retaining excellent deployment efficiency. This thesis introduces HBONext, a mutated version of Harmonious Bottlenecks (DHbneck) combined with a Flipped version of the Inverted Residual (FIR), which outperforms the current HBONet architecture in terms of accuracy and model size miniaturization. Unlike the current definition of the inverted residual, this FIR block performs identity mapping and spatial transformation at its higher dimensions. The HBO solution, on the other hand, focuses on two orthogonal dimensions: spatial (H/W) contraction-expansion and later channel (C) expansion-contraction, which are both organized in a bilaterally symmetric manner. HBONext is one of those versions that was designed specifically for embedded and mobile applications. In this research work, we also show how to use the NXP Bluebox 2.0 to build a real-time HBONext image classifier. The integration of the model into this hardware has been very successful owing to the limited model size of 3 MB. The model was trained and validated using the CIFAR-10 dataset, on which it performed exceptionally well due to its smaller size and higher accuracy. The validation accuracy of the baseline HBONet architecture is 80.97%, and the model is 22 MB in size. The proposed HBONext variants, on the other hand, achieve a higher validation accuracy of 89.70% and a model size of 3.00 MB, measured using the number of parameters.
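The abstract above builds on the inverted residual block (1x1 expansion, depth-wise convolution, 1x1 linear projection, with an identity shortcut). The PyTorch sketch below shows a generic block of that kind, not the HBONext or FIR design itself; the expansion factor and channel counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Generic MobileNetV2-style inverted residual block:
    1x1 expansion -> 3x3 depth-wise conv -> 1x1 linear projection (+ skip)."""
    def __init__(self, in_ch, out_ch, stride=1, expansion=6):
        super().__init__()
        hidden = in_ch * expansion
        self.use_skip = (stride == 1 and in_ch == out_ch)
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),           # expand channels
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride=stride,
                      padding=1, groups=hidden, bias=False),   # depth-wise spatial filtering
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),          # linear projection back down
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_skip else y

# Quick shape check on a dummy CIFAR-sized input.
x = torch.randn(1, 32, 32, 32)
print(InvertedResidual(32, 32)(x).shape)   # torch.Size([1, 32, 32, 32])
```

Designs such as HBONet and HBONext rearrange where the expansion and contraction happen (spatially and across channels), but the expand/depth-wise/project pattern above is the common starting point.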
(10716561), Sanket Ramesh Joshi. „HBONEXT: AN EFFICIENT DNN FOR LIGHT EDGE EMBEDDED DEVICES“. Thesis, 2021.
„Study of Knowledge Transfer Techniques For Deep Learning on Edge Devices“. Master's thesis, 2018. http://hdl.handle.net/2286/R.I.49325.
Master's thesis, Computer Science, 2018.
(10911822), Priyank Kalgaonkar. „AI on the Edge with CondenseNeXt: An Efficient Deep Neural Network for Devices with Constrained Computational Resources“. Thesis, 2021.
Kalgaonkar, Priyank B. „AI on the Edge with CondenseNeXt: An Efficient Deep Neural Network for Devices with Constrained Computational Resources“. Thesis, 2021. http://dx.doi.org/10.7912/C2/64.
The research work presented within this thesis proposes a neoteric variant of deep convolutional neural network architecture, CondenseNeXt, designed specifically for ARM-based embedded computing platforms with constrained computational resources. CondenseNeXt is an improved version of CondenseNet, the baseline architecture whose roots can be traced back to ResNet. CondenseNeXt replaces group convolutions in CondenseNet with depthwise separable convolutions and introduces group-wise pruning, a model compression technique, to prune (remove) redundant and insignificant elements that either are irrelevant or do not affect the performance of the network. Cardinality, a new dimension added to the existing spatial dimensions, and a class-balanced focal loss function, a weighting factor inversely proportional to the number of samples, have been incorporated into the design of CondenseNeXt's algorithm in order to relieve the harsh effects of pruning. Furthermore, extensive analyses of this novel CNN architecture were performed on three benchmark image datasets: CIFAR-10, CIFAR-100, and ImageNet, by deploying the trained weights onto an ARM-based embedded computing platform, the NXP BlueBox 2.0, for real-time image classification. The outputs are observed in real time in RTMaps Remote Studio's console to verify the correctness of the classes being predicted. CondenseNeXt achieves state-of-the-art image classification performance on the three benchmark datasets, including CIFAR-10 (4.79% top-1 error), CIFAR-100 (21.98% top-1 error), and ImageNet (7.91% single-model, single-crop top-5 error), and up to a 59.98% reduction in forward FLOPs compared to CondenseNet. CondenseNeXt can also achieve a final trained model size of 2.9 MB, albeit at the cost of a 2.26% accuracy loss. It thus performs image classification on ARM-based computing platforms with outstanding efficiency, without requiring CUDA-enabled GPU support.
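The abstract above mentions a class-balanced focal loss, which reweights classes inversely to their (effective) number of samples and down-weights easy examples. The NumPy sketch below illustrates that general loss, not CondenseNeXt's exact implementation; the beta, gamma, and sample counts are assumptions.

```python
import numpy as np

def class_balanced_focal_loss(logits, label, samples_per_class, beta=0.999, gamma=2.0):
    """Class-balanced focal loss for a single example.
    Per-class weight follows (1 - beta) / (1 - beta**n_c); the focal factor
    (1 - p_t)**gamma down-weights well-classified examples."""
    z = logits - logits.max()
    probs = np.exp(z) / np.exp(z).sum()
    n_c = np.asarray(samples_per_class, dtype=float)
    weights = (1.0 - beta) / (1.0 - beta ** n_c)
    weights = weights / weights.sum() * len(n_c)        # normalize weights to mean 1
    p_t = probs[label]
    return weights[label] * (1.0 - p_t) ** gamma * (-np.log(p_t))

# Hypothetical 3-class problem with a heavily imbalanced training set.
logits = np.array([2.0, 0.5, -1.0])
print(class_balanced_focal_loss(logits, label=2, samples_per_class=[5000, 500, 50]))
```

Rare classes (here the one with 50 samples) receive a larger weight, which counteracts the tendency of pruning and imbalanced data to erode accuracy on minority classes.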
廖溢森. „Based on Architecture for Artificial Intelligent AIoT to Evaluate the Perfomance of Edge Computing Scheme—Validation with Case Study“. Thesis, 2019. http://ndltd.ncl.edu.tw/handle/fc49kk.
Da-Yeh University, Department of Electrical Engineering, academic year 107 (2018/19).
In the Internet of Things (IoT) era, the demand for low-latency computing for delay-sensitive applications, e.g., location-based augmented-reality map applications, real-time smart sensors, and real-time navigation using wearables, has been growing rapidly. Edge computing is a distributed computing paradigm in which the computation is performed entirely on distributed device nodes, known as smart devices or edge devices, as opposed to primarily taking place in a centralized cloud environment. In this work, a novel framework referred to as the artificial-intelligent IoT edge computing scheme is proposed to tackle these challenges. In order to develop a decentralized and dynamic software environment, the realization of the edge computing vision involves data processing, resource allocation, and latency-sensitive path selection in the edge landscape. Such a dynamic, decentralized software environment is implemented and simulated in this work for the edge computing framework. The artificial-intelligent IoT edge computing scheme provides the tools to manage IoT services in the edge landscape by means of a real-world test-bed. Furthermore, the proposed framework facilitates communication between devices, coordination among edge devices, path selection, and dynamic resource allocation in the edge landscape. The proposed techniques are evaluated through extensive experiments that demonstrate the effectiveness, scalability, and performance efficiency of the proposed model.
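The abstract above includes latency-sensitive path selection across the edge landscape. As a hedged, generic illustration rather than the thesis's actual scheme, the Python sketch below picks the lowest-latency route through a small edge-device graph with Dijkstra's algorithm; the topology and latency values are invented.

```python
import heapq

def lowest_latency_path(graph, src, dst):
    """Dijkstra over a dict-of-dicts graph whose edge weights are latencies (ms).
    Returns (total_latency, path) or (inf, []) if dst is unreachable."""
    queue = [(0.0, src, [src])]
    visited = set()
    while queue:
        latency, node, path = heapq.heappop(queue)
        if node == dst:
            return latency, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, cost in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (latency + cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical edge landscape; latencies in milliseconds.
graph = {
    "sensor": {"edge-A": 2.0, "edge-B": 5.0},
    "edge-A": {"edge-B": 1.0, "cloud": 40.0},
    "edge-B": {"cloud": 35.0},
}
print(lowest_latency_path(graph, "sensor", "cloud"))   # expected: (38.0, ['sensor', 'edge-A', 'edge-B', 'cloud'])
```

In a real deployment the latencies would be measured and updated dynamically, which is where learning-based selection, as described in the abstract, goes beyond a static shortest-path computation.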
(9777542), Mohamed Anver. „Fuzzy algorithms for image enhancement and edge detection“. Thesis, 2004. https://figshare.com/articles/thesis/Fuzzy_algorithms_for_image_enhancement_and_edge_detection/13465622.
Ganin, Iaroslav. „Natural image processing and synthesis using deep learning“. Thesis, 2019. http://hdl.handle.net/1866/23437.
In the present thesis, we study how deep neural networks can be applied to various tasks in computer vision. Computer vision is an interdisciplinary field that deals with the understanding of digital images and video. Traditionally, the problems arising in this domain were tackled using heavily hand-engineered, ad hoc methods. A typical computer vision system up until recently consisted of a sequence of independent modules which barely talked to each other. Such an approach is quite reasonable in the case of limited data, as it takes major advantage of the researcher's domain expertise. This strength turns into a weakness if some of the input scenarios are overlooked in the algorithm design process. With the rapidly increasing volumes and varieties of data and the advent of cheaper and faster computational resources, end-to-end deep neural networks have become an appealing alternative to the traditional computer vision pipelines. We demonstrate this in a series of research articles, each of which considers a particular task of either image analysis or synthesis and presents a solution based on a "deep" backbone. In the first article, we deal with a classic low-level vision problem of edge detection. Inspired by a top-performing non-neural approach, we take a step towards building an end-to-end system by combining feature extraction and description in a single convolutional network. The resulting fully data-driven method matches or surpasses the detection quality of the existing conventional approaches in the settings for which they were designed, while being significantly more usable in out-of-domain situations. In our second article, we introduce a custom architecture for image manipulation based on the idea that most of the pixels in the output image can be directly copied from the input. This technique bears several significant advantages over the naive black-box neural approach: it retains the level of detail of the original images, does not introduce artifacts due to insufficient capacity of the underlying neural network, and simplifies the training process, to name a few. We demonstrate the efficiency of the proposed architecture on the challenging gaze correction task, where our system achieves excellent results. In the third article, we slightly diverge from pure computer vision and study a more general problem of domain adaptation. There, we introduce a novel training-time algorithm (i.e., adaptation is attained by using an auxiliary objective in addition to the main one). We seek to extract features that maximally confuse a dedicated network called the domain classifier while being useful for the task at hand. The domain classifier is learned simultaneously with the features and attempts to tell whether those features are coming from the source or the target domain. The proposed technique is easy to implement, yet results in superior performance on all the standard benchmarks. Finally, the fourth article presents a new kind of generative model for image data. Unlike conventional neural-network-based approaches, our system, dubbed SPIRAL, describes images in terms of concise low-level programs executed by off-the-shelf rendering software used by humans to create visual content. Among other things, this allows SPIRAL not to waste its capacity on the minutiae of datasets and to focus more on the global structure. The latent space of our model is easily interpretable by design and provides means for predictable image manipulation.
We test our approach on several popular datasets and demonstrate its power and flexibility.
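The third article summarized above extracts features that confuse a domain classifier trained jointly with the feature extractor. A common way to implement this adversarial objective is a gradient reversal layer; the PyTorch sketch below is a generic illustration of that mechanism, not necessarily the thesis's exact formulation, and the layer sizes and domain labels are assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda on the way
    back, so the feature extractor learns to *confuse* the domain classifier."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

features = nn.Sequential(nn.Linear(10, 16), nn.ReLU())          # shared feature extractor
domain_classifier = nn.Linear(16, 2)                            # predicts source vs. target domain

x = torch.randn(4, 10)
f = features(x)
domain_logits = domain_classifier(GradReverse.apply(f, 1.0))    # reversed gradients flow into `features`
loss = nn.functional.cross_entropy(domain_logits, torch.tensor([0, 0, 1, 1]))
loss.backward()
print(features[0].weight.grad.shape)   # gradients reach the feature extractor, reversed in sign
```

The domain classifier minimizes its own loss as usual, while the sign flip makes the feature extractor maximize it, pushing the learned features toward domain invariance.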