Theses on the topic "Embedded AI"

Follow this link to see other types of publications on the topic: Embedded AI.

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the 15 best theses for your research on the topic "Embedded AI".

Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Chollet, Nicolas. "Embedded-AI-enabled semantic IoT platform for agroecology". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG078.

Full text
Abstract
Modern agriculture requires a profound transformation to address the challenges of sustainable development while qualitatively and quantitatively feeding the growing global population. In this light, farmers are adopting "Smart Farming", also called precision agriculture. It is an agricultural method that leverages technology to enhance the efficiency, productivity, and sustainability of agricultural production. This approach encompasses the use of sensors, the Internet of Things (IoT), Artificial Intelligence (AI), data analysis, robotics, and various other digital tools optimizing aspects such as soil management, irrigation, pest control, and livestock management. The goal is to increase production while reducing resource consumption, minimizing waste, and improving product quality. However, despite its benefits and successful deployment in various projects, smart agriculture encounters limitations, especially within the context of IoT. Firstly, platforms must be capable of perceiving data in the environment, interpreting it, and making decisions to assist in farm management. The volume, variety, and velocity of those data, combined with a wide diversity of objects and the advent of AI embedded in sensors, make communication challenging on wireless agricultural networks. Secondly, research tends to focus on projects addressing the issues of non-sustainable conventional agriculture, and projects related to small-scale farms focused on agroecology are rare. In this context, this thesis explores the creation of an IoT platform composed of a network of semantic smart sensors, aiming to guide farmers in transitioning and managing their farm sustainably while minimizing human intervention.
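As a rough illustration of what a "semantic smart sensor" in such a platform might emit, the hedged sketch below annotates a raw reading with SOSA/SSN-style observation metadata before it leaves the node. The sensor identifier, property name, and message format here are invented for illustration; the thesis's actual ontology and payload are not reproduced.

```python
import json
from datetime import datetime, timezone

def make_observation(sensor_id: str, observed_property: str,
                     value: float, unit: str) -> str:
    """Wrap a raw reading in a SOSA/SSN-style observation record.

    Field names loosely follow the W3C SOSA vocabulary
    (sosa:madeBySensor, sosa:observedProperty, sosa:hasResult,
    sosa:resultTime); the identifiers below are illustrative only.
    """
    observation = {
        "@type": "sosa:Observation",
        "sosa:madeBySensor": sensor_id,              # hypothetical sensor URI
        "sosa:observedProperty": observed_property,  # e.g. soil moisture
        "sosa:hasResult": {"value": value, "unit": unit},
        "sosa:resultTime": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(observation)

# Example: a soil-moisture reading annotated at the edge before transmission.
print(make_observation("urn:farm:sensor/soil-01", "soilMoisture", 0.27, "m3/m3"))
```

Annotating readings at the node is what lets downstream services reason over heterogeneous sensors without per-device parsing logic.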
2

Biswas, Avishek. "Energy-efficient smart embedded memory design for IoT and AI". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/117831.

Full text
Abstract
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from the student-submitted PDF version of thesis.
Includes bibliographical references (pages 137-146).
Static Random Access Memory (SRAM) continues to be the embedded memory of choice for modern System-on-a-Chip (SoC) applications, thanks to aggressive CMOS scaling, which keeps on providing higher storage density per unit silicon area. As memory sizes continue to grow, increased bit-cell variation limits the supply voltage (Vdd) scaling of the memory. Furthermore, larger memories lead to data transfer over longer distances on chip, which leads to increased power dissipation. In the era of the Internet-of-Things (IoT) and Artificial Intelligence (AI), memory bandwidth and power consumption are often the main bottlenecks for SoC solutions. Therefore, in addition to Vdd scaling, this thesis also explores leveraging data properties and application-specific features to design more tailored and "smarter" memories. First, a 128Kb 6T bit-cell based SRAM is designed in a modern 28nm FDSOI process. Dynamic forward body-biasing (DFBB) is used to improve the write operation and reduce the minimum Vdd to 0.34V, even with 6T bit-cells. A new layout technique is proposed for the array to reduce the energy overhead of DFBB and decrease the unwanted bit-line switching for un-selected columns in the SRAM, providing dynamic energy savings. The 6T SRAM also uses data prediction in its read path to provide up to 36% further dynamic energy savings with correct predictions. The second part of this thesis explores in-memory computation for reducing data movement and increasing memory bandwidth in data-intensive machine learning applications. A 16Kb SRAM with embedded dot-product computation capability is designed for binary-weight neural networks. Highly parallel analog processing inside the memory array provided better energy efficiency than conventional digital implementations. With our variation-tolerant architecture and support of multi-bit resolutions for inputs/outputs, > 98% classification accuracy was demonstrated on the MNIST dataset for the handwritten digit recognition application. In the last part of the thesis, variation-tolerant read-sensing architectures are explored for future non-volatile resistive memories, e.g. STT-RAM.
by Avishek Biswas.
Ph. D.
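The abstract above describes a 16Kb SRAM that evaluates dot products for binary-weight neural networks inside the memory array. The sketch below is a purely functional NumPy model of that arithmetic, not the analog SRAM mechanism from the thesis: with weights constrained to +1/-1, every multiply-accumulate collapses into additions and subtractions.

```python
import numpy as np

def binary_weight_dot(inputs: np.ndarray, real_weights: np.ndarray) -> float:
    """Functional model of a binary-weight dot product.

    Weights are binarized to +1/-1 (their sign), so the multiply-accumulate
    reduces to adding or subtracting the inputs -- the operation an
    in-memory compute array accumulates along its bit-lines.
    """
    binary_weights = np.where(real_weights >= 0, 1, -1)
    # Equivalent to sum(inputs[i] * w[i]) but with no real multiplications:
    return inputs[binary_weights == 1].sum() - inputs[binary_weights == -1].sum()

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=64)              # e.g. multi-bit activations
w = rng.normal(size=64)                     # full-precision weights to binarize
print(binary_weight_dot(x, w))
print(np.dot(x, np.where(w >= 0, 1, -1)))   # reference result, should match
```

The appeal for embedded memories is that the costly operation left is accumulation, which an analog array can perform in parallel across many columns.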
3

Bartoli, Giacomo. "Edge AI: Deep Learning techniques for Computer Vision applied to embedded systems". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16820/.

Full text
Abstract
In the last decade, Machine Learning techniques have been used in different fields, ranging from finance to healthcare and even marketing. Amongst all these techniques, the ones adopting a Deep Learning approach were revealed to outperform humans in tasks such as object detection, image classification and speech recognition. This thesis introduces the concept of Edge AI: that is the possibility to build learning models capable of making inference locally, without any dependence on expensive servers or cloud services. A first case study we consider is based on the Google AIY Vision Kit, an intelligent camera equipped with a graphic board to optimize Computer Vision algorithms. Then, we test the performances of CORe50, a dataset for continuous object recognition, on embedded systems. The techniques developed in these chapters will be finally used to solve a challenge within the Audi Autonomous Driving Cup 2018, where a mobile car equipped with a camera, sensors and a graphic board must recognize pedestrians and stop before hitting them.
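Local ("edge") inference of the kind described above is commonly run through an embedded interpreter such as TensorFlow Lite. The sketch below shows the generic call pattern only: the model file name is a placeholder, the input is a dummy frame, and the thesis itself targets the AIY Vision Kit and an automotive platform with their own toolchains rather than this exact API.

```python
import numpy as np
import tensorflow as tf

# Load a hypothetical quantized classifier exported for the edge device.
interpreter = tf.lite.Interpreter(model_path="pedestrian_classifier.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A dummy frame standing in for a camera capture; shape must match the model.
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()                      # inference happens entirely on-device
scores = interpreter.get_tensor(output_details[0]["index"])
print("class scores:", scores)
```

No network round trip is involved, which is exactly the latency and availability argument made for Edge AI.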
4

Royles, Christopher Andrew. "Intelligent presentation and tailoring of online legal information". Thesis, University of Liverpool, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343616.

Full text
5

Mazzia, Vittorio. "Machine Learning Algorithms and their Embedded Implementation for Service Robotics Applications". Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2968456.

Full text
6

Mocerino, Luca. "Hardware-Aware Cross-Layer Optimizations of Deep Neural Networks for Embedded Systems". Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2972558.

Full text
7

Fredriksson, Tomas, and Rickard Svensson. "Analysis of machine learning for human motion pattern recognition on embedded devices". Thesis, KTH, Mekatronik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-246087.

Full text
Abstract
With an increased amount of connected devices and the recent surge of artificial intelligence, the two technologies need more attention to fully bloom as a useful tool for creating new and exciting products. As machine learning traditionally is implemented on computers and online servers, this thesis explores the possibility to extend machine learning to an embedded environment. This evaluation of existing machine learning in embedded systems with limited processing capabilities has been carried out in the specific context of an application involving classification of basic human movements. Previous research and implementations indicate that it is possible with some limitations; this thesis aims to answer which hardware limitation is affecting classification and what classification accuracy the system can reach on an embedded device. The tests included human motion data from an existing dataset and included four different machine learning algorithms on three devices. Support Vector Machine (SVM) was found to perform best compared to CART, Random Forest and AdaBoost. It reached a classification accuracy of 84.69% between six different included motions with a classification time of 16.88 ms per classification on a Cortex M4 processor. This is the same classification accuracy as the one obtained on the host computer with more computational capabilities. Other hardware and machine learning algorithm combinations had a slight decrease in classification accuracy and an increase in classification time. Conclusions could be drawn that memory on the embedded device affects which algorithms can be run and the complexity of data that can be extracted in form of features. Processing speed mostly affects classification time. Additionally, the performance of the machine learning system is connected to the type of data that is to be observed, which means that the performance of different setups differs depending on the use case.
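To make the comparison reported above concrete, the hedged sketch below trains the same family of classifiers with scikit-learn on synthetic motion-style feature vectors. The features, dataset size, and hyperparameters are placeholders rather than the thesis setup, and on the Cortex-M4 target the trained models would be exported to C code rather than run through Python.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Stand-in for extracted motion features (e.g. accelerometer statistics), 6 classes.
X = rng.normal(size=(1200, 12))
y = rng.integers(0, 6, size=1200)
X += y[:, None] * 0.5                      # give the classes some separation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "SVM": SVC(kernel="rbf"),
    "CART": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(n_estimators=50),
    "AdaBoost": AdaBoostClassifier(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy {model.score(X_te, y_te):.2%}")
```

The memory observation in the abstract matters here: tree ensembles grow with the number of estimators, while an SVM's footprint is driven by its support vectors, so the "best" model on a microcontroller depends on what fits in RAM and flash.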
8

Hasanzadeh, Mujtaba, and Alexandra Hengl. "Real-Time Pupillary Analysis By An Intelligent Embedded System". Thesis, Mälardalens högskola, Inbyggda system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44352.

Full text
Abstract
With no online pupillary analysis methods available today, both the medical and the research fields are left to carry out a lengthy, manual and often faulty examination. A real-time, intelligent, embedded-systems solution to pupillary analysis would help reduce faulty diagnosis, speed up the analysis procedure by eliminating the human expert operator and, in general, provide a versatile and highly adaptable research tool. Therefore, this thesis has sought to investigate, develop and test possible system designs for pupillary analysis, with the aim of caffeine detection. A pair of LED manipulator glasses has been designed to standardize the illumination method across testing. A data analysis method for the raw pupillary data has been established offline and then adapted to a real-time platform. An ANN was chosen as the classification algorithm. The accuracy of the ANN from the offline analysis was 94%, while for the online classification the obtained accuracy was 17%. A real-time data communication and synchronization method has been developed. The resulting system showed reliable and fast execution times: data analysis and classification took no longer than 2 ms, faulty data detection showed consistent results, and data communication suffered no message loss. In conclusion, it is reported that a real-time, intelligent, embedded solution is feasible for pupillary analysis.
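Since the reported pipeline keeps analysis and classification under 2 ms, one simple way to validate such a budget is to time each inference against a deadline. The sketch below does this with scikit-learn's MLPClassifier standing in for the thesis's ANN, on made-up pupil-feature dimensions and labels; timings on a PC are obviously not the embedded platform's timings, so this only illustrates the check itself.

```python
import time
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))          # placeholder pupil-response features
y = rng.integers(0, 2, size=500)       # synthetic caffeine / no-caffeine labels

ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
ann.fit(X, y)

DEADLINE_S = 0.002                     # 2 ms per-classification budget
sample = X[:1]
start = time.perf_counter()
label = ann.predict(sample)[0]
elapsed = time.perf_counter() - start
print(f"predicted {label} in {elapsed * 1e3:.3f} ms "
      f"({'within' if elapsed <= DEADLINE_S else 'over'} budget)")
```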
9

Tuveri, Giuseppe. "Integrated support for Adaptivity and Fault-tolerance in MPSoCs". Doctoral thesis, Università degli Studi di Cagliari, 2013. http://hdl.handle.net/11584/266097.

Full text
Abstract
The technology improvement and the adoption of more and more complex applications in consumer electronics are forcing a rapid increase in the complexity of multiprocessor systems on chip (MPSoCs). Following this trend, MPSoCs are becoming increasingly dynamic and adaptive, for several reasons. One of these is that applications are getting intrinsically dynamic. Another reason is that the workload on emerging MPSoCs cannot be predicted because modern systems are open to new incoming applications at run-time. A third reason which calls for adaptivity is the decreasing component reliability associated with technology scaling. Components below the 32-nm node are more inclined to temporal or even permanent faults. In case of a malfunctioning system component, the rest of the system is supposed to take over its tasks. Thus, the system adaptivity goal shall influence several design decisions, which are listed below: 1) The applications should be specified such that system adaptivity can be easily supported. To this end, we consider Polyhedral Process Networks (PPNs) as the model of computation to specify applications. PPNs are composed of concurrent and autonomous processes that communicate with each other using bounded FIFO channels. Moreover, in PPNs the control is completely distributed, as well as the memories. This represents a good match with the emerging MPSoC architectures, in which processing elements and memories are usually distributed. Most importantly, the simple operational semantics of PPNs allows for an easy adoption of system adaptivity mechanisms. 2) The hardware platform should guarantee the flexibility that adaptivity mechanisms require. Networks-on-Chip (NoCs) are emerging communication infrastructures for MPSoCs that, among many other advantages, allow for system adaptivity. This is because NoCs are generic, since the same platform can be used to run different applications, or to run the same application with different mappings of processes. However, there is a mismatch between the generic structure of the NoCs and the semantics of the PPN model. Therefore, in this thesis we investigate and propose several communication approaches to overcome this mismatch. 3) The system must be able to change the process mapping at run-time, using process migration. To this end, a process migration mechanism has been proposed and evaluated. This mechanism takes into account specific requirements of the embedded domain such as predictability and efficiency. To face the problem of graceful degradation of the system, we enriched the MADNESS NoC platform by adding fault tolerance support at both software and hardware level. The proposed process migration mechanism can be exploited to cope with permanent faults by migrating the processes running on the faulty processing element. A fast heuristic is used to determine the new mapping of the processes to tiles. The experimental results prove that the overhead in terms of execution time, due to the remapping heuristic together with the actual process migration, is almost negligible compared to the execution time of the whole application. This means that the proposed approach allows the system to change its performance metrics and to react to faults without a substantial impact on the user experience.
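The PPN model of computation the thesis builds on reduces to autonomous processes that communicate only through bounded FIFO channels, blocking when a channel is full or empty. The sketch below emulates that semantics with Python threads and a bounded queue; it illustrates the model only, not the MADNESS NoC implementation or its migration mechanism.

```python
import threading
import queue

CHANNEL_DEPTH = 4                      # bounded FIFO, as in the PPN model
channel = queue.Queue(maxsize=CHANNEL_DEPTH)

def producer(n_tokens: int) -> None:
    for i in range(n_tokens):
        channel.put(i)                 # blocks when the FIFO is full
    channel.put(None)                  # end-of-stream marker

def consumer() -> None:
    while True:
        token = channel.get()          # blocks when the FIFO is empty
        if token is None:
            break
        print("consumed", token)

p = threading.Thread(target=producer, args=(10,))
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```

Because all inter-process state flows through such channels, a process can in principle be stopped, moved to another tile, and reconnected to the same FIFOs, which is what makes run-time migration tractable in this model.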
10

Antonini, Mattia. "From Edge Computing to Edge Intelligence: exploring novel design approaches to intelligent IoT applications". Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/308630.

Full text
Abstract
The Internet of Things (IoT) has deeply changed how we interact with our world. Today, smart homes, self-driving cars, connected industries, and wearables are just a few mainstream applications where IoT plays the role of enabling technology. When IoT became popular, Cloud Computing was already a mature technology able to deliver the computing resources necessary to execute heavy tasks (e.g., data analytics, storage, AI tasks, etc.) on data coming from IoT devices, so practitioners started to design and implement their applications exploiting this approach. However, after a hype that lasted for a few years, cloud-centric approaches have started showing some of their main limitations when dealing with the connectivity of many devices with remote endpoints, such as high latency, bandwidth usage, big data volumes, reliability, privacy, and so on. At the same time, a few new distributed computing paradigms emerged and gained attention. Among them, Edge Computing allows shifting the execution of applications to the edge of the network (a partition of the network physically close to data sources) and provides improvements over the Cloud Computing paradigm. Its success has been fostered by new powerful embedded computing devices able to satisfy the ever-increasing computing requirements of many IoT applications. Given this context, how can next-generation IoT applications take advantage of the opportunity offered by Edge Computing to shift the processing from the cloud toward the data sources and exploit ever more powerful devices? This thesis provides the ingredients and the guidelines for practitioners to foster the migration from cloud-centric to novel distributed design approaches for IoT applications at the edge of the network, addressing the issues of the original approach. This requires designing the processing pipeline of applications by considering the system requirements and the constraints imposed by embedded devices. To make this process smoother, the transition is split into different steps, starting with the off-loading of the processing (including the Artificial Intelligence algorithms) to the edge of the network, then the distribution of computation across multiple edge devices and even closer to data sources based on system constraints, and, finally, the optimization of the processing pipeline and AI models to run efficiently on target IoT edge devices. Each step has been validated by delivering a real-world IoT application that fully exploits the novel approach. This paradigm shift leads the way toward the design of Edge Intelligence IoT applications that efficiently and reliably execute Artificial Intelligence models at the edge of the network.
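The last step described above, optimizing AI models so they fit the target edge device, is frequently done through post-training quantization. The sketch below shows the basic symmetric int8 mapping on a weight tensor; it is a generic illustration of the idea, not the specific toolchain or models used in the thesis.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric post-training quantization of a float32 tensor to int8."""
    scale = np.abs(weights).max() / 127.0          # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(64, 64)).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).max()
print(f"int8 storage is 4x smaller; max reconstruction error {err:.5f}")
```

A 4x smaller, integer-only model is often the difference between a network that fits an edge device's memory and one that must stay in the cloud.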
11

Shrivastwa, Ritu Ranjan. "Enhancements in Embedded Systems Security using Machine Learning". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT051.

Full text
Abstract
The list of connected devices (or IoT devices) is growing longer with time, and so is their vulnerability to targeted attacks originating from network or physical penetration, popularly known as Cyber Physical Security (CPS) attacks. While security sensors and obfuscation techniques exist to counteract and enhance security, it is possible to fool these classical security countermeasures with sophisticated attack equipment and methodologies, as shown in recent literature. Additionally, end-node embedded system design is bound by area and is required to be scalable, making it difficult to adjoin a complex sensing mechanism against cyber-physical attacks. The solution may lie in an Artificial Intelligence (AI) security core (soft or hard) that monitors data behaviour internally from various components. Additionally, the AI core can monitor the overall device behaviour, including attached sensors, to detect any outlier activity and provide a smart sensing approach to attacks. AI in the hardware security domain is still not widely accepted due to the probabilistic behaviour of advanced deep learning techniques, although there have been works showing practical implementations. This work aims to establish a proof of concept and build trust in AI for security through a detailed analysis of different Machine Learning (ML) techniques and their use cases in hardware security, followed by a series of case studies that provide a practical framework and guidelines for using AI on various embedded security fronts. Applications include PUF predictability assessment, sensor fusion, Side Channel Attacks (SCA), Hardware Trojan detection, control flow integrity, adversarial AI, etc.
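One of the use cases listed above, AI-assisted sensor fusion for attack detection, can be approximated by training an outlier detector on feature vectors aggregated from on-chip sensors. The sketch below uses scikit-learn's IsolationForest on synthetic data; the sensor features, ranges, and thresholds are invented for illustration and do not reflect the thesis's actual detector.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Each row fuses readings from several on-chip sensors (voltage, temperature,
# clock jitter, ...) sampled during normal operation -- synthetic placeholders.
normal = rng.normal(loc=[1.0, 45.0, 0.02], scale=[0.02, 1.5, 0.005], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A glitch-like sample: voltage dip plus abnormal jitter, as fault injection might cause.
suspicious = np.array([[0.80, 46.0, 0.15]])
flag = detector.predict(suspicious)[0]      # +1 = inlier, -1 = outlier
print("attack suspected" if flag == -1 else "looks normal")
```

The probabilistic nature the abstract mentions shows up here too: such a detector raises alarms with some false-positive rate, which is exactly why the thesis argues for careful, case-by-case evaluation before trusting AI in a security role.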
12

Ringenson, Josefin. "Efficiency of CNN on Heterogeneous Processing Devices". Thesis, Linköpings universitet, Programvara och system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-155034.

Full text
Abstract
In the development of advanced driver assistance systems, computer vision problems need to be optimized to run efficiently on embedded platforms. Convolutional neural network (CNN) accelerators have proven to be very efficient for embedded camera platforms, such as the ones used for automotive vision systems. Therefore, the focus of this thesis is to evaluate the efficiency of a CNN on a future embedded heterogeneous processing device. The memory size in an embedded system is often very limited, and it is necessary to divide the input into multiple tiles. In addition, there are power and speed constraints that need to be met to be able to use a computer vision system in a car. To increase efficiency and optimize the memory usage, different methods for CNN layer fusion are proposed and evaluated for a variety of tile sizes. Several different layer fusion methods and input tile sizes are chosen as optimal solutions, depending on the depth of the layers in the CNN. The solutions investigated in the thesis are most efficient for deep CNN layers, where the number of channels is high.
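The trade-off evaluated above, how large an input tile can be once consecutive CNN layers are fused, is at heart a buffer-size calculation. The hedged sketch below estimates the on-chip activation memory needed per tile for a fused chain of layers; the layer shapes are invented, and halo pixels, weights, and scheduler overheads are deliberately ignored, so this is a simplified model rather than the thesis's method.

```python
def fused_tile_buffer_bytes(tile_h: int, tile_w: int, channels_per_layer: list,
                            bytes_per_element: int = 1) -> int:
    """Rough activation-buffer footprint for a tile pushed through fused layers.

    With fusion, only the current layer's input and output tiles must be
    resident at once, so the worst adjacent pair of channel counts dominates.
    Border halos required by spatial kernels are ignored in this estimate.
    """
    worst_pair = max(channels_per_layer[i] + channels_per_layer[i + 1]
                     for i in range(len(channels_per_layer) - 1))
    return tile_h * tile_w * worst_pair * bytes_per_element

# Hypothetical fused chain: 32 -> 64 -> 128 channels, int8 activations.
for tile in (16, 32, 64):
    kib = fused_tile_buffer_bytes(tile, tile, [32, 64, 128]) / 1024
    print(f"{tile}x{tile} tile needs about {kib:.0f} KiB of activation buffer")
```

Estimates like this explain the abstract's conclusion: deeper layers with many channels dominate the buffer requirement, so that is where fusion and tile-size choices pay off most.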
13

Habib, Yassine. "Monocular SLAM densification for 3D mapping and autonomous drone navigation". Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2024. http://www.theses.fr/2024IMTA0390.

Full text
Abstract
Aerial drones are essential in search and rescue missions as they provide fast reconnaissance of the mission area, such as a collapsed building. Creating a dense and metric 3D map in real time is crucial to capture the structure of the environment and enable autonomous navigation. The recommended approach for this task is to use Simultaneous Localization and Mapping (SLAM) from a monocular camera synchronized with an Inertial Measurement Unit (IMU). Current state-of-the-art algorithms maximize efficiency by triangulating a minimum number of points, resulting in a sparse 3D point cloud. Few works address monocular SLAM densification, typically by using deep neural networks to predict a dense depth map from a single image. Most are not metric or are too complex for use in embedded applications. In this thesis, we identify and evaluate a state-of-the-art monocular SLAM baseline under challenging drone conditions. We present a practical pipeline for densifying monocular SLAM by applying monocular depth prediction to construct a dense and metric 3D voxel map. Using voxels allows the efficient construction and maintenance of the map through raycasting, and allows for volumetric multi-view fusion. Finally, we propose a scale recovery procedure that uses the sparse and metric depth estimates of SLAM to refine the predicted dense depth maps. Our approach has been evaluated on conventional benchmarks and shows promising results for practical applications.
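The scale-recovery step described above can be reduced, in its simplest form, to estimating one multiplicative factor between the sparse metric depths from SLAM and the relative depths predicted by the network at the same pixels. A common robust choice is the median of their ratios, sketched below on synthetic values; the thesis's actual refinement is more involved than this single global scale.

```python
import numpy as np

def recover_scale(sparse_metric_depth: np.ndarray, predicted_depth: np.ndarray) -> float:
    """Median-ratio scale aligning a relative depth map with metric SLAM points."""
    valid = (sparse_metric_depth > 0) & (predicted_depth > 0)
    return float(np.median(sparse_metric_depth[valid] / predicted_depth[valid]))

rng = np.random.default_rng(0)
true_depth = rng.uniform(1.0, 10.0, size=200)               # metres
predicted = true_depth / 2.5 * rng.normal(1.0, 0.05, 200)   # up-to-scale, noisy prediction
sparse = np.where(rng.random(200) < 0.1, true_depth, 0.0)   # SLAM triangulates ~10% of pixels

s = recover_scale(sparse, predicted)
print(f"recovered scale {s:.2f} (expected about 2.50)")
metric_dense = predicted * s    # densified depth map, now in metres
```

Once the dense map is metric, each depth image can be integrated into the voxel grid by raycasting and fused across views.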
14

Mainsant, Marion. "Apprentissage continu sous divers scénarios d'arrivée de données : vers des applications robustes et éthiques de l'apprentissage profond". Electronic Thesis or Diss., Université Grenoble Alpes, 2023. http://www.theses.fr/2023GRALS045.

Full text
Abstract
The human brain continuously receives information from external stimuli. It then has the ability to adapt to new knowledge while retaining past events. Nowadays, more and more artificial intelligence algorithms aim to learn knowledge in the same way as a human being. They therefore have to be able to adapt to a large variety of data arriving sequentially and available over a limited period of time. However, when a deep learning algorithm learns new data, the knowledge contained in the neural network overwrites the old one and the majority of the past information is lost, a phenomenon referred to in the literature as catastrophic forgetting. Numerous methods have been proposed to overcome this issue, but as they were focused on providing the best performance, studies have moved away from real-life applications where algorithms need to adapt to changing environments and perform no matter the type of data arrival. In addition, most of the best state-of-the-art methods are replay methods, which retain a small memory of the past and consequently do not preserve data privacy. In this thesis, we propose to explore data arrival scenarios existing in the literature, with the aim of applying them to facial emotion recognition, which is essential for human-robot interactions. To this end, we present Dream Net - Data-Free, a privacy-preserving algorithm able to adapt to a large number of data arrival scenarios without storing any past samples. After demonstrating the robustness of this algorithm compared to existing state-of-the-art methods on standard computer vision databases (Mnist, Cifar-10, Cifar-100 and Imagenet-100), we show that it can also adapt to more complex facial emotion recognition databases. We then propose to embed the algorithm on an Nvidia Jetson Nano card, creating a demonstrator able to learn and predict emotions in real time. Finally, we discuss the relevance of our approach for bias mitigation in artificial intelligence, opening up perspectives towards a more ethical AI.
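Catastrophic forgetting as described above is easy to observe with any incrementally trained model: fit on task A, then on task B alone, and accuracy on A tends to collapse. The sketch below reproduces that effect with scikit-learn's SGDClassifier on synthetic data; it is purely illustrative and unrelated to the Dream Net - Data-Free algorithm itself.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_task(offset, labels, n=400):
    """Two-class toy task located in a different region of feature space."""
    X = rng.normal(size=(n, 5)) + offset
    y = rng.choice(labels, size=n)
    X[:, 0] += (y == labels[1]) * 2.0      # make the task linearly learnable
    return X, y

Xa, ya = make_task(0.0, (0, 1))            # task A: classes 0 vs 1
Xb, yb = make_task(5.0, (2, 3))            # task B: classes 2 vs 3

clf = SGDClassifier(loss="log_loss", random_state=0)
clf.partial_fit(Xa, ya, classes=[0, 1, 2, 3])
print("task A accuracy after learning A:", clf.score(Xa, ya))

for _ in range(20):                        # keep training on task B only
    clf.partial_fit(Xb, yb)
print("task A accuracy after learning B:", clf.score(Xa, ya))  # typically collapses
```

Replay methods counteract this by mixing stored old samples into later updates; the privacy argument in the abstract comes from the fact that a data-free method must achieve the same retention without keeping any of those samples.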
15

Yu, Yao-Tsung (于耀宗). "Design of 4-Channel AI/AO Module Based on Embedded System". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/45742025538110093118.

Full text
Abstract
Master's thesis, 聖約翰科技大學 (St. John's University, Taiwan), Department of Electronic Engineering, ROC academic year 102 (2013-14).
Analog output (AO) and analog input (AI) modules are necessary and important equipment in industrial control. In many automatic control applications, such as lighting, temperature/humidity and motor control, AI modules are often used to process the analog signals captured by external sensors, and AO modules are applied to devices requiring analog signal control. According to one of the commonly used AI/AO specifications for industrial control, voltage 0~5V and current 0~20mA, a 4-channel AI/AO module based on an embedded system is designed and implemented in this thesis. For stability, an embedded system (M-502) with built-in Linux is used as the control platform, and its local bus is the interface that controls the AI/AO module. In addition to 4 stand-alone AI and AO channels, this AI/AO module also employs isolated protection to ensure that no mutual interference exists between external devices and the internal circuits of the module. Finally, the experimental results show that the designed 4-channel AI/AO module works successfully.
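The signal ranges quoted above (0~5V and 0~20mA) map linearly onto the converter codes of the AI/AO channels, so the software-side conversion amounts to a pair of scaling functions. The sketch below assumes 12-bit converters purely for illustration; the thesis does not state the module's actual resolution here.

```python
FULL_SCALE_COUNTS = 4095      # assumed 12-bit ADC/DAC resolution (illustrative only)
V_MAX, I_MAX_MA = 5.0, 20.0   # 0-5 V and 0-20 mA ranges from the abstract

def ai_counts_to_volts(counts: int) -> float:
    """Convert a raw analog-input code to a voltage in the 0-5 V range."""
    return counts / FULL_SCALE_COUNTS * V_MAX

def ao_milliamps_to_counts(milliamps: float) -> int:
    """Convert a requested 0-20 mA output current to a DAC code, with clamping."""
    milliamps = min(max(milliamps, 0.0), I_MAX_MA)
    return round(milliamps / I_MAX_MA * FULL_SCALE_COUNTS)

print(ai_counts_to_volts(2048))        # about 2.50 V at mid-scale
print(ao_milliamps_to_counts(4.0))     # DAC code for a 4 mA output
```

In the module itself these conversions would sit in the firmware behind the local-bus interface, with the isolation circuitry keeping field wiring faults away from the embedded controller.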