Scientific literature on the topic "Edge artificial intelligence"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles

Choose a source:

Browse thematic lists of journal articles, books, theses, conference proceedings, and other academic sources on the topic "Edge artificial intelligence."

Next to every source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the chosen work in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Edge artificial intelligence"

1

Deng, Shuiguang, Hailiang Zhao, Weijia Fang, Jianwei Yin, Schahram Dustdar, and Albert Y. Zomaya. "Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence." IEEE Internet of Things Journal 7, no. 8 (August 2020): 7457–69. http://dx.doi.org/10.1109/jiot.2020.2984887.

2

Edwards, Chris. "Shrinking artificial intelligence." Communications of the ACM 65, no. 1 (January 2022): 12–14. http://dx.doi.org/10.1145/3495562.

3

Zhou, Zhi, Xu Chen, En Li, Liekang Zeng, Ke Luo, and Junshan Zhang. "Edge Intelligence: Paving the Last Mile of Artificial Intelligence With Edge Computing." Proceedings of the IEEE 107, no. 8 (August 2019): 1738–62. http://dx.doi.org/10.1109/jproc.2019.2918951.

4

Chen, Songlin, Hong Wen, and Jinsong Wu. "Artificial Intelligence Based Traffic Control for Edge Computing Assisted Vehicle Networks." 網際網路技術學刊 (Journal of Internet Technology) 23, no. 5 (September 2022): 989–96. http://dx.doi.org/10.53106/160792642022092305007.

Abstract:
Edge computing supported vehicle networks have attracted considerable attention in recent years from both industry and academia due to their extensive applications in urban traffic control systems. We present a general overview of Artificial Intelligence (AI)-based traffic control approaches, focusing mainly on dynamic traffic control via edge computing devices. A collaborative edge computing network embedded in the AI-based traffic control system is proposed to process the massive data from roadside sensors and shorten the real-time response time, which supports efficient traffic control and maximizes the utilization of computing resources according to the incident levels associated with different rescue schemes. Several open research issues and future directions are also discussed.

5

Sathish. "Artificial Intelligence based Edge Computing Framework for Optimization of Mobile Communication." Journal of ISMAC 2, no. 3 (July 9, 2020): 160–65. http://dx.doi.org/10.36548/jismac.2020.3.004.

Abstract:
To improve mobile service quality and accelerate content delivery, edge computing techniques provide an optimal solution for bridging device requirements and cloud capacity at the network edge. Advances in technologies such as edge computing and mobile communication have contributed greatly to these developments. The mobile edge system is equipped with machine learning techniques to improve edge-system intelligence and to optimize communication, caching, and mobile edge computing. For this purpose, a smart framework based on artificial intelligence is developed that reduces the unwanted communication load of the system, enhances applications, and optimizes the system dynamically. The models can be trained more accurately using the learning parameters exchanged between the edge nodes and the collaborating devices. The adaptivity and cognitive ability of the system toward mobile communication are enhanced despite the low learning overhead, helping to attain near-optimal performance. The opportunities and challenges of smart systems in the near future are also discussed in this paper.

6

Michael, James Bret. "Security and Privacy for Edge Artificial Intelligence." IEEE Security & Privacy 19, no. 4 (July 2021): 4–7. http://dx.doi.org/10.1109/msec.2021.3078304.

7

Yoon, Young Hyun, Dong Hyun Hwang, Jun Hyeok Yang, and Seung Eun Lee. "Intellino: Processor for Embedded Artificial Intelligence." Electronics 9, no. 7 (July 18, 2020): 1169. http://dx.doi.org/10.3390/electronics9071169.

Abstract:
The development of computation technology and the artificial intelligence (AI) field has led to AI being applied to various systems. In addition, research on hardware-based AI processors is driving the miniaturization of AI devices. By adapting an AI device to the edge of the Internet of Things (IoT), the system can perform AI operations promptly at the edge and reduce the workload of the system core. As the edge is constrained by the characteristics of the embedded system, implementing hardware that operates at low power within the restricted resources of a processor is necessary. In this paper, we propose intellino, a processor for embedded artificial intelligence. Intellino ensures low-power operation based on optimized AI algorithms and reduces the workload of the system core through the hardware implementation of a neural network. In addition, intellino's dedicated protocol helps the embedded system enhance its performance. We measure intellino's performance, achieving over 95% accuracy, and verify our proposal with field-programmable gate array (FPGA) prototyping.

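The abstract does not spell out intellino's on-chip algorithm, but processors of this class typically implement distance-based classification over stored prototype vectors. As a rough, assumption-laden illustration only (plain NumPy, invented data, not intellino's actual instruction set or protocol):

```python
import numpy as np

def knn_predict(prototypes, labels, x, k=3):
    """Classify x by majority vote among the k nearest stored prototype vectors."""
    distances = np.linalg.norm(prototypes - x, axis=1)
    nearest = labels[np.argsort(distances)[:k]]
    return int(np.bincount(nearest).argmax())

# Toy prototype memory: two clusters labeled 0 and 1.
prototypes = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
labels = np.array([0, 0, 1, 1])
print(knn_predict(prototypes, labels, x=np.array([5.0, 4.0])))  # -> 1
```
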
8

Hu, Gang, and Bo Yu. "Artificial Intelligence and Applications." Journal of Artificial Intelligence and Technology 2, no. 2 (April 5, 2022): 39–41. http://dx.doi.org/10.37965/jait.2022.0102.

Abstract:
Artificial intelligence and machine learning are widely applied across application domains, including computer vision and natural language processing (NLP). We briefly discuss the development of edge detection, which plays an important role in representing salient features in a wide range of computer vision applications. Meanwhile, transformer-based deep models facilitate NLP applications. We introduce two ongoing research projects for the pharmaceutical industry and business negotiation, and we have selected five papers in the related areas for this journal issue.

9

Foukalas, Fotis, and Athanasios Tziouvaras. "Edge Artificial Intelligence for Industrial Internet of Things Applications: An Industrial Edge Intelligence Solution." IEEE Industrial Electronics Magazine 15, no. 2 (June 2021): 28–36. http://dx.doi.org/10.1109/mie.2020.3026837.

10

Debauche, Olivier, Meryem Elmoulat, Saïd Mahmoudi, Sidi Ahmed Mahmoudi, Adriano Guttadauria, Pierre Manneback, and Frédéric Lebeau. "Towards Landslides Early Warning System With Fog-Edge Computing And Artificial Intelligence." Journal of Ubiquitous Systems and Pervasive Networks 15, no. 02 (March 1, 2021): 11–17. http://dx.doi.org/10.5383/juspn.15.02.002.

Abstract:
Landslides are phenomena that cause significant human and economic losses. Researchers have investigated the prediction of high landslide susceptibility with various methodologies based on statistical and mathematical models, in addition to artificial intelligence tools. These methodologies make it possible to determine the areas that could present a serious risk of landslides. Monitoring these risky areas is particularly important for developing an Early Warning System (EWS). As a matter of fact, the variety of landslide types makes their monitoring a sophisticated task to accomplish. Indeed, each landslide area has its own specificities and potential triggering factors; therefore, there is no single device that can monitor all types of landslides. Consequently, Wireless Sensor Networks (WSN) combined with the Internet of Things (IoT) make it possible to set up large-scale data acquisition systems. In addition, recent advances in Artificial Intelligence (AI) and Federated Learning (FL) allow the development of performant algorithms to analyze this data and predict early landslide events at the edge level (on gateways). These algorithms are trained in this case at the fog level on specific hardware. The novelty of the work proposed in this paper is the integration of Federated Learning based on fog-edge approaches to continuously improve prediction models.

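The federated learning loop sketched in this abstract rests on a server-side (here, fog-level) aggregation step. A minimal sketch of federated averaging (FedAvg), assuming NumPy parameter vectors and invented gateway data; this is not the authors' implementation:

```python
import numpy as np

def federated_average(client_params, client_sizes):
    """FedAvg: average client parameter vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))

# Toy example: three edge gateways upload locally trained parameters together
# with the number of sensor observations each model was trained on.
gateway_params = [np.array([0.2, 1.1]), np.array([0.4, 0.9]), np.array([0.3, 1.0])]
gateway_sizes = [120, 80, 200]
global_params = federated_average(gateway_params, gateway_sizes)
print(global_params)  # the fog node redistributes this updated model to the gateways
```
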

Theses on the topic "Edge artificial intelligence"

1

Antonini, Mattia. "From Edge Computing to Edge Intelligence: exploring novel design approaches to intelligent IoT applications." Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/308630.

Abstract:
The Internet of Things (IoT) has deeply changed how we interact with our world. Today, smart homes, self-driving cars, connected industries, and wearables are just a few mainstream applications where IoT plays the role of enabling technology. When IoT became popular, Cloud Computing was already a mature technology able to deliver the computing resources necessary to execute heavy tasks (e.g., data analytics, storage, AI tasks, etc.) on data coming from IoT devices, so practitioners started to design and implement their applications around this approach. However, after a hype that lasted for a few years, cloud-centric approaches have started showing some of their main limitations when dealing with the connectivity of many devices with remote endpoints: high latency, bandwidth usage, big data volumes, reliability, privacy, and so on. At the same time, a few new distributed computing paradigms emerged and gained attention. Among them, Edge Computing makes it possible to shift the execution of applications to the edge of the network (a partition of the network physically close to data sources) and provides improvements over the Cloud Computing paradigm. Its success has been fostered by new powerful embedded computing devices able to satisfy the ever-increasing computing requirements of many IoT applications. Given this context, how can next-generation IoT applications take advantage of the opportunity offered by Edge Computing to shift processing from the cloud toward the data sources and exploit ever-more-powerful devices? This thesis provides the ingredients and the guidelines for practitioners to foster the migration from cloud-centric to novel distributed design approaches for IoT applications at the edge of the network, addressing the issues of the original approach. This requires designing the processing pipeline of applications with the system requirements and the constraints imposed by embedded devices in mind. To make this process smoother, the transition is split into different steps, starting with the offloading of processing (including Artificial Intelligence algorithms) to the edge of the network, then the distribution of computation across multiple edge devices and even closer to data sources based on system constraints, and, finally, the optimization of the processing pipeline and AI models to run efficiently on target IoT edge devices. Each step has been validated by delivering a real-world IoT application that fully exploits the novel approach. This paradigm shift leads the way toward the design of Edge Intelligence IoT applications that efficiently and reliably execute Artificial Intelligence models at the edge of the network.

2

Abernot, Madeleine. "Digital oscillatory neural network implementation on FPGA for edge artificial intelligence applications and learning." Electronic thesis or dissertation, Université de Montpellier (2022-....), 2023. http://www.theses.fr/2023UMONS074.

Abstract:
In the last decades, the multiplication of edge devices in many industry domains has drastically increased the amount of data to treat and the complexity of the tasks to solve, motivating the emergence of probabilistic machine learning algorithms with artificial intelligence (AI) and artificial neural networks (ANNs). However, classical edge hardware systems based on the von Neumann architecture cannot efficiently handle this large amount of data. Thus, novel neuromorphic computing paradigms with distributed memory are explored, mimicking the structure and data representation of biological neural networks. Lately, most neuromorphic-paradigm research has focused on spiking neural networks (SNNs), taking inspiration from signal transmission through spikes in biological networks. In SNNs, information is transmitted through spikes using the time domain to provide natural and low-energy continuous data computation. Recently, oscillatory neural networks (ONNs) have appeared as an alternative neuromorphic paradigm for low-power, fast, and efficient time-domain computation. ONNs are networks of coupled oscillators emulating the collective computational properties of brain areas through oscillations. Recent ONN implementations, combined with the emergence of low-power compact devices, encourage fresh attention to ONNs for edge computing. The state-of-the-art ONN is configured as an oscillatory Hopfield network (OHN) with fully coupled recurrent connections to perform pattern recognition with limited accuracy. However, the large number of OHN synapses limits the scalability of ONN implementations and the scope of ONN applications. The focus of this thesis is to study whether and how ONNs can solve meaningful edge AI applications, using a proof of concept of the ONN paradigm with a digital implementation on FPGA. First, it explores novel learning algorithms for the OHN, unsupervised and supervised, to improve accuracy and to provide continual on-chip learning. Then, it studies novel ONN architectures, taking inspiration from state-of-the-art layered ANN models, to create cascaded OHNs and multi-layer ONNs. The novel learning algorithms and architectures are demonstrated with the digital design performing edge AI applications, from image processing with pattern recognition, image edge detection, feature extraction, and image classification, to robotics applications with obstacle avoidance.

3

Hasanaj, Enis, Albert Aveler, and William Söder. "Cooperative edge deepfake detection." Thesis, Jönköping University, JTH, Avdelningen för datateknik och informatik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-53790.

Abstract:
Deepfakes are an emerging problem in social media, and for celebrities and political figures it can be devastating to their reputation if the technology ends up in the wrong hands. Creating deepfakes is becoming increasingly easy. Attempts have been made at detecting whether a face in an image is real or not, but training these machine learning models can be a very time-consuming process. This research proposes a solution for training deepfake detection models cooperatively on the edge, in order to evaluate whether the training process, among other things, can be made more efficient with this approach. The feasibility of edge training is evaluated by training machine learning models on several different types of iPhone devices. The models are trained using the YOLOv2 object detection system. To test whether the YOLOv2 object detection system is able to distinguish between real and fake human faces in images, several models are trained on a computer, each with either a different number of iterations or a different subset of data, since these metrics have been identified as important to model performance. The performance of the models is evaluated by measuring their accuracy in detecting deepfakes. Additionally, the deepfake detection models trained on a computer are ensembled using the bagging ensemble method, in order to evaluate the feasibility of cooperatively training a deepfake detection model by combining several models. Results show that the proposed solution is not feasible due to the time the training process takes on each mobile device. Additionally, each trained model is about 200 MB, and the size of the ensemble model grows linearly with each model added to the ensemble. This can cause the ensemble model to grow to several hundred gigabytes in size.

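The bagging step described above reduces to a majority vote over independently trained detectors, and the storage blow-up the authors report follows from simple arithmetic. A schematic sketch; the lambdas stand in for the thesis's separately trained YOLOv2 models:

```python
import numpy as np

def bagged_predict(models, image):
    """Majority vote over independently trained detectors (bagging)."""
    votes = [model(image) for model in models]  # each returns 0 (real) or 1 (fake)
    return int(np.mean(votes) >= 0.5)

models = [lambda img: 1, lambda img: 0, lambda img: 1]  # stand-ins for trained detectors
print(bagged_predict(models, image=None))  # -> 1: the majority says "fake"

# The size issue flagged in the abstract: roughly 200 MB per member, growing linearly.
size_mb_per_model, n_models = 200, 50
print(f"ensemble size ~ {size_mb_per_model * n_models / 1024:.1f} GB")  # ~ 9.8 GB
```
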
4

WoldeMichael, Helina Getachew. "Deployment of AI Model inside Docker on ARM-Cortex-based Single-Board Computer: Technologies, Capabilities, and Performance." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17267.

Abstract:
IoT has become tremendously popular. It provides information access, processing, and connectivity for a huge number of devices or sensors. IoT systems, however, often do not process the information locally, but rather send it to remote locations in the Cloud. As a result, they add a huge amount of data traffic to the network and additional delay to data processing. The latter can have a significant impact on applications that require fast response times, such as sophisticated artificial intelligence (AI) applications including augmented reality, face recognition, and object detection. Consequently, the edge computing paradigm, which enables computation of data near its source, has gained significant importance in achieving fast response times in recent years. IoT devices can be employed to provide computational resources at the edge of the network, near the sensors and actuators. The aim of this thesis is to design and implement an edge computing concept that brings AI models to a small embedded IoT device through virtualization. Virtualization technology enables the easy packing and shipping of applications to different hardware platforms, and additionally enables the mobility of AI models between edge devices and the Cloud. We implement an AI model inside a Docker container, which is deployed on a Firefly-RK3399 single-board computer (SBC), and conduct CPU and memory performance evaluations of Docker on the Firefly-RK3399. The methodology adopted to reach our goal is experimental research. First, different literature was studied to demonstrate by implementation the feasibility of our concept. We then set up an experiment covering the measurement of performance metrics under synthetic load in multiple scenarios; results were validated by repeating the experiment and performing statistical analysis. The results of this study show that an AI model can successfully be deployed and executed inside a Docker container on an ARM-Cortex-based single-board computer. A Docker image of the OpenFace face recognition model was built for the ARM architecture of the Firefly SBC. On the other hand, the performance evaluation reveals that the overhead of Docker in terms of CPU and memory is negligible. This work describes the mechanisms by which AI applications can be containerized on the ARM architecture. We conclude that the methods can be applied to containerize software applications on ARM-based IoT devices, and that the insignificant overhead introduced by Docker facilitates the deployment of applications inside containers with little performance cost. The functionality of the Firefly-RK3399 IoT device is exploited in this thesis; the device is shown to be capable and powerful, providing an insight for further studies.

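The deployment pattern the thesis describes, an AI model packaged in a Docker container on an ARM single-board computer, can be driven from Python with the Docker SDK. A hedged sketch: the image tag and command are placeholders, not the names actually used in the thesis:

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# Run one containerized inference pass; assumes an ARM-built image of the model
# (a hypothetical tag here) is already present on the single-board computer.
logs = client.containers.run(
    image="openface-arm:latest",              # placeholder image name
    command="python3 classify.py probe.jpg",  # placeholder entry command
    remove=True,                              # delete the container afterwards
)
print(logs.decode())
```
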
5

Peluso, Valentino. "Optimization Tools for ConvNets on the Edge." Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2845792.

6

Laroui, Mohammed. "Distributed edge computing for enhanced IoT devices and new generation network efficiency." Electronic thesis or dissertation, Université Paris Cité, 2022. http://www.theses.fr/2022UNIP7078.

Abstract:
Traditional cloud infrastructure faces a series of challenges due to the centralization of computing, storage, and networking in a small number of data centers, and due to the long distance between connected devices and remote data centers. To meet this challenge, edge computing is a promising approach that provides resources closer to IoT devices. In the cloud computing model, compute resources and services are often centralized in large data centers that end users access over the network. This model has important economic value and efficient resource-sharing capabilities. New forms of end-user experience, such as the Internet of Things, require computing resources near the end-user devices at the network edge. To meet this need, edge computing relies on a model in which computing resources are distributed to the edge of the network as needed, while decentralizing data processing from the cloud to the edge as far as possible, making it possible to obtain actionable information quickly from data that varies over time. In this thesis, we propose novel optimization models to optimize resource utilization at the network edge for two edge computing research directions: service offloading and vehicular edge computing. We study different use cases in each research direction. First, for service offloading we propose optimal algorithms for service placement at the network edge (tasks, Virtual Network Functions (VNF), Service Function Chains (SFC)) that take computing resource constraints into account. Moreover, for vehicular edge computing, we propose exact models for maximizing the coverage of vehicles by both taxis and Unmanned Aerial Vehicles (UAV) for online video streaming applications, and we propose optimal edge-autopilot VNF offloading at the network edge for autonomous driving. The evaluation results show the efficiency of the proposed algorithms in small-scale networks in terms of time, cost, and resource utilization. To deal with dense networks with a large number of devices and with scalability issues, we propose large-scale algorithms that support a huge number of devices, data, and user requests. Heuristic algorithms are proposed for SFC orchestration and maximum coverage by mobile edge servers (vehicles). Moreover, artificial intelligence algorithms (machine learning, deep learning, and deep reinforcement learning) are used for 5G VNF slice placement, edge-autopilot VNF placement, and autonomous UAV navigation. The numerical results are good compared with exact algorithms, with high time efficiency.

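For the coverage-maximization problems mentioned in the abstract, exact models are typically complemented by the classical greedy heuristic, which repeatedly picks the candidate covering the most not-yet-covered vehicles. A minimal sketch with invented data, not the thesis's exact formulation:

```python
def greedy_max_coverage(coverage, k):
    """Greedy (1 - 1/e)-approximation for placing k mobile edge servers."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(coverage, key=lambda c: len(coverage[c] - covered))
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Toy instance: the set of vehicles each candidate UAV position would cover.
coverage = {"uav_a": {1, 2, 3}, "uav_b": {3, 4}, "uav_c": {4, 5, 6}}
print(greedy_max_coverage(coverage, k=2))  # -> (['uav_a', 'uav_c'], {1, 2, 3, 4, 5, 6})
```
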
7

Mazzia, Vittorio. "Machine Learning Algorithms and their Embedded Implementation for Service Robotics Applications." Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2968456.

8

Labouré Iooss, Marie-José. "Faisabilité d'une carte électronique d'opérateurs de seuillage : déformation d'objets plans lors de transformations de type morphologique" [Feasibility of an electronic thresholding-operator board: deformation of planar objects under morphological transformations]. Thesis, Saint-Etienne, 1987. http://www.theses.fr/1987STET4014.

Abstract:
A study of image segmentation, and more particularly of image thresholding, together with the classification of planar shapes. Numerous algorithms are presented, most of them based on knowledge of the gray-level histogram. An electronic thresholding board was developed, and original contour-detection methods are also described. In a second part, a study of the deformation of planar objects after successive dilations is presented.

9

Busacca, Fabio Antonino. "AI for Resource Allocation and Resource Allocation for AI: a two-fold paradigm at the network edge." Doctoral thesis, Università degli Studi di Palermo, 2022. https://hdl.handle.net/10447/573371.

Abstract:
5G-and-beyond and Internet of Things (IoT) technologies are pushing a shift from the classic cloud-centric view of the network to a new edge-centric vision. In this perspective, computation, communication, and storage resources are moved closer to the user, to the benefit of network responsiveness/latency and of improved context-awareness, that is, the ability to tailor network services to the user's live experience. However, these improvements do not come for free: edge networks are highly constrained and do not match the resource abundance of their cloud counterparts. The proper management of the few available resources is therefore of crucial importance to improve network performance in terms of responsiveness, throughput, and power consumption. However, networks in the so-called Age of Big Data result from the dynamic interactions of massive amounts of heterogeneous devices. As a consequence, traditional model-based resource allocation algorithms fail to cope with these dynamic and complex networks and are being replaced by more flexible AI-based techniques. In this way, it is possible to design intelligent resource allocation frameworks able to quickly adapt to the ever-changing dynamics of the network edge and to best exploit the few available resources. Hence, Artificial Intelligence (AI), and more specifically Machine Learning (ML) techniques, can clearly play a fundamental role in boosting and supporting resource allocation techniques at the edge. But can AI/ML benefit from optimal resource allocation? Recently, the evolution towards Distributed and Federated Learning approaches, where the learning process takes place in parallel on several devices, has brought important advantages in terms of the computational load of ML algorithms, the amount of information transmitted by network nodes, and privacy. However, the scarcity of energy, processing, and possibly communication resources at the edge, especially in the IoT case, calls for proper resource management frameworks: the available resources should be assigned so as to reduce learning time, while also keeping an eye on the energy consumption of the network nodes. From this perspective, a two-fold paradigm can emerge at the network edge, where AI boosts the performance of resource allocation and, vice versa, optimal resource allocation techniques speed up the learning process of AI algorithms. Part I of this thesis explores the first topic, the use of AI to support resource allocation at the edge, with a specific focus on two use cases: UAV-assisted cellular networks and vehicular networks. Part II deals with resource allocation for AI and, specifically, with the integration of Federated Learning techniques with the LoRa LPWAN protocol. The designed integration framework has been validated both in simulation environments and, most importantly, on the Colosseum platform, the biggest channel emulator in the world.


Books on the topic "Edge artificial intelligence"

1

Vermesan, Ovidiu, and Dave Marples. Advancing Edge Artificial Intelligence. New York: River Publishers, 2024. http://dx.doi.org/10.1201/9781003478713.

2

Srivatsa, Mudhakar, Tarek Abdelzaher, and Ting He, eds. Artificial Intelligence for Edge Computing. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-40787-1.

3

Misra, Sanjay, Amit Kumar Tyagi, Vincenzo Piuri, and Lalit Garg, eds. Artificial Intelligence for Cloud and Edge Computing. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-80821-1.

4

Blondie24: Playing at the Edge of AI. San Francisco, Calif.: Morgan Kaufmann, 2002.

5

Curtis, Anthony R. Space Almanac: Facts, figures, names, dates, places, lists, charts, tables, maps covering space from Earth to the edge of the universe. Woodsboro, Md.: Arcsoft, 1989.

6

Shi, Yong. Cutting-Edge Research Topics on Multiple Criteria Decision Making: 20th International Conference, MCDM 2009, Chengdu/Jiuzhaigou, China, June 21–26, 2009, Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009.

7

Edges of Reality: Mind vs. Computer. New York: Insight Books, 1996.

8

Mobile Edge Artificial Intelligence. Elsevier, 2022. http://dx.doi.org/10.1016/c2020-0-00624-9.

9

Cutting-Edge Artificial Intelligence. Lerner Publishing Group, 2018.


Book chapters on the topic "Edge artificial intelligence"

1

Wang, Xiaofei, Yiwen Han, Victor C. M. Leung, Dusit Niyato, Xueqiang Yan, and Xu Chen. "Fundamentals of Artificial Intelligence." In Edge AI, 33–47. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6186-3_3.

2

Wang, Xiaofei, Yiwen Han, Victor C. M. Leung, Dusit Niyato, Xueqiang Yan, and Xu Chen. "Artificial Intelligence Applications on Edge." In Edge AI, 51–63. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6186-3_4.

3

Wang, Xiaofei, Yiwen Han, Victor C. M. Leung, Dusit Niyato, Xueqiang Yan, and Xu Chen. "Artificial Intelligence Inference in Edge." In Edge AI, 65–76. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6186-3_5.

4

Wang, Xiaofei, Yiwen Han, Victor C. M. Leung, Dusit Niyato, Xueqiang Yan, and Xu Chen. "Artificial Intelligence Training at Edge." In Edge AI, 77–95. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6186-3_6.

5

Wang, Xiaofei, Yiwen Han, Victor C. M. Leung, Dusit Niyato, Xueqiang Yan, and Xu Chen. "Edge Computing for Artificial Intelligence." In Edge AI, 97–115. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6186-3_7.

6

Wang, Xiaofei, Yiwen Han, Victor C. M. Leung, Dusit Niyato, Xueqiang Yan, and Xu Chen. "Artificial Intelligence for Optimizing Edge." In Edge AI, 117–34. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-6186-3_8.

7

Purice, Dinu, Francesco Barchi, Thorsten Röder, and Claus Lenz. "Edge AI Lifecycle Management." In Advancing Edge Artificial Intelligence, 43–63. New York: River Publishers, 2024. http://dx.doi.org/10.1201/9781003478713-2.

8

Xu, Lamei. "Designing Blended Learning Activities in the Era of Artificial Intelligence." In Edge Computing – EDGE 2023, 37–45. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-51826-3_4.

9

Meng, Fanrong, Wei Lin, and Zhixiao Wang. "Space Edge Detection Based SVM Algorithm." In Artificial Intelligence and Computational Intelligence, 656–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23896-3_81.

10

Tsolkas, Dimitris, Harilaos Koumaras, Anastasios-Stavros Charismiadis, and Andreas Foteas. "Artificial Intelligence in 5G and Beyond Networks." In Applied Edge AI, 73–103. Boca Raton: Auerbach Publications, 2022. http://dx.doi.org/10.1201/9781003145158-4.


Conference proceedings on the topic "Edge artificial intelligence"

1

Goswami, Siddharth, and Sachin Sharma. "DNA Sequencing using Artificial Intelligence." In 2022 International Conference on Edge Computing and Applications (ICECAA). IEEE, 2022. http://dx.doi.org/10.1109/icecaa55415.2022.9936101.

2

Wang, Dong, Daniel Zhang, Yang Zhang, Md Tahmid Rashid, Lanyu Shang, and Na Wei. "Social Edge Intelligence: Integrating Human and Artificial Intelligence at the Edge." In 2019 IEEE First International Conference on Cognitive Machine Intelligence (CogMI). IEEE, 2019. http://dx.doi.org/10.1109/cogmi48466.2019.00036.

3

Glavan, Alina Florina, and Constantin Viorel Marian. "Cognitive edge computing through artificial intelligence." In 2020 13th International Conference on Communications (COMM). IEEE, 2020. http://dx.doi.org/10.1109/comm48946.2020.9142010.

4

Kum, Seungwoo, Youngkee Kim, Domenico Siracusa, and Jaewon Moon. "Artificial Intelligence Service Architecture for Edge Device." In 2020 IEEE 10th International Conference on Consumer Electronics (ICCE-Berlin). IEEE, 2020. http://dx.doi.org/10.1109/icce-berlin50680.2020.9352184.

5

Rawat, Yash, Yash Gupta, Garima Khothari, Amit Mittal, and Devendra Rautela. "The Role of Artificial Intelligence in Biometrics." In 2023 2nd International Conference on Edge Computing and Applications (ICECAA). IEEE, 2023. http://dx.doi.org/10.1109/icecaa58104.2023.10212224.

6

Yang, Bo. "Spatial Intelligence in Edge Cognitive Computing." In 2023 IEEE Conference on Artificial Intelligence (CAI). IEEE, 2023. http://dx.doi.org/10.1109/cai54212.2023.00024.

7

Manmatha, R. "Edge Detection To Subpixel Accuracy." In Applications of Artificial Intelligence V, edited by John F. Gilmore. SPIE, 1987. http://dx.doi.org/10.1117/12.940627.

8

Qu, Jingwei, Haibin Ling, Chenrui Zhang, Xiaoqing Lyu, and Zhi Tang. "Adaptive Edge Attention for Graph Matching with Outliers." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/134.

Abstract:
Graph matching aims at establishing correspondences between the node sets of given graphs while keeping the consistency between their edge sets. However, outliers in practical scenarios and equivalent learning of edge representations in deep learning methods are still challenging. To address these issues, we present an Edge Attention-adaptive Graph Matching (EAGM) network and a novel description of edge features. EAGM transforms the matching relation between two graphs into a node and edge classification problem over their assignment graph. To explore the potential of edges, EAGM learns edge attention on the assignment graph to 1) reveal the impact of each edge on graph matching, and 2) adjust the learning of edge representations adaptively. To alleviate issues caused by outliers, we describe an edge by aggregating the semantic information over the space spanned by the edge. Such rich information provides clear distinctions between different edges (e.g., inlier-inlier edges vs. inlier-outlier edges), which further distinguishes outliers in view of their associated edges. Extensive experiments demonstrate that EAGM achieves promising matching quality compared with the state of the art, in cases both with and without outliers. Our source code along with the experiments is available at https://github.com/bestwei/EAGM.

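The reformulation EAGM relies on, matching as classification over an assignment graph, is easy to make concrete: every candidate correspondence becomes a node, and edges connect candidates that can coexist in a one-to-one matching. A schematic construction only; the actual network then learns attention over these edges and classifies nodes and edges:

```python
import itertools

def assignment_graph(n1, n2):
    """Build the assignment graph for two graphs with n1 and n2 nodes."""
    # Each node is a candidate correspondence (i, a): node i of graph 1 <-> node a of graph 2.
    nodes = list(itertools.product(range(n1), range(n2)))
    # Edges connect candidates sharing neither a graph-1 node nor a graph-2 node,
    # i.e., pairs of correspondences that a one-to-one matching could contain together.
    edges = [(u, v) for u, v in itertools.combinations(range(len(nodes)), 2)
             if nodes[u][0] != nodes[v][0] and nodes[u][1] != nodes[v][1]]
    return nodes, edges

nodes, edges = assignment_graph(3, 3)
print(len(nodes), len(edges))  # 9 candidate correspondences, 18 compatibility edges
```
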
9

Banjanović-Mehmedović, Lejla, and Anel Husaković. "Edge AI: Reshaping the Future of Edge Computing with Artificial Intelligence." In Basic Technologies and Models for Implementation of Industry 4.0. Academy of Sciences and Arts of Bosnia and Herzegovina, 2023. http://dx.doi.org/10.5644/pi2023.209.07.

Abstract:
This paper highlights the growing importance of edge computing and the need for AI techniques to enable intelligent processing at the edge. Edge computing has emerged as a paradigm shift that brings data processing and storage closer to the source, minimizing the need to transmit large volumes of data to remote locations. The integration of AI capabilities at the edge enables intelligent, real-time decision-making on resource-constrained devices. This paper discusses the significance of Edge AI across various domains, including automotive applications, smart homes, industrial IoT, and healthcare. By leveraging AI algorithms on edge devices, efficient implementation and deployment become possible, leading to improved latency, privacy, and security. The various AI techniques used in edge computing are presented, including machine learning, deep learning, reinforcement learning, and transfer learning. As AI continues to play a pivotal role in driving edge computing, the integration of hardware accelerators and software platforms is gaining utmost significance for efficiently running inference models. A variety of popular options have emerged to accelerate AI at the edge, notable among them NVIDIA Jetson, Intel Movidius Myriad X, and Google Coral Edge TPU. The importance of specialized System-on-a-Chip (SoC) solutions for Edge AI, capable of supporting high-performance video, voice, and vision processing alongside integrated AI accelerators, is presented as well. By examining the transformative potential of Edge AI, this paper aims to inspire researchers, practitioners, and industry professionals to explore the vast possibilities of integrating AI at the edge. With Edge AI reshaping the future of edge computing, intelligent decision-making becomes seamlessly integrated into our daily lives, driving advancements across various sectors.

10

Khare, Aryan, Ujjwal Kumar Singh, Samta Kathuria, Shaik Vaseem Akram, Manish Gupta, and Navjot Rathor. "Artificial Intelligence and Blockchain for Copyright Infringement Detection." In 2023 2nd International Conference on Edge Computing and Applications (ICECAA). IEEE, 2023. http://dx.doi.org/10.1109/icecaa58104.2023.10212277.


Organization reports on the topic "Edge artificial intelligence"

1

Hwang, Tim, and Emily Weinstein. Decoupling in Strategic Technologies: From Satellites to Artificial Intelligence. Center for Security and Emerging Technology, July 2022. http://dx.doi.org/10.51593/20200085.

Abstract:
Geopolitical tensions between the United States and China have sparked an ongoing dialogue in Washington about the phenomenon of “decoupling”—the use of public policy tools to separate the multifaceted economic ties that connect the two powers. This issue brief provides a historical lens on the efficacy of one specific aspect of this broader decoupling phenomenon: using export controls and related trade policies to prevent a rival from acquiring the equipment and know-how to catch up to the United States in cutting-edge, strategically important technologies.
2

Perdigão, Rui A. P. Course on Nonlinear Frontiers: From Dynamical Systems, Information and Complexity to Cutting-Edge Physically Cognitive Artificial Intelligence. Meteoceanics, February 2021. http://dx.doi.org/10.46337/uc.210211.

3

Hunt, Will, and Owen Daniels. Sustaining and Growing the U.S. Semiconductor Advantage: A Primer. Center for Security and Emerging Technology, June 2022. http://dx.doi.org/10.51593/20220006.

Abstract:
As an integral player in advanced semiconductor supply chains, the United States enjoys advantages over China in producing and accessing chips for artificial intelligence and other leading-edge computing technologies. However, a lack of domestic production capacity threatens U.S. semiconductor access. The United States can strengthen its advantages by working with allies and partners to prevent China from producing leading-edge chips and by reshoring its own domestic chipmaking capacity.
4

Cary, Dakota. China's CyberAI Talent Pipeline. Center for Security and Emerging Technology, July 2021. http://dx.doi.org/10.51593/2020ca017.

Abstract:
To what extent does China’s cultivation of talent in cybersecurity and AI matter in terms of competitiveness with other countries? Right now, it seems to have an edge: China’s 11 World-Class Cybersecurity Schools offer more classes on artificial intelligence and machine learning than do the 20 U.S. universities certified as Centers of Academic Excellence in Cyber Operations. This policy brief recommends tracking 13 research grants from the National Science Foundation that attempt to integrate AI into cybersecurity curricula.
5

Chahal, Husanjot, Helen Toner, and Ilya Rahkovsky. Small Data's Big AI Potential. Center for Security and Emerging Technology, September 2021. http://dx.doi.org/10.51593/20200075.

Abstract:
Conventional wisdom suggests that cutting-edge artificial intelligence is dependent on large volumes of data. An overemphasis on “big data” ignores the existence—and underestimates the potential—of several AI approaches that do not require massive labeled datasets. This issue brief is a primer on “small data” approaches to AI. It presents exploratory findings on the current and projected progress in scientific research across these approaches, which country leads, and the major sources of funding for this research.
6

Luong, Ngor, Rebecca Gelles, and Melissa Flagg. Mapping the AI Investment Activities of Top Global Defense Companies. Center for Security and Emerging Technology, October 2021. http://dx.doi.org/10.51593/20210015.

Abstract:
Militaries around the world have often relied on the largest global defense companies to acquire and integrate cutting-edge technologies. This issue brief examines the artificial intelligence investment and mergers-and-acquisitions activities of the top 50 global defense companies (a key, if limited, approach to accessing AI innovation in the commercial sector), assesses the investment trends of their corporate venture capital subsidiaries, and offers a geographic breakdown of the defense companies and their AI target companies.

7

Gehlhaus, Diana, and Santiago Mutis. The U.S. AI Workforce: Understanding the Supply of AI Talent. Center for Security and Emerging Technology, January 2021. http://dx.doi.org/10.51593/20200068.

Abstract:
As the United States seeks to maintain a competitive edge in artificial intelligence, the strength of its AI workforce will be of paramount importance. In order to understand the current state of the domestic AI workforce, Diana Gehlhaus and Santiago Mutis define the AI workforce and offer a preliminary assessment of its size, composition, and key characteristics. Among their findings: The domestic supply of AI talent consisted of an estimated 14 million workers (or about 9% of total U.S. employment) as of 2018.
8

David, Aharon. Controlling Aircraft—From Humans to Autonomous Systems: The Fading Humans. Warrendale, PA: SAE International, July 2023. http://dx.doi.org/10.4271/epr2023014.

Abstract:
While being the first to fly, the Wright Brothers were also the first and last complete "one-stop shop" of aviation: the only case in human flight in which the same individuals personally carried out the research, development, testing, manufacturing, operation, maintenance, air control, flight simulation, training, setup, operation, and more. Since then, these facets gradually fragmented and drifted away from the aircraft. This report discusses the phenomenon of aircraft operation's "fading humans," including the development of flight instruments to support it, its growing automation, the emerging artificial intelligence paradigm, and the cyber threats lurking all over the place. Controlling Aircraft—From Humans to Autonomous Systems: The Fading Humans examines the "fading" process itself, including its safety aspects, current mitigation efforts, ongoing research, and the unsettled topics that still remain.

9

Ruvinsky, Alicia, Timothy Garton, Daniel Chausse, Rajeev Agrawal, Harland Yu, and Ernest Miller. Accelerating the tactical decision process with High-Performance Computing (HPC) on the edge: motivation, framework, and use cases. Engineer Research and Development Center (U.S.), September 2021. http://dx.doi.org/10.21079/11681/42169.

Abstract:
Managing the ever-growing volume and velocity of data across the battlefield is a critical problem for warfighters. Solving this problem will require a fundamental change in how battlefield analyses are performed. A new approach to making decisions on the battlefield will eliminate data transport delays by moving the analytical capabilities closer to data sources. Decision cycles depend on the speed at which data can be captured and converted to actionable information for decision making. Real-time situational awareness is achieved by locating computational assets at the tactical edge. Accelerating the tactical decision process leverages capabilities in three technology areas: (1) High-Performance Computing (HPC), (2) Machine Learning (ML), and (3) the Internet of Things (IoT). Exploiting these areas can reduce network traffic and shorten the time required to transform data into actionable information. Faster decision cycles may revolutionize battlefield operations. Presented is an overview of an artificial intelligence (AI) system design for near-real-time analytics in a tactical operational environment executing on co-located, mobile HPC hardware. The report contains the following sections: (1) an introduction describing the motivation, background, and state of the technology; (2) a description of the tactical decision process leveraging HPC, with problem definition and use case; and (3) the design of an HPC tactical data analytics framework enabling data-to-decisions.

10

Khan, Samir. Towards MRO 4.0: Challenges for Digitalization and Mapping Emerging Technologies. Warrendale, PA: SAE International, April 2023. http://dx.doi.org/10.4271/epr2023007.

Abstract:
With technological breakthroughs in electric land vehicles revolutionizing their respective industry, maintenance, repair, and overhaul (MRO) facilities in aviation are also adopting digital technologies in their practices. But despite this drive towards digitalization, the industry is still dominated by manual labor and subjective assessments. Today, several technologies, processes, and practices are being championed to resolve some of these outstanding challenges. Considering this, it is important to present current perspectives regarding where the technology stands today and how we can evaluate capabilities for autonomous decision support systems that prescribe maintenance activities. Overlooking some of these unsettled domain issues can potentially undermine any benefits in speed, process, and resilience promised by such systems. Towards MRO 4.0: Challenges for Digitalization and Mapping Emerging Technologies provides some understanding of specific motivating factors by focusing on the digitalization challenges for MRO 4.0 and the role of building "trust" in technology by reimagining stakeholder experiences. It examines overarching issues such as data management, robotics, optimization, artificial intelligence, and systems engineering.